Experimental Mind: even great ideas are hard to scale

Your weekly roundup of interesting reads around experimentation is back from a short break to spend some time with family. Enjoy the reads!

PS: final call for joining Ronny Kohavi’s new class, where he will teach you everything he knows about A/B testing in five two-hour sessions. The first session is May 31. Subscribe via this link to receive a $50 Amazon gift card.


🔎 What I’ve been reading & watching

Why it’s so hard to scale a great idea

Why do some products, companies, and social programs thrive as they grow while others peter out? According to John List, there are five causes:

  1. Misinterpreting the evidence or data;
  2. An unrepresentative study population;
  3. Non-negotiables that can’t be grown or replicated;
  4. Negative spillovers, or unintended outcomes; and
  5. Cost traps.

Here, he explains and offers examples of each cause, as well as how to anticipate or avoid them.

Experimentation Culture Awards 2022

I’ll be one of the jury members at the 2022 Experimentation Culture Awards. Join the award session and listen to inspiring stories.

Ben Labay shares his thoughts about good metrics for a CRO/Experimentation program:

Most of your A/B tests will fail (and that's OK)

Latest post by Oliver Palmer:

The vast majority of experiments don’t do anything at all, so make sure you plan, budget and manage expectations accordingly

Statistical significance clearly explained

Deborah O’Malley and Timothy Chan wrote an article that tries to explain core statistical concepts in plain English. I think they succeeded, although there was some discussion about the correct definition of a p-value.
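The definition they land on is the textbook one: a p-value is the probability, assuming the null hypothesis (“no difference”) is true, of seeing a result at least as extreme as the one observed. A minimal sketch of that idea, using a permutation test with made-up conversion numbers:

```python
import random

random.seed(0)
# Hypothetical data: conversions out of 1,000 visitors per arm
n = 1000
a, b = 100, 125
observed_diff = b / n - a / n

# Under the null, the arm labels are arbitrary: pool all outcomes,
# reshuffle them many times, and see how often a difference at least
# as large as the observed one arises by chance alone.
outcomes = [1] * (a + b) + [0] * (2 * n - a - b)
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(outcomes)
    diff = sum(outcomes[:n]) / n - sum(outcomes[n:]) / n  # "B" minus "A"
    if diff >= observed_diff:
        count += 1

p_value = count / trials  # share of null worlds as extreme as what we saw
```

The p-value is not the probability that the variant wins; it only measures how surprising the data would be in a world where nothing changed.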

Building the future of A/B testing at carwow
Our product teams are filled with brilliant people, aligned to a world-class product development strategy; however, we do not always know what is best for our customers. A humbling moment in every…

Frequentist and Bayesian frameworks lead to identical inferences

There’s an ongoing stats holy war in experimentation between Frequentists and Bayesians. Joshua Hanson:

… ran a simulation of 25k experiments. For each of these simulations, I recorded the p-value from a one-sided t-test and the Bayesian probability that the variant is at least as good as the control. The results are plotted below, making it very easy to see that the Bayesian probability is perfectly predictive of the p-value. 
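A minimal sketch (not Hanson’s actual code) of why the two line up: under a flat prior and a normal approximation, the Bayesian probability that B beats A and the one-sided p-value are the same number viewed from opposite sides. The conversion rates and sample size below are made up for illustration:

```python
import math
import random

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

random.seed(42)
n = 10_000
conv_a = sum(random.random() < 0.10 for _ in range(n))  # control, true rate 10%
conv_b = sum(random.random() < 0.11 for _ in range(n))  # variant, true rate 11%

pa, pb = conv_a / n, conv_b / n
se = math.sqrt(pa * (1 - pa) / n + pb * (1 - pb) / n)
z = (pb - pa) / se

p_value = 1 - norm_cdf(z)    # one-sided test of H1: "B beats A"
prob_b_better = norm_cdf(z)  # flat-prior, normal-approx posterior P(B > A)

# The two always sum to 1 under these approximations:
# the frameworks are reading the same evidence off the same curve.
```

So a “95% probability to beat control” and a one-sided p-value of 0.05 are, in this setting, the same statement in different dialects.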

Manuel Da Costa: Where do you see #experimentation and #CRO in 5 years time?

Some fun predictions in this thread. What are your predictions?

🚀 Job opportunities

All featured roles:

Also check out the job board for more open roles in the field of experimentation, CRO and analytics.

📅 Upcoming events

Upcoming events over the coming months:

See the full overview of events to find 15+ other events and conferences for you and your team.

💬 Quote of the week

The CEO of Spotify talked experimentation in their earnings call:

“At Spotify, we are constantly testing and experimenting. And in Q1 alone, we ran almost 2,000 experiments, which is a 5% increase over the previous quarter. Some of those experiments led to full global product launches like the new updates and campaigns we rolled out for Blend, which drove 17x more new user registration than even our annual rap campaign. And in the first 20 days of the Blend campaign, we had 22 million users create Blend playlists. And we are also seeing incredible user engagement worldwide with over 60% of streams coming from Gen Z listeners on Blend. And these results are exactly the types of outcomes we aim to drive and we will continue to aggressively experiment with further user improvements.” — Daniel Ek, CEO Spotify in their Q1 2022 earnings call (source)

☕ Thanks for reading

Enjoying the Experimental Mind newsletter and want to say thank you? Buy me a coffee (or a beer). Or share this newsletter with a friend or colleague.

Have a great week — and keep experimenting.

Kevin