Don’t just pay lip service to running experiments

Startups are experiment machines. In the early days you are testing hypotheses, market demand, and pricing: everything is an experiment. The problem is that most startups don't approach the experiments they are running with any degree of rigor.

Founders tend to think a failed experiment is one that hasn't given the desired result. This is wrong. Any experiment that gives an accurate result, whether we like it or not, is a successful experiment. The only failed experiment is one that hasn't given an accurate result.

Rigor is key to running a successful experiment. A lack of rigor leads to a host of problems: running an experiment for too long because we don't like the answer, silently moving the goalposts, or searching for data to fit our idea rather than letting the data shape our idea.

Introducing a Base Level of Rigor

Write down the hypothesis you are trying to prove and the constraints of the experiment (how much time, effort and cash are you willing to spend proving it?).

What results will indicate that your hypothesis was proven and what results will show that it was disproven?

If the experiment produces a result that lies between the two, then it's a failed experiment that needs to be either extended or abandoned.

Example: Experiment to test market demand using Google AdWords

Hypothesis: There is market demand for my product that can be reached through Google AdWords

Constraints: €500 and two weeks

Proven: 20+ sign-ups

Disproven: Fewer than 5 sign-ups

If the experiment results in between 5 and 20 sign-ups, then we don't have a conclusive result. We can extend the experiment or decide that it's no longer worth pursuing.
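The example above can be captured in a few lines of code. This is just a sketch to show the discipline: the class name, fields, and thresholds are illustrative, not a prescribed tool. The point is that "proven", "disproven", and "inconclusive" are decided by criteria written down before the experiment runs.

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    hypothesis: str
    budget_eur: int        # constraint: cash we are willing to spend
    duration_weeks: int    # constraint: time we are willing to spend
    proven_at: int         # sign-ups at or above this count prove the hypothesis
    disproven_below: int   # sign-ups below this count disprove it

    def evaluate(self, signups: int) -> str:
        """Judge the result against the criteria fixed up front."""
        if signups >= self.proven_at:
            return "proven"
        if signups < self.disproven_below:
            return "disproven"
        # In-between zone: a failed experiment, to be extended or abandoned
        return "inconclusive"

exp = Experiment(
    hypothesis="There is market demand reachable through Google AdWords",
    budget_eur=500,
    duration_weeks=2,
    proven_at=20,
    disproven_below=5,
)

print(exp.evaluate(23))  # proven
print(exp.evaluate(3))   # disproven
print(exp.evaluate(12))  # inconclusive
```

Writing the criteria down in this form makes it impossible to quietly reinterpret a 12-sign-up result as a win after the fact.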

Startups need to be fluid, and it is expected that you will need to change your criteria during or even after an experiment. However, if those criteria have been written down, then at least when you change them you will do so knowingly. Not writing them down guarantees over- or under-investing in experiments, shifting goalposts, and unclear communication with stakeholders, and it opens you up to a whole host of cognitive biases.

Don’t just pay lip service to running experiments – actually run experiments. WRITE IT DOWN!
