Turn assumptions into hypotheses

For a value-based design process

Richard Simms



It's a good habit to be curious about all aspects of the product and to ask the five whys. We all have unfounded beliefs, and these can cause us to miss the obvious or take an unvalidated assumption for granted, which may kick us in the arse later.

There are different types of assumptions: core assumptions, which must be true for your solution to work; unknown assumptions, which must be understood to reduce possible risk; and risky assumptions, which would cause the project to fail if proven wrong.
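As a rough illustration (my own sketch, not part of the original process), you could keep track of these categories in a lightweight structure so nothing gets lost between workshops:

```typescript
// Hypothetical sketch: one way to track assumptions by category.
// The category names mirror the ones above; the example statements are illustrative.
type AssumptionCategory = "core" | "unknown" | "risky";

interface Assumption {
  statement: string;           // the belief, written out plainly
  category: AssumptionCategory;
  validated: boolean;          // has it been tested yet?
}

const assumptions: Assumption[] = [
  { statement: "Users want to compare plans before signing up", category: "core", validated: false },
  { statement: "Most traffic arrives on mobile", category: "unknown", validated: false },
  { statement: "Customers will pay monthly rather than annually", category: "risky", validated: false },
];
```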

Assumptions may start as naive statements, but it's this vulnerability that will help us form a good question. We can turn these assumptions into questions through a technique from IDEO called How Might We…, phrasing each question as an open-ended statement to avoid baking in a solution. These questions are opportunities that should align back to the desired outcome and goal as defined by the product vision.

Stand back from the questions and you will start to see similarities and connections. By taking a 10,000-foot view of your assumptions, you will start to have ideas for combining the How Might We's, forming potential solutions and organising them into groups to review. It's especially helpful to do this with your team so you gain input from more people's points of view. When it comes to choosing which question to tackle first, think about which is the riskiest assumption that would derail the whole project, alongside those that would have the greatest impact on your product or bring the most value, as sketched below.
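One way to make that prioritisation concrete (again, an illustrative sketch, not a prescribed tool) is to score each How Might We question for risk and impact and sort on the combination:

```typescript
// Hypothetical sketch: score each "How Might We" question by risk and impact
// (1–5 each) and tackle the highest combined score first.
interface HowMightWe {
  question: string;
  risk: number;    // how badly a wrong assumption here would derail the project
  impact: number;  // how much value answering it could bring
}

function prioritise(questions: HowMightWe[]): HowMightWe[] {
  // Sort descending by risk + impact; ties keep their original order.
  return [...questions].sort((a, b) => (b.risk + b.impact) - (a.risk + a.impact));
}

const ordered = prioritise([
  { question: "How might we help visitors compare plans quickly?", risk: 4, impact: 5 },
  { question: "How might we reduce sign-up friction on mobile?", risk: 2, impact: 4 },
]);
console.log(ordered[0].question); // the riskiest, highest-impact question comes first
```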

After the prioritisation, it's time to combine the solutions with the How Might We question into a hypothesis. A hypothesis is a framework that clearly defines the question, the audience, and the solution so you can eliminate the assumption. Hypotheses come in different flavours, from building or prototyping software to services or other actions that aren't software-related. It's important to break all of your hypotheses down into more specific, actionable hypotheses that can be tracked in your project, though you may decide to separate the non-software-related hypotheses and track them on their own.

The format of an actionable hypothesis follows these four parts:

We believe that [doing, building, or creating this] for [these people] will result in [this outcome], and we will know we're right when we see [this metric].
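If it helps to keep hypotheses consistent across a backlog, the four parts of the template can be captured in a small record like this (an illustrative sketch; the example values are mine, not from the article):

```typescript
// Hypothetical sketch: the four parts of an actionable hypothesis as one record.
interface ActionableHypothesis {
  action: string;   // what we will do, build, or create
  audience: string; // who it is for
  outcome: string;  // the result we expect
  metric: string;   // the measurable signal that tells us we were right
}

const example: ActionableHypothesis = {
  action: "adding a plan-comparison table to the pricing page",
  audience: "first-time visitors evaluating plans",
  outcome: "more visitors choosing a plan with confidence",
  metric: "a 10% lift in pricing-page to sign-up conversion",
};
```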

Next up is developing an experiment so you can test your hypothesis. The test will follow the scientific method, so it relies on collecting empirical, measurable evidence to obtain new knowledge. In other words, it's crucial to have a measurable outcome for the hypothesis so we can determine whether it has succeeded or failed.

There are different experiments you can run to validate your hypothesis, from qualitative methods like interviews, landing-page validation, and usability testing, to quantitative data from surveys or analytics. Define what the experiment will be and the outcomes that determine whether the hypothesis holds. A well-defined experiment should either validate or invalidate the hypothesis.
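To make the "measurable outcome" tangible, here is a minimal sketch, assuming a simple conversion-rate metric of my own choosing, of an experiment definition with an explicit threshold for calling the hypothesis valid:

```typescript
// Hypothetical sketch: an experiment with an explicit pass/fail threshold.
interface Experiment {
  hypothesis: string;
  method: "interview" | "landing page" | "usability test" | "survey" | "analytics";
  successThreshold: number; // e.g. minimum conversion rate to call the hypothesis valid
}

function evaluate(exp: Experiment, conversions: number, visitors: number): boolean {
  const rate = visitors === 0 ? 0 : conversions / visitors;
  return rate >= exp.successThreshold;
}

const landingPageTest: Experiment = {
  hypothesis: "A plan-comparison table increases sign-ups",
  method: "landing page",
  successThreshold: 0.1, // 10% of visitors sign up
};

console.log(evaluate(landingPageTest, 42, 500)); // false: 8.4% is below the 10% bar
```

Whatever the method, the point is the same: decide the success criterion before you run the experiment, not after you see the numbers.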

After defining the experiment, it's time to think about design. The trap people often fall into is over-designing the experiment and thinking about too many scenarios. At this point you don't need to have every detail thought through; focus on designing just what needs to be tested. It needs just enough design to be believable, but no more. Only once the hypothesis has been proven should the polish be applied.

Hypothesis-driven experimentation will give you insight into your visitors' behaviour. These insights will generate additional questions about your visitors and their experience, driving an iterative learning process.

If you've just learned that the result was positive, you may be excited to roll out the feature. That's great! But did you learn anything that would make the solution better? If the hypothesis failed, don't worry: you'll have gained insights from the experiment to apply to the next one. Through the rigour of each experiment you run, you'll learn something new about your product and your customers.

I don't expect any of this to be new to you. You probably know all of it already, but perhaps haven't systematically applied it to your product. I would love for this to become the norm and to help those without a solid discovery strategy.

What do you reckon?

Tags

  • Value-based design
  • Communication & Collaboration
