The importance of Hypothesis and Prioritisation when A/B Testing

Elise Maile
5 min read · Mar 26, 2018

So, you’ve analysed your funnel, run workshops, and scoped out your competitors, and now you have a list of ideas you want to A/B test. But how do you work out which to run first? Is it viable for the business? Can it even be built? Do any other teams need to be involved (Marketing? UX designers? Engineers?)? This is where making sure every idea has a test hypothesis makes it much easier to prioritise a testing roadmap for your business.

Firstly, what is a test hypothesis? Much like in a science experiment, a hypothesis is a statement that lays out exactly what you want to test and why, ideally based on existing evidence or data. Every test must have a hypothesis so that you understand the motivation behind it and can establish how success will be measured.

There are many hypothesis frameworks available; here is one of the simplest:

If [we run this test], then [we expect to see this], because [of this].

Conversion.com’s hypothesis framework is more in-depth and includes the existing data or evidence, the KPIs to measure, and the estimated test duration:

We know that [there is this problem because of this observation or data].

We believe that [running this test] for [this particular audience] will result in [this expected result/KPI].

We’ll know by testing [this test idea] on [this web page/funnel] and observing [this key KPI] for [length of time].
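
Capturing hypotheses in a consistent structure also makes them easier to compare later. Here is a minimal sketch in Python whose fields mirror the framework above; the class, field names and example values are illustrative assumptions, not part of any particular tool:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """One A/B test hypothesis, structured like the framework above."""
    observation: str      # We know that... (existing data or evidence)
    change: str           # We believe that running this test...
    audience: str         # ...for this particular audience...
    expected_result: str  # ...will result in this outcome
    primary_kpi: str      # the key KPI we will observe
    duration_weeks: int   # estimated test duration

# Illustrative example only, not real data
checkout_badges = Hypothesis(
    observation="analytics show a large drop-off at the payment step",
    change="adding trust badges to the payment page",
    audience="all mobile visitors",
    expected_result="fewer drop-offs at payment",
    primary_kpi="checkout completion rate",
    duration_weeks=4,
)
```

Writing every idea in the same shape like this pays off again at the prioritisation stage, because each one carries the same information.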

There is no right or wrong way to write a test hypothesis, so long as you explain the test idea, why you want to test it, and what you expect to see. Hypotheses ensure that every test has a solid reason for being run, and is not simply a subjective idea with no way of measuring success.

KPIs

Your hypothesis will help you decide what to measure to judge whether your test is successful; these measures are called KPIs (Key Performance Indicators). KPIs are often referred to as test goals or objectives, and the ultimate aim is usually to increase website conversion, i.e. the number of sales or sales revenue. However, not all tests will want or need to measure this goal. Other KPIs could be:

  • funnel progression,
  • increase (or decrease) in contact form submissions,
  • or user engagement via clicks on elements.

Perhaps you want to increase the length of time users spend watching a video on your website, or get them to engage more with online chatbots. Collecting e-mail addresses or creating new user accounts are also important to the success of a business and can be the main focus of an A/B test instead of conversion. If you can measure it, then it can be a KPI (or goal).

It is important to use your hypothesis to understand what improvement you wish to see; that way you can ensure that the correct tracking for the KPI is in place before the test goes live. This tracking cannot usually be added once the test is running or finished.
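
As a concrete illustration, that check can be as simple as listing the KPIs a hypothesis names and confirming a tracking event exists for each one before launch. This is a sketch only; the event names and the mapping below are made up for illustration and do not refer to any specific analytics tool:

```python
# Hypothetical mapping of KPIs to the analytics events that measure them.
kpi_events = {
    "checkout completion rate": "purchase_confirmed",
    "contact form submissions": "contact_form_submit",
    "video engagement": "video_watch_time",
}

# Events the analytics setup already collects (illustrative).
implemented_events = {"purchase_confirmed", "contact_form_submit"}

def missing_tracking(kpis: list[str]) -> list[str]:
    """Return the KPIs that have no tracking event in place yet."""
    return [
        kpi for kpi in kpis
        if kpi_events.get(kpi) not in implemented_events
    ]

# Run this check before launch: tracking usually cannot be added afterwards.
print(missing_tracking(["checkout completion rate", "video engagement"]))
# -> ['video engagement']
```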

Prioritising

Once each test in your list has a hypothesis, you can begin prioritising them. The purpose of prioritising is to ensure that you are running the most relevant tests for the business and getting the best return on both your testing tool and your people’s time. There is a lot to consider when prioritising tests, and it can be helpful to prioritise first, then build a roadmap.

ConversionXL has created an A/B testing prioritisation framework called ‘PXL’ which you might find helpful when deciding which test to run first. It uses a point-based system to rate each test idea depending on its design, its location on the web page and its ease of build. They have created a free spreadsheet with examples to help you start ranking your test ideas.

The PXL framework is a great starting point for prioritising, but you should also consider the estimated impact on business value. A test with a high estimated business value should be prioritised above other tests in order to align CRO with the rest of the business; this also helps keep executives on board with optimisation.
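
To show what a point-based approach might look like in practice, here is a simplified scorer loosely inspired by PXL, with an extra term for estimated business value as discussed above. The questions, weights and example ideas are illustrative assumptions, not the official framework:

```python
from dataclasses import dataclass

@dataclass
class TestIdea:
    name: str
    # Binary, PXL-style questions (illustrative, not the official list)
    noticeable_change: bool   # is the change obvious to users?
    high_traffic_page: bool   # does it run on a high-traffic page?
    backed_by_data: bool      # is it supported by analytics or research?
    ease_of_build: int        # 0 = weeks of work, 2 = trivial
    business_value: int       # 0-3, estimated impact on business value

    def score(self) -> int:
        return (
            int(self.noticeable_change)
            + int(self.high_traffic_page)
            + int(self.backed_by_data)
            + self.ease_of_build
            + self.business_value
        )

ideas = [
    TestIdea("Trust badges on payment page", True, True, True, 2, 2),
    TestIdea("Reword footer link", False, False, False, 2, 0),
]

# Highest score first becomes the top of the testing roadmap.
for idea in sorted(ideas, key=lambda i: i.score(), reverse=True):
    print(idea.name, idea.score())
```

In a real programme you would keep these scores in the PXL spreadsheet itself; the point is simply that every idea gets rated on the same scale.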

Roadmap

Once tests have been prioritised, creating a roadmap becomes easier. But don’t be fooled: testing roadmaps are complex; there is a lot to remember, and they can be subject to change as business requirements change.

Some important things to remember when building your roadmap:

  • T-shirt sizing — how complex is the test to build? Small, medium or large? Will it need weeks of building and QA before going live?
  • Do you need to collect resources from other teams (designs, copy, etc.)? They will need time to create and gather these assets for you.
  • Are there other tests running on the same page which could influence the results? It is possible to run multiple tests at the same time, but test clash should be avoided.
  • Will the test coincide with peak trading times? Does the business have a code freeze during peak times, or are the potential gains of testing too alluring?
  • Are there any website releases due which could affect the test? Test code can conflict with engineering code. Make sure your developers are aware of what tests are running and whether any conflicts may occur.
  • Is it time-sensitive, i.e. does it run alongside marketing campaigns? Again, working closely with other teams ensures that everyone involved knows what is expected of them and when.

Sometimes you may find that a highly prioritised test requires a lot of time to implement, in which case it might be prudent to run smaller, lower-priority tests whilst the bigger test is being built. This way you are still optimising your website without too many delays.
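
A sketch of that scheduling idea, assuming priorities and T-shirt sizes have already been assigned as above (the test names, scores and sizes here are illustrative):

```python
# Illustrative roadmap entries: (name, priority score, T-shirt size)
backlog = [
    ("Redesigned checkout flow", 9, "L"),
    ("New CTA copy", 6, "S"),
    ("Trust badges", 7, "M"),
]

# Walk the backlog in priority order, but while a large test is still
# being built, slot in the highest-priority small and medium tests.
backlog.sort(key=lambda t: t[1], reverse=True)
in_build = [name for name, _, size in backlog if size == "L"]
run_now = [name for name, _, size in backlog if size in ("S", "M")]

print("Building:", in_build)          # -> ['Redesigned checkout flow']
print("Running meanwhile:", run_now)  # -> ['Trust badges', 'New CTA copy']
```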

In the case of website releases, you might have to schedule time into your roadmap to pause tests. This will allow engineers and product teams to track the impact of their changes. During this ‘down’ time, you can be building new tests, running workshops and creating the next roadmap.

A typical A/B testing roadmap runs for three months. This allows for changing business requirements over the course of a year, while also ensuring that both large and small tests run. It’s important to try not to change a roadmap once it has been set, otherwise the mid-priority tests may never run.

Conclusion

It can be tempting to jump straight into A/B testing, but these steps will help you get the most out of your testing tool, help the business make the most money, and keep your sanity when requests keep coming in.
