A/B testing during peaks — yay or nay?

Elise Maile
Sep 10, 2019



Every September and October, e-commerce websites gird their loins for the annual flurry of online sales. The end of November sees Black Friday/Cyber Monday; Christmas draws closer, quickly followed by the Boxing Day sales.

Peak trading doesn’t seem to slow for many companies until mid-January.

Because this period is so vital to revenue, many businesses operate a code freeze: the website is locked down to prevent accidental bugs from affecting revenue.

If you work in CRO, you’ll have experienced the surge in requests during this time: “Something is broken, please can you run a test at 100% to fix it?”

I call these “fixes” and label them as such (on the few occasions I agree to do them).

But, with profit as the primary focus and concerns around performance, what role should actual A/B Testing play during these times?

Risk vs Reward

A/B testing is a low-risk method of validating ideas, but when a large portion of your revenue is generated during a short window, that low risk is still a risk. And there will be people who argue that any risk is too great.

Yet, the rewards could also be great.

Imagine running a test and seeing a big increase in conversion.

Imagine increasing traffic to that variant throughout peak. How much more revenue could you make?

If you have followed a prioritisation framework, you’ll already have hypotheses you are more confident about. Couple that with calculating the required sample size, and you can show the test to the minimum required number of customers (e.g. 20% of traffic) and mitigate the risk as best you can.
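For readers who want to see what “calculating the required sample size” looks like in practice, here is a rough sketch using the standard two-proportion z-test formula. All the numbers (baseline rate, detectable effect, alpha, power) are illustrative assumptions, not figures from this article; most testing tools have a built-in calculator that does the same job.

```python
import math
from statistics import NormalDist

def sample_size_per_variant(baseline, mde, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant for a two-proportion z-test.

    baseline: current conversion rate (e.g. 0.05 for 5%)
    mde: minimum detectable effect, absolute (e.g. 0.01 for +1 percentage point)
    """
    p1, p2 = baseline, baseline + mde
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    pooled = (p1 + p2) / 2
    n = (z_alpha * math.sqrt(2 * pooled * (1 - pooled))
         + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / (p2 - p1) ** 2
    return math.ceil(n)

# e.g. detecting a lift from 5% to 6% conversion needs roughly 8,000+
# visitors in each variant at 95% significance and 80% power
n = sample_size_per_variant(0.05, 0.01)
```

Knowing this number up front is what lets you cap the test at a small traffic slice rather than exposing everyone.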

Traffic & Behaviour

More traffic during peaks usually equals faster results, and who doesn’t want to reach statistical significance in half the time?

But before hitting “publish” you should consider how customer behaviour differs from the rest of the year: if there is a time limit (a delivery cut-off date or a one-day-only sale), customers are unlikely to browse in their usual manner.

Last year (2018), Black Friday online sales grew again, but post-peak, retailers were hit with £1.6bn worth of returns. Any test results you gather during peaks should be taken with a pinch of salt, even if they reach statistical significance. You don’t know what drove your customers to convert: was it the item itself, the time limit, the price, or the UI you were testing? And if your returns also peak, then all those conversions meant very little.

Ideally, I’d recommend (re-)running the test outside of peak, to validate the results when behaviour is more stable. You may discover that a particular experience only brings success during peak periods, in which case you can plan to re-run it during the next rush in trading.

There is one caveat: if you experience multiple peaks throughout the year (something the travel industry already deals with, and which is becoming more common in fashion too), then testing during peaks will benefit your company year round, since customer behaviour is unlikely to vary significantly between them.


Don’t use testing tools to simply “fix” the website during a code freeze.

Mitigate risk by prioritising high-confidence tests and adjusting sample size.

Essentially, test all year round. Peaks included.



Elise Maile

UX, Conversion Rate Optimisation and Personalisation specialist.