Tips and Tricks for Coding CRO Experiments

Elise Maile
7 min read · Aug 3, 2020

This article was first published at www.conversion-uplift.co.uk

A guide for client-side front-end developers on coding experiments.

I’ve been a self-taught developer since the age of 15, and I’ve spent 10 years of my career working as a freelance web designer and developer. I initially fell into Conversion Rate Optimisation and A/B testing by developing experiments, and made JavaScript my bitch.

But when I started coding experiments, I found them to be an unusual mix of the familiar and the new. I want to share some of the quirks of coding for A/B tests.

Disposable code

The first thing I realised about coding experiments was that whatever code I wrote was likely to be thrown away after the test ended. There are a couple of reasons for this:

  • The experiment can (and often will) fail. A failed test is still a learning experience, but as a developer, it can be disheartening to realise that the complex trigger you spent a day building is irrelevant.
  • If you’re building a client-side experiment in a third-party tool, the code you’re writing manipulates the DOM, so it’s unlikely to be useful to the engineers. Manipulation means using JavaScript to modify the page after it loads, which is just not how websites are built in the source. I’ve had my code called ‘hacky’, which hurt, but was technically true: I’m hacking the DOM to make a change.

It can be difficult to accept that all that time spent struggling to target a certain element, or to get the experiment to trigger at the right moment, was effort you’ll throw away. But the whole point of experimentation is learning: even the failures teach us something, and your disposable code is vital to that process.

Your code is disposable. (Photo by Steve Johnson on Unsplash)

Limitations

There are always going to be limitations when building an experiment, but there are plenty of workarounds to get the required results. Firstly, don’t use the WYSIWYG (what you see is what you get) editor provided by third-party tools. It’s usually rubbish, inserting untidy and unreliable jQuery. Almost all experimentation tools have a developer mode which allows you to insert CSS and JavaScript written by you, the expert.

Secondly, if you’re running client-side tests (as opposed to server-side tests), you’ll be limited to modifying what is already present in the DOM once the page loads. You’ll have to learn to work within this limitation, whether that’s creating new blocks of HTML, editing copy, or adjusting layouts.
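For example, a common pattern is to wait until the element you need is present, then modify it. Here’s a minimal sketch; the .product-title selector and the replacement copy are hypothetical:

// Poll until an element exists in the DOM, then hand it to a callback.
function waitForElement(selector, callback, timeoutMs) {
  var start = Date.now();
  var interval = setInterval(function () {
    var el = document.querySelector(selector);
    if (el) {
      clearInterval(interval);
      callback(el);
    } else if (Date.now() - start > (timeoutMs || 5000)) {
      clearInterval(interval); // give up quietly if the element never appears
    }
  }, 50);
}

waitForElement('.product-title', function (el) {
  el.textContent = 'New variation copy'; // edit copy already present in the DOM
});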

However, if a change needs to be generated dynamically, that content has to be exposed via a data layer. Say, for example, you want to move a product’s pricing or availability to an earlier stage in the funnel: without a data layer exposing that information on the page where it’s required, you won’t be able to build the experience. Work with your engineers to ensure a data layer is functional for testing. Ideally this should be done early on, when your experimentation tool is installed, so that dynamic content can be pulled in throughout the site.
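Once that’s in place, pulling the content into an experiment is the easy part. A minimal sketch, assuming the engineers expose a hypothetical window.siteData object (the .category-card selector is also made up):

// Surface a product price earlier in the funnel, if the data layer has it.
var product = window.siteData && window.siteData.product;

if (product && product.price) {
  var badge = document.createElement('span');
  badge.textContent = 'From ' + product.price;
  var card = document.querySelector('.category-card'); // hypothetical target element
  if (card) {
    card.appendChild(badge);
  }
}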

Targeting and triggering

URL targeting in Google Optimize

Making sure the experiment fires at the right time is likely to be the hardest part of a test build. There are multiple moving parts to consider, but the most common approach is to target the URL. This can be precise, using “equals” or “exact”, which won’t allow parameters or trailing slashes. Alternatively, you can use “contains” or “substring” targeting, which matches a URL if a given set of characters is present anywhere within it. This is useful for targeting pages based on parameters (e.g. referral traffic from social or email). Some tools give you the option to use regex to target URLs as well.
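If you ever need to replicate these modes in custom code, the three boil down to something like this sketch (the example.com URLs and the parameter are placeholders):

var url = window.location.href;

var exactMatch = url === 'https://example.com/checkout/';     // "equals" / "exact"
var substringMatch = url.indexOf('utm_medium=email') !== -1;  // "contains" / "substring"
var regexMatch = /\/product\/\d+/.test(url);                  // regex targeting

if (substringMatch) {
  console.log('Activate the experiment for email referral traffic');
}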

You’re not restricted to URL targeting, though. Sometimes you may need to use information available in the data layer to ensure your experience fires for specific products only (relevant if the URL structure is unrelated to product type). Or you may need to search for a unique element on the page, check whether a cookie is or is not present (based on previous exposure to an experiment, for example) and, of course, use audience segments (new vs returning, referral, device type etc.).
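A minimal sketch of that kind of check, with a hypothetical cookie name and element selector:

// Activate only if a unique element exists and the user hasn't seen the test.
var alreadyExposed = document.cookie.indexOf('exp_homepage_v2=1') !== -1;
var uniqueElement = document.querySelector('[data-product-type="subscription"]');

if (uniqueElement && !alreadyExposed) {
  console.log('Conditions met - activate the experiment');
}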

Tests can also be triggered based on a user’s action, such as clicking on an element or having a certain threshold of items in their basket. Again, most testing tools offer a way to ensure these triggers are met, sometimes via audience segmentation and behaviour selectors, sometimes via a custom JavaScript trigger section. Check your documentation for the solution offered.
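Where a custom JavaScript trigger is available, the logic often looks something like this sketch (window.basket is a hypothetical data layer value):

// Activate the experiment once the basket holds three or more items.
var basketCheck = setInterval(function () {
  var basket = window.basket || { items: [] };
  if (basket.items.length >= 3) {
    clearInterval(basketCheck);
    console.log('Threshold met - activate the experiment');
  }
}, 500);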

QA

You cannot QA your experiment enough. Test it throughout the build. Check it on the devices it’s designed to fire on, run through the whole user journey forward and backward, and try jumping around to unexpected pages; not all users follow the funnel laid out for them. And make sure a peer QAs it as well.

Use Ghostery, or an alternative third-party tracker-blocking tool. What I love about it is that if something looks broken on a website, but I’m not sure whether an A/B test is causing the problem, I can use Ghostery to simply stop the testing tool from loading. If the bug still appears, it’s not the experiment. I recommend this browser add-on to all product owners, engineers and CRO specialists, as it prevents A/B tests getting the blame for every UI problem that appears on a website when, nine times out of ten, the experiment isn’t the problem.

But a word of warning: sometimes there will be what’s called an ‘edge case’ bug, where the experiment doesn’t fire when it should, or perhaps looks a little crooked going from one screen size to another. It’s vital to weigh the effort of fixing that bug against the chance that a user might actually see it.

This is where the analytics team are your friends. Get them to check the number of users on that screen size, or how many go from page A to page H and back again. If the impact on users will be very low but the effort to fix the bug is high, it’s often best to simply run the test. Remember, it’s temporary: the test will be turned off and your code disposed of within a few weeks, and the learnings will be greater than the risk of an edge case bug.

Remember to QA your experiments. (Photo by Cookie the Pom on Unsplash)

Goals

Goals and KPIs should have been established early, within the experiment hypothesis. Do not forget to add them, and confirm that the tracking is working, either via the testing tool itself or via the analytics solution. Adding the goals is part of building a test, so it should be part of your checklist; but if you think something should be tracked that hasn’t been included in the spec, ask, because once a test is live, data tracking cannot be retrofitted.

Goals are usually a form of event tracking, such as clicks on elements or visits to certain pages. Depending on the tool being used, you may be able to use the WYSIWYG to select the element to track, you may need to code this in directly (as in the snippet below), or you may even use a tool such as Google Tag Manager to create the event tag and trigger. Check your experimentation tool’s documentation for instructions.

var element = document.querySelector('.cta-button'); // hypothetical selector for the element to track
element.addEventListener('click', fnClickTrack);

function fnClickTrack() {
  console.log('Add the tracking code for your tool here');
}

Goals can also be revenue-related, such as average order value, or engagement-related, such as bounce rate or session time.

Miscellaneous

If your company isn’t set up for server-side testing, you can still use experimentation tools to switch features that have been coded in the main code base on and off. This is great for testing new features, for larger changes such as element redesigns, or when the experimentation tool itself is clunky for a developer to use (Google Optimize, I’m looking at you). Expose both the control and the variation code to the DOM, then use CSS or JavaScript to show and hide them based on whether the user is bucketed into the variant or not.

if (window.lp__test) {
  console.log('Experiment loaded - activate');
  window.lp__test.init();
} else {
  console.log('Experiment not loaded yet - set flag');
  window.lp__experiment_flag_test = true;
}
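And a minimal sketch of the show-and-hide half of that approach, assuming the page source ships both versions under hypothetical .feature-control and .feature-variation class names:

// Toggle the pre-built markup depending on the user's bucket.
var inVariant = Boolean(window.lp__test);

document.querySelectorAll('.feature-control').forEach(function (el) {
  el.style.display = inVariant ? 'none' : '';
});
document.querySelectorAll('.feature-variation').forEach(function (el) {
  el.style.display = inVariant ? '' : 'none';
});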

Saying all that, using an experimentation tool to test a full page redesign is not the best use of A/B testing, because it’s difficult to ascertain which element caused the impact. You should encourage CRO specialists to use qualitative measures, such as user testing, to validate design decisions.

Create a code library

Finally, if you spend any time developing experiments, you’ll quickly realise how much copying and pasting you do. I highly recommend keeping a library of commonly used code snippets for quick access: for example, click tracking, setting and retrieving cookies, and checking URL parameters. These are snippets I use often enough that I don’t know them off by heart, but find easier to copy and paste from my code library. You can use GitHub to create a code snippet library, or any note-taking tool that suits you.
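To give a flavour, here are the kinds of helpers worth keeping; all names are illustrative:

// Set a cookie that expires after a given number of days.
function setCookie(name, value, days) {
  var expires = new Date(Date.now() + days * 864e5).toUTCString();
  document.cookie = name + '=' + encodeURIComponent(value) + '; expires=' + expires + '; path=/';
}

// Retrieve a cookie value, or null if it is not set.
function getCookie(name) {
  var match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
  return match ? decodeURIComponent(match[1]) : null;
}

// Check whether the current URL contains a given query parameter.
function hasUrlParam(name) {
  return new URLSearchParams(window.location.search).has(name);
}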

Conclusion

Whether you’re new to coding experiments or an experienced optimisation developer, I hope these tips were useful. There is a lot to consider when coding CRO experiments, and it can be frustrating when someone asks you to “just” make a change without understanding the complexities involved in manipulating the DOM and QA-ing the work. Remember to communicate clearly with your team, especially when blockers occur in the experiment coding process, and use these tips to know when to push back and ask for clarification about the work you’ve been asked to do.


Elise Maile

UX, Conversion Rate Optimisation and Personalisation specialist.