Email is one of the most powerful tools you have when communicating with prospective customers. The problem is, it’s very difficult to know how the design of your email might impact customer behaviour. Will a larger “Buy Now” button increase conversions or should you try it in a different colour? How does the subject of the email influence open rates? A well-designed email campaign can have a significant impact on your bottom line, so how do you know that you’re using the best possible design? You test!
An A/B test is much like a scientific experiment: it is randomised, and it uses a control and a treatment to collect statistics about customer behaviour. Let’s unpack that statement before we go any further. The control is one version of an email, using your company’s normal design; nothing is changed from what you’d send your contacts if this wasn’t a test. We’ll call this version ‘A’. The treatment is identical to the control except for one element that you change – the subject, the font, or the call-to-action, for example. We’ll go into more detail on the kinds of things you can change later on. We’ll call this version ‘B’.

When you run the test, both emails are sent to your list, but the list is randomly split so that half receives version A and the other half receives version B. For example, I might want to send an email to 2,000 subscribers to try to generate sales through my e-commerce website. I design an email (the control) and then create another email with one element changed – the call-to-action (CTA). Each version carries its own promotional code, so any sale can be traced back to the email that generated it.
I can then monitor which email has the higher success rate (in this case, the one that generates more sales) by tracking which of the two promotional codes is redeemed. The more effective CTA can then be used in all future sales emails.
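The random split described above can be sketched in a few lines of Python. This is an illustrative sketch only – the subscriber addresses and list size are made up, and a real campaign tool (such as Everlytic) handles the split for you:

```python
import random

def ab_split(subscribers, seed=None):
    """Randomly split a subscriber list into two equal-sized groups (A and B)."""
    rng = random.Random(seed)
    shuffled = subscribers[:]   # copy so the original list is left untouched
    rng.shuffle(shuffled)       # randomise the order
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Example: 2,000 subscribers split into two groups of 1,000.
subscribers = [f"user{i}@example.com" for i in range(2000)]
group_a, group_b = ab_split(subscribers, seed=42)
```

Shuffling before slicing is what makes the split random rather than, say, alphabetical – important, because a non-random split (e.g. by sign-up date) could bias the result.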
The most important part of an A/B test is that it has a defined, measurable outcome. You need to know exactly what you are testing. Here are a few ideas for things you can test:
Testing can only produce valuable results when you test repeatedly. A once-off test might provide some interesting results, but you need to run multiple tests to make sure that you are getting real insight into customer behaviour. Each test should suggest new avenues to explore, and stimulate subsequent rounds of testing. The following steps outline a testing protocol that should help you to get the most out of your A/B testing:
An A/B test campaign can only be successful if you have clearly defined a measurable outcome before you start testing. Some examples of measurable outcomes are:
As we mentioned in the previous section, it is important that you set an appropriate volume for your test. If you don’t send the test to enough people you may not achieve statistical confidence. Statistical confidence determines whether or not the results of your test are significant: if your sample is too small, you might give too much weight to actions that don’t reflect your whole audience. There are online calculators to help you work out whether or not your results are statistically significant.

You may also find that the results of your A/B split tests are counterintuitive. If you’re testing for the variation that makes people click on your CTA, the winner could turn out to be a bright yellow button on a blue page. It may not be pretty, but you’re not testing for aesthetics – you’re testing for conversions. Don’t dismiss the results just because they contradict your expectations.
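The significance check that those online calculators perform is typically a two-proportion z-test. A minimal sketch, assuming the common 95% confidence threshold and made-up conversion counts:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)              # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical result: version A converts 30/1000, version B converts 50/1000.
z = two_proportion_z(30, 1000, 50, 1000)
significant = abs(z) > 1.96   # |z| above ~1.96 means significance at ~95% confidence
```

With only 100 recipients per group instead of 1,000, the same conversion rates would not clear the threshold – which is exactly why sending the test to too few people is a problem.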
Running an A/B split test is a complex activity; these tips should help you keep your ducks in a row:
Make sure that each person is always offered the same promotion. If you offer me 15% off today and 10% off tomorrow I may get upset, especially if I was expecting 15% off.
Everlytic has designed a dedicated A/B Split Testing tool to help you to create beautiful, effective testing campaigns. First things first, create a list of contacts who will receive the test emails. Give this list an obvious name so that it is easy to find in the list selection step. Once you’ve done that, click on the campaign icon (the paper aeroplane) in the left navigation, and click Create Campaign. This will take you to the campaign type selection screen. Click Select on the A/B Split Testing card.
In the next step, fill in the following properties for your campaign:
All of these fields are compulsory. You can insert personalisation tags in any of them: just click the Personalise button that appears when you hover your mouse over a field and choose the personalisation tag you want to include from the pop-up. Click Continue to move to the next step.
Search for the list you set up for this test and check its checkbox. You can segment the list with new or existing filters. When you are ready to move on, click Continue.
Here you can create the individual emails to use in your campaign. Click on Compose under the card for each email in the split test and follow our normal email composition steps.
Once you’ve composed your two emails, click Continue.
The final page in the split test campaign creation is the Campaign Confirmation page. Here you can review all the settings you’ve chosen for this campaign to make sure you’ve got everything just right. Once you are happy, click Send to get the campaign on the road.