
How to get stuck into A/B testing today

by Melle Staelenberg | 31 March 2017
If your emails aren’t delivering the results you want, what can you do? Step up your A/B testing strategy.

Email marketing is simple, right? Create a great email, send it to your list of subscribers, and wait for those clicks and conversions to come rolling in.

If only it were that easy.

The reality is that email marketing success doesn’t happen without a little blood, sweat, and insight.

While your gut may tell you you’ve done everything right, you really shouldn’t rely on it. What you should rely on is good, hard data. Data you can obtain through A/B email testing.

While the basis of the technique is simple – produce two versions of an email, send them out to different audience groups, then look at which performs best – knowing what to test, how to set your sample, and what to do with your results is far less obvious.

Here’s what you should be doing.

Automate the process

Implementing email automation in business is a big step towards improving effectiveness and efficiency, so if you don’t already use automated email marketing software such as dotmailer, this is the recommended first step. Software like this helps you better strategise, implement, and interpret your efforts to drive up results.

Decide on your test variables

For an email to perform well, there are multiple elements you need to manage – from copy and design to send time. All need to work on their own, and together, for your campaign to succeed. A/B testing can identify the strong and weak spots of your campaigns, but to get data you can act on, you need clarity on what you’re testing and why.

Before you decide what to test, review your historical stats – open rates (OR), click-through rates (CTR), and conversions. Identify where you are performing well and where there is scope for improvement. Understanding this information will help you identify which variable to test first.

Consider:

  • Send variation A to 50% of your contacts and variation B to the other 50%. See which works, then use your findings to inform subsequent campaigns.
  • Send variation A and variation B to smaller test groups, say 15% and 15%. See which works best, then send the winning version to the remaining 70%.
 

Base your choice on the size of your total subscriber list. If your list has fewer than 1000 contacts, 50/50 is the best way to go. Why? Because a 15% sample of, say, 250 contacts is only 37.5 people – not enough to produce worthwhile insights. With a larger list you can go for the second option because the numbers will still be meaningful. For example, if you have 7000 contacts, 15% is 1050.

For both options, ensure your sample is selected randomly. Your email system should have an A/B testing facility to help you do this.
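
If you ever need to build the split yourself – or just want to sanity-check what your platform is doing – the logic is simple. Below is a minimal Python sketch; the function name and example contact list are made up for illustration:

```python
import random

def split_for_test(contacts, test_fraction=0.15, seed=None):
    """Randomly assign contacts to group A, group B, and a holdout.

    test_fraction=0.5 gives the 50/50 option (with an empty holdout);
    test_fraction=0.15 gives the 15/15/70 option.
    """
    rng = random.Random(seed)
    shuffled = list(contacts)  # copy, so the caller's list is untouched
    rng.shuffle(shuffled)
    n = int(len(shuffled) * test_fraction)
    return shuffled[:n], shuffled[n:2 * n], shuffled[2 * n:]

# A 7000-contact list with a 15% sample gives two test groups of 1050.
contacts = [f"user{i}@example.com" for i in range(7000)]
group_a, group_b, holdout = split_for_test(contacts, seed=1)
print(len(group_a), len(group_b), len(holdout))  # 1050 1050 4900
```

The seed is only there to make the example repeatable; in practice you want a fresh random shuffle for every campaign.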


Set your test time

Another factor you need to get right is your test run time. How long should you wait before you call the results? Ideally, you want to run the test for as long as possible (preferably a week), because early results can be misleading – but this isn’t always practical.

If you’re testing smaller groups and waiting on the results to send out a time-critical winning email, an earlier cut-off might be needed. Emails generally have a critical period during which the majority of people who will ever open them do so. Look at your existing stats to see when this is. If, historically, more than 50% of your contacts open within 48 hours, it would make sense to stop the test at that point.
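
Finding that critical period only takes the open timestamps from a past campaign. Here’s a minimal sketch, assuming you can export the delay between send time and each open (the figures below are invented):

```python
from statistics import median

# Invented example data: hours between sending a past campaign
# and each recorded open, taken from a reporting export.
open_delays_hours = [1, 2, 2, 3, 5, 8, 12, 20, 30, 47, 60, 96]

# The median delay is the point by which 50% of all eventual opens
# had already happened - a reasonable earliest cut-off for a test.
cutoff = median(open_delays_hours)
print(f"Half of all opens arrived within {cutoff} hours of sending")
```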

Analyse your results

Once you’ve run your test and have your stats, it’s time to analyse the results and pull out those all-important insights. There are three main metrics to consider:

  • Opens by delivered (OR).
  • Click-throughs by delivered (CTR).
  • Clicks by opens (CTO).
 

Your focus will depend on your chosen variable for that specific test. If you changed an inbox element – such as the subject line or sender name – pay attention to your open rates. If you changed an element in the email itself, it’s best to look at CTR or, better still, CTO, as this combines both open and click measurements.
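
To make those definitions concrete, here’s a small sketch that computes all three rates from raw counts (the campaign figures are invented):

```python
def email_metrics(delivered, opens, clicks):
    """Return OR, CTR, and CTO from raw campaign counts."""
    return (opens / delivered,   # OR:  opens by delivered
            clicks / delivered,  # CTR: clicks by delivered
            clicks / opens)      # CTO: clicks by opens

# Invented results for two 1050-contact test groups:
for name, delivered, opens, clicks in [("A", 1050, 262, 41),
                                       ("B", 1050, 298, 67)]:
    open_rate, ctr, cto = email_metrics(delivered, opens, clicks)
    print(f"{name}: OR {open_rate:.1%}  CTR {ctr:.1%}  CTO {cto:.1%}")
```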

Be sure the difference is statistically significant – for example, a 20% difference, not a 2% difference – before making a call. Automated software will do this for you.
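
If you’d rather verify significance yourself than take the software’s word for it, a standard two-proportion z-test does the job. A self-contained sketch, reusing the invented open counts from the example above:

```python
from math import erf, sqrt

def two_proportion_pvalue(hits_a, n_a, hits_b, n_b):
    """Two-sided z-test for a difference between two proportions."""
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (hits_a / n_a - hits_b / n_b) / se
    # Convert |z| to a two-sided p-value via the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Opens: A 262/1050 (25.0%) vs B 298/1050 (28.4%).
p = two_proportion_pvalue(262, 1050, 298, 1050)
print(f"p-value: {p:.3f}")  # ~0.076, above the usual 0.05 threshold
```

Which illustrates the point nicely: a 3.4-point gap in open rate looks convincing on a dashboard, yet on samples of 1050 it could still be noise.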

The next steps

Never rely on just one set of data. Just because one approach worked over another doesn’t mean it will work next time. Make sure you test each variable multiple times across different campaigns over the span of a month, six months, or even a year.

Also, don’t stop at your opens and clicks. Start paying attention to what happens when your subscribers arrive on your landing page. This is where the all-important conversions and sales happen. So if people are enticed by your email but aren’t converting, what’s going wrong? The further down the buying funnel you can test and understand, the better.

Interested in finding out more about our email services? Get a quote today or call us on 1300 725 628.

About the author
Melle Staelenberg
Business Manager

Melle has been with Salmat since 2010. In his current role, Melle is responsible for maximising business value from Salmat’s core products, including email and mobile. His product expertise, combined with a strong focus on research and a healthy dose of creativity, helps drive Salmat’s digital roadmap. Melle holds a Master’s degree in Business Economics and a BA in New Media & Digital Culture, and is passionate about all things digital. Being Dutch, he likes to cycle – he’s also a big Ajax fan (the football team, not the cleaning product)!
