A Beginner’s Guide to A/B Testing: Email Campaigns That Convert

Email campaigns and newsletters can be a great way to get repeat business, as well as new customers. You’re already working with a somewhat pre-qualified base: these people have said they want to receive information from you, and many of them have likely already done business with you. And we all know it’s easier and cheaper to retain customers than to acquire new ones.

This is why it’s vital to run A/B tests when trying out new techniques or formats for your email campaigns. Improving conversion rates here can make a bigger difference in your bottom line than many other marketing efforts, especially those of similar cost.

Here’s the third installment in our A Beginner’s Guide to A/B Testing series. Be sure to check out our previous posts, A Beginner’s Guide to A/B Testing: An Introduction and A Beginner’s Guide to A/B Testing: Exceptional Web Copy, and stay tuned for our next installments on testing pay-per-click and SEO landing pages.

Decide What You’ll Test

The first step in setting up an effective A/B test is to decide what you’ll test. While you may want to test more than one thing, it’s important to only test one thing at a time to get accurate results. Things you might consider testing include:

  • Call to action (Example: “Buy Now!” vs. “See Plans & Pricing”)
  • Subject line (Example: “Product XYZ on Sale” vs. “Discounts on Product XYZ”)
  • Testimonials to include (or whether to include them at all)
  • The layout of the message (Example: single column vs. two column, or different placement for different elements)
  • Personalization (Example: “Mr. Smith” vs. “Joe”)
  • Body text
  • Headline
  • Closing text
  • Images
  • The specific offer (Example: “Save 20%” vs. “Get free shipping”)

Each of those things is likely to have an effect on a different part of the conversion process. For example, your call to action is obviously going to have a direct effect on how many people buy your product or click through to your landing page. Your subject line, on the other hand, will directly affect how many people open your email in the first place.

Think about this when you’re deciding which things to test first. If not many people are opening your emails, then you’ll likely want to start with your subject line. In general, test the most important parts first: your headline and call to action will likely have a greater impact on conversions than the images you use or your body text. Test those first, and then work through the others from greatest to least importance.

Test Your Whole List, Or Just Part?

In the vast majority of cases, you’ll want to test your entire list. You want to get an accurate picture of how your email opt-in list responds to your new email campaign, and the best way to do that is to test all of them. There are a few instances, though, where you might not want to test your entire list:

  • If you have a very large list, and the service you’re using to A/B test charges by the email address. In this case, test the largest sample you can afford, and make sure that the names you select are chosen randomly for accurate results.
  • If you’re trying something really extreme, you might want to limit how many people potentially see it, just in case it goes over terribly. In this case, it’s still a good idea to make sure that at least a few hundred people are seeing each version you’re testing. If you can test a few thousand people, even better.
  • If you’re running a limited-time offer, and want to get as many conversions as possible, you might want to run a small (a few hundred recipients) test batch first, and then send out the winner to your entire list.

The larger your test sample, the more accurate your results will be. Make sure that the split is done randomly, too. Hand-picking recipients (or even using two lists from different sources) is a great way to skew your results. The goal here is to gather empirical data to figure out which version of your A/B test material really works best.
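The random split described above is easy to get right in a few lines of code. Here’s a minimal sketch (the addresses and the 50/50 split are hypothetical; your email tool may do this for you):

```python
import random

def split_list(addresses, seed=42):
    """Randomly shuffle an email list and split it into two equal-sized groups."""
    shuffled = list(addresses)            # copy so the original list is untouched
    random.Random(seed).shuffle(shuffled) # fixed seed makes the split reproducible
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

# Hypothetical list of 1,000 subscribers split into two test groups.
group_a, group_b = split_list([f"user{i}@example.com" for i in range(1000)])
```

Shuffling before splitting is what protects you from accidentally hand-picking: any ordering in the original list (signup date, source, alphabetical) is destroyed before the halves are taken.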

What Does Success Mean?

Before you send out your email versions, it’s important to decide what you’ll be testing for and what you consider success. First, look at your previous results. If you’ve been using the same email campaign style for months or years, then you’ll have a good pool of data to pull from. If your historic conversion rate is 10%, then you might want to increase that to 15% to start with.

Of course, maybe your goal with the initial A/B test is just to get more people to open the email. In that case, look at your historical open rate, and then decide how much improvement you want to see. If you don’t see that improvement with the first set of A/B tests, you might want to run another test, with two more versions.
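When you compare your result against that target, it also helps to check whether the difference between the two versions is likely real rather than random noise. One common way to do this is a two-proportion z-test; here’s a sketch using made-up counts (the 10% vs. 12% figures are hypothetical):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test: is the difference in rates likely real?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value from the normal tail
    return z, p_value

# Made-up counts: 10% vs. 12% conversion on 2,000 recipients per version.
z, p = two_proportion_z(conv_a=200, n_a=2000, conv_b=240, n_b=2000)
# A p-value below 0.05 suggests the lift is unlikely to be random noise.
```

A convenient rule of thumb: if the p-value is above 0.05, treat the test as inconclusive and either run it longer or try a bolder variation.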

Tools For Testing

Most email campaign software has built-in tools for A/B testing. Campaign Monitor and MailChimp both have such tools built in, as does ActiveCampaign.

If your email campaign software doesn’t have specific support for A/B campaigns, you can set one up manually. Simply split your current list into two separate lists, and then send one version of your email campaign to one list and the other to the other list. You’ll then need to compare results manually, though exporting your data to a spreadsheet can help with this.
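The manual comparison itself is just division. As a sketch, with hypothetical export totals for the two lists:

```python
def campaign_rates(sent, opens, clicks):
    """Open and click-through rates computed from exported campaign totals."""
    return {"open_rate": opens / sent, "click_rate": clicks / sent}

# Hypothetical totals pulled from each list's exported report.
version_a = campaign_rates(sent=5000, opens=1050, clicks=310)
version_b = campaign_rates(sent=5000, opens=1230, clicks=295)
```

Comparing the two dictionaries side by side shows where each version wins, which is exactly what a spreadsheet comparison would do.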

Analyze the Results

Once you’ve run your email campaign with the two different email versions, it’s time to take a look at the results. There are a few different categories of results you’ll want to look at:

  • The open rate
  • The click-through rate
  • The conversion rate once they’re on your website

The reasons behind tracking the first two are pretty obvious. But a lot of people might wonder why we’d want to track the conversion rate outside of the email. Wouldn’t that be beyond the control of the email itself?

Yes and no. Ideally, the email you send shouldn’t have much to do with the conversion rate once a visitor is on your website. If one email leads to 10% of readers clicking through to your website and another one leads to 15%, then the second email should result in 50% more conversions than the first one. But that doesn’t always happen.

It’s important that the message in your email is consistent with the message on your website. If you’re promising your visitors a special deal, and that deal isn’t immediately apparent on your website, you’re going to lose customers. The same can happen if your email doesn’t echo the look and feel of your website. Visitors might get confused and wonder if they’ve landed on the correct page.

Make sure you track your conversion rate from each email version to ensure that you aren’t losing sales. The end goal here is conversions, not just click-throughs. You may find that one email gets more click-throughs than the other, but that it doesn’t result in as many conversions. In that case, you’ll probably want to do more testing to see if you can get an email that not only results in higher click-throughs, but also higher conversions. Tools like those from KISSmetrics can help you track these on-site results.
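The “more click-throughs but fewer sales” scenario is easy to see when you multiply the two rates together. A sketch with hypothetical numbers:

```python
def end_to_end_rate(recipients, clicks, sales):
    """Sales per recipient: click-through rate times on-site conversion rate."""
    return (clicks / recipients) * (sales / clicks)

# Hypothetical: version A wins on clicks but loses on-site; B wins overall.
rate_a = end_to_end_rate(recipients=10_000, clicks=1_500, sales=45)  # 15% CTR, 3% on-site
rate_b = end_to_end_rate(recipients=10_000, clicks=1_000, sales=50)  # 10% CTR, 5% on-site
```

Here version B converts 0.5% of all recipients into buyers versus 0.45% for version A, despite a third fewer click-throughs — which is why conversions, not clicks, should decide the winner.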

Best Practices

Here are a few best practices to keep in mind when running an email A/B test:

  • Always test simultaneously to reduce the chance your results will be skewed by time-based factors.
  • Test as large a sample as you can for more accurate results.
  • Listen to the empirical data you’ve collected, not your gut instinct.
  • Use the tools available to you for quicker and easier A/B testing.
  • Test early and test often for the best results.
  • Only test one variable at a time for best results. (If you want to test more than one, look into multivariate testing instead of A/B testing.)

Be sure to check out the other posts in this series, too: A Beginner’s Guide to A/B Testing: An Introduction and A Beginner’s Guide to A/B Testing: Exceptional Web Copy. And we have two more posts in this series coming up, covering pay-per-click ad testing and SEO landing page testing.

About the Author: Cameron Chapman is a freelance designer, blogger, and the author of Internet Famous: A Practical Guide to Becoming an Online Celebrity.

  1. Another thing to test for is conversion rates for text vs html emails. Does the common belief that html mails generate more conversions hold true for your list as well?

    • Yes, that’s another great thing to test. In some industries where people aren’t as computer savvy, it’s necessary to provide text versions, while in others it doesn’t really matter.

  2. I’m a little surprised by the advice to A/B test on your whole list. My business has received advice to hold back the bulk of our email send until after the results of an initial A/B test sample have revealed a clear winner.
    In practical terms, we send version A to ~15% of our list; version B to ~15% of the list; and hold on to the remaining 70% of the list.
    When your email list is large enough (upwards of 10000), a 30% test send should be enough to hit the point of statistical significance – that is, you could keep testing but the trends have established themselves and any further testing is unlikely to affect the trending results in any meaningful way.
    By doing this kind of split send, it means we have a chance to send the most effective subject/artwork/call to action/etc to approximately 85% of our email database – not the 50/50 split in a straight A/B test.

  3. I think email-based A/B testing is hard, as your open rates (unless you’re testing subject lines) vary from segment to segment. One segment might have a much higher open rate, which biases the content test. Helpful guide, though!

    • It isn’t as easy, that’s true, but the suggested variations above are helpful. Just play around with it and see what happens.

  4. What amount of difference is considered statistically significant?

    For instance, in a recent email where we changed ‘from’, from a generic mailbox to a real name, we got the following results:

    Subject line: Want to boost efficiency, value, and ROI?

                      Generic mailbox   Real name
    Sent              14,673            14,673
    Delivered         99.22%            99.15%
    Unsubscribed      0.07%             0.12%
    Total Opens       21.29%            23.73%
    Unique Opens      13.34%            15.17%
    Total Clicks      6.07%             6.57%
    Unique Clicks     4.44%             5.33%
    Forwards          4.23%             4.09%

    The second column represents the ‘real name’ email, the first, the generic mailbox. Is the 2.4% open rate significant? Is a single test of this method a solid basis for trends? If not, how many repeats would be needed to solidify results?

    • I am assuming you mean 24 percent? With that volume it is statistically significant. I wouldn’t base all of your decisions off of one test, but I would start making assumptions based on that data and start performing more tests.

  5. Thanks for the article.

    What I was really trying to find out is if someone makes a standalone A/B testing tool that I can integrate into my email chain.

  6. I am looking to test email providers. I would like to send to a group of 2000. Should I A/B test the email provider with the same contacts or different contacts?
