Are You a Victim of Your Own A/B Test’s Deception?

A/B testing is all the rage, and for good reason. If tweaking your home page a bit can get you 25% more signups, who wouldn’t try it?

The best thing about A/B testing is the awesome selection of tools. Optimizely provides a live editing tool that puts page tweaks and goal tracking in the hands of marketers. Visual Website Optimizer offers a suite of interesting measurement tools, including behavioral targeting, which allows you to show different variations depending on a visitor’s actions.

[Image: Optimizely and Visual Website Optimizer]

Even with such great technology available, there are a few things to watch out for. The first is statistical significance, which has been written about enough (here, here and a mini-site here if you’re interested).

Another is the common mistake of assigning a goal that measures the short-term effect of a test rather than the long-term effect on your business. We made this mistake at Segment.io, and that’s the story I’ll be sharing in this article.

The Winning Variation Is Wrong

Usually the goal of an A/B test is to get people to take a single action on a single page. Common actions include clicking the signup button on your home page or joining an email list. Those numbers make great vanity metrics, but more visits to your signup page or a bigger email list aren’t sound business goals on their own.

The problem with the single-action approach is that it assumes a single action provides value to your business, which it usually doesn’t. Most A/B tests are done at the top of an acquisition funnel, long before visitors have proven their worth.

The goal of an A/B test should be to move the visitors who are most likely to become high-value customers from the top of your funnel to the bottom of your funnel.

How We Messed This Up

I’ll share a super simple experiment we did at Segment.io that illustrates my point. We recently ran an A/B test on our shiny new home page. Our test was simple: we created two variations of the signup button text. The control version read “Get Started,” and the variation we chose was “Create Free Account.”

Here’s what our A/B test variations looked like:

[Image: Segment.io home page]

Before long, “Create Free Account” beat “Get Started” with a 21% increase in conversions. Time to call our developers and make it permanent, right? For most people that would be the next step. But, being an analytics company, we always have an abundance of nerds around ready to dig deeper into our data.

To make analysis easier, we tagged each tested visitor with the variation they were shown. And, since Segment.io automatically sends Optimizely variations through to KISSmetrics, Customer.io and Intercom, we were able to segment out visitors who saw each variation in all of our tools.
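If your A/B testing tool isn’t forwarded automatically, you can do this tagging by hand. Here is a minimal sketch assuming analytics.js (Segment) is loaded on the page; the function name, the way you obtain the variation name, and the trait and event names are illustrative assumptions, not the exact setup we used.

```typescript
// Minimal sketch: tag the current visitor with the variation they were shown.
// Assumes analytics.js (Segment) is already loaded on the page.
declare const analytics: {
  identify(traits: Record<string, string>): void;
  track(event: string, properties?: Record<string, string>): void;
};

function tagVisitorWithVariation(experimentName: string, variationName: string): void {
  // Attach the variation as a trait so downstream tools (KISSmetrics,
  // Customer.io, Intercom, ...) can segment this visitor later.
  analytics.identify({ [`Experiment ${experimentName}`]: variationName });

  // Also record an event so funnel reports can split on the property.
  analytics.track('Experiment Viewed', {
    experiment: experimentName,
    variation: variationName,
  });
}

// Example: the test described in this article.
tagVisitorWithVariation('home page CTA', 'Create Free Account');
```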

How We Found the Real Winner

First, we looked at the immediate “next step” for visitors after they clicked on the call to action. KISSmetrics funnels were our tool of choice for this analysis. We used a simple funnel of Viewed Home Page > Viewed Signup Page > Signed Up, and split out the funnel by the “Experiment home page CTA” property. Half of the visitors at the top of the funnel had the value “Get Started” and the other half were tagged as “Create Free Account.”

[Image: KISSmetrics conversion funnel report]
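If you want to sanity-check a funnel like this outside your analytics tool, the same three-step breakdown can be computed from exported events. The record shape and helper below are hypothetical, not a KISSmetrics API; the idea is simply to split each step’s conversion by the variation property.

```typescript
// Hypothetical per-user journey records exported from your analytics tool,
// each tagged with the variation the visitor saw.
interface UserJourney {
  variation: 'Get Started' | 'Create Free Account';
  viewedHomePage: boolean;
  viewedSignupPage: boolean;
  signedUp: boolean;
}

const FUNNEL_STEPS = ['viewedHomePage', 'viewedSignupPage', 'signedUp'] as const;

// Print how many users in each variation reached each funnel step,
// with the conversion rate relative to the previous step.
function funnelByVariation(users: UserJourney[]): void {
  for (const variation of ['Get Started', 'Create Free Account'] as const) {
    const cohort = users.filter((u) => u.variation === variation);
    console.log(variation);
    let previous = cohort.length;
    for (const step of FUNNEL_STEPS) {
      const reached = cohort.filter((u) => u[step]).length;
      const rate = previous > 0 ? ((reached / previous) * 100).toFixed(1) : '0.0';
      console.log(`  ${step}: ${reached} (${rate}% of previous step)`);
      previous = reached;
    }
  }
}
```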

It turned out that the visitors who clicked “Create Free Account” were less likely to complete the signup form. This drop in signups effectively wiped away the 21% gain that button had made in our home page A/B test, which meant there was no longer a clear winner between “Create Free Account” and “Get Started.”
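When two variations end up this close, it is worth checking whether the remaining difference is statistically meaningful at all before declaring either one the winner. Here is a minimal two-proportion z-test sketch; the visitor and signup counts are placeholders, not our actual experiment numbers.

```typescript
// Two-proportion z-test: is the difference between two conversion rates
// larger than what random noise would produce?
function twoProportionZ(convA: number, totalA: number, convB: number, totalB: number): number {
  const pA = convA / totalA;
  const pB = convB / totalB;
  const pooled = (convA + convB) / (totalA + totalB);
  const standardError = Math.sqrt(pooled * (1 - pooled) * (1 / totalA + 1 / totalB));
  return (pA - pB) / standardError;
}

// |z| > 1.96 corresponds to roughly 95% confidence that the difference is real.
const z = twoProportionZ(450, 10_000, 440, 10_000); // placeholder counts
console.log(`z = ${z.toFixed(2)}, significant at 95%: ${Math.abs(z) > 1.96}`);
```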

But there was one last thing to examine: the pricing plan people ultimately chose. It turned out that visitors who clicked “Create Free Account” were much less likely to sign up for our paid plans, while those who clicked “Get Started” were much more likely to choose a paid account.

So, in the end, the real winner for us was “Get Started.”

How to Avoid This in Your Business

Watch all of your results! Be especially wary of optimizing for a single click or action. Remember, a single click usually does not provide direct value to your business. Long-term gains are always more important than short-term conversion wins.

Don’t call an A/B test based solely on an increase in clicks, opt-ins or signups. Tag visitors with the variation they saw and watch for unintended consequences of the tests you run. A full-page opt-in form might grow your email list, but what if it erodes the value of your user base?

Here’s a checklist to help you find the real winner in your A/B tests:

  1. Save test variations to user profiles with a tool like KISSmetrics.
  2. Watch the effect of each test variation all the way through your acquisition funnel.
  3. Look for unintended consequences of tests, like poor user engagement or lower referral rates.
  4. After a few months have passed, check the lifetime value and churn rate of users for each variation (a rough sketch follows this list).
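As a rough illustration of step 4, here is a minimal sketch that compares average revenue and churn rate per variation, assuming you can export per-user records tagged with the variation each user saw. The record shape and field names are hypothetical, not an API from any of the tools mentioned above.

```typescript
// Hypothetical per-user records; in practice this data would come from your
// billing system or an analytics export, tagged with the A/B test variation.
interface UserRecord {
  variation: string;     // e.g. "Get Started" or "Create Free Account"
  revenueToDate: number; // dollars collected so far
  churned: boolean;      // cancelled within the observation window
}

// Compare average revenue (a rough lifetime-value proxy) and churn rate per variation.
function summarizeByVariation(users: UserRecord[]): void {
  const byVariation = new Map<string, UserRecord[]>();
  for (const user of users) {
    const group = byVariation.get(user.variation) ?? [];
    group.push(user);
    byVariation.set(user.variation, group);
  }

  for (const [variation, group] of byVariation) {
    const avgRevenue = group.reduce((sum, u) => sum + u.revenueToDate, 0) / group.length;
    const churnRate = group.filter((u) => u.churned).length / group.length;
    console.log(
      `${variation}: avg revenue $${avgRevenue.toFixed(2)}, churn ${(churnRate * 100).toFixed(1)}%`
    );
  }
}
```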

If you have questions about how to set up any of this, I’ll be watching the comments on this post.

About the Author: Jake Peterson leads customer success at Segment.io, helping thousands of customers choose and set up analytics and marketing tools. If you’re looking for free advice, check out their Analytics Academy. Segment.io is a single, simple integration that gives you access to 70+ analytics and marketing tools with the flick of a switch. Check it out here.

  1. This is one of the dangers in marketing. If we see one test or strategy giving us a positive lift, we are immediately elated without digging further. We change things up only to find out that we were just scratching the surface. Sometimes we need to look beyond the numbers and the stats. Just like you did, we need to step back, get help and look at the big picture more thoroughly.

  2. Wouldn’t the simple answer here be to set the goal to views of the thank you page after sign up? And not just who clicked the sign up button?

    • Yup, Darcy, that makes sense. My point here was to get people thinking about more than just the initial click.

      For a lot of our A/B tests we also go even deeper to look at the effect on a user’s plan level and revenue based on which variation they saw :)

  3. It’s good to know about A/B tests. The post you shared is very useful; it’s good to know how to avoid this in our business and how to find the real winner.

  4. Before reading your article I had no idea what an A/B test was. It’s a new term for me, but your article is fantastic.

  5. A nice article; it’s always good to remind people of the bigger context when doing any kind of test!

    Your example, however, is lacking somewhat, since in the end the difference in the signup percentages (4.4% vs. 4.5%) is not significant (~67% confidence, which is in fact just noise, not data).

    I wonder why you didn’t split the “Sign Up” step into two, or, in fact, report only paid signups in that “Sign Ups” column (that’s what I thought you were reporting when I looked at the graph initially). I believe it would have presented a much clearer picture to anybody new to A/B testing and statistics who is looking to gain some knowledge from the article. The reason I bring this up is that all too often I see people jump to conclusions, thinking they are acting on data when in fact it is simply noise.
