So you’ve decided to jump into A/B testing.
And it sounds like the salvation you’ve been dreaming of. What’s not to like? Being able to test two different pages on your site to see which one gives you the most customers sounds amazing. After running these things a few times, those extra customers sure will add up!
You get everything set up and…
…realize you don’t know what to test first. And the A/B testing euphoria starts to wear off.
Right before you’re about to give up, you remember an argument you had with your designer (or was it the boss? Or the developer?) about the button color on one of your pages. They decided to go with green but you KNOW that orange is the better option. You’re so confident in your choice that you feel it in your BONES.
You decide to settle the matter once and for all.
The current green button on your site goes head-to-head against that stud of an orange button.
Everything’s set up, the test is running, it’s only a matter of time.
A week or two later, you log back into the account for your A/B testing tool, go to the test, scroll straight to the results…
The green button out-performed the orange by 0.1%. What a letdown. You don’t know what’s worse:
- The orange button didn’t perform better than the green like you predicted
- The difference was so minor that it wasn’t worth worrying about in the first place.
This happens ALL the time. Especially for people just getting started with A/B testing. You see, randomly picking elements to test produces lackluster results. The needle doesn’t move at all.
Right now, there are several big-win tests you could run that would make a HUGE difference to the growth of your company. Like 5-20% in customer growth kind of huge. These optimizations are just sitting there, waiting for you to start running A/B tests.
Finding big-wins doesn’t happen by accident.
Here’s the thing: if you A/B test all sorts of random stuff, you’ll never find these big-wins.
To find them, we’ll need a completely different process for deciding which tests to run. There’s a time and a place for testing everything we can think of (I’ll get to this in a second) but the big-wins require a completely different approach to testing.
Stage 1: Finding the Big Wins
If you’ve never tested before, you’ll find several 5-20% increases to your bottom line. I’m not talking about increases to some random conversion rate. This is an increase to your revenue and customer base. Finding a few 10% improvements to your revenue will take your business to a completely new level.
The best part? These are usually permanent increases to your customer growth. Make a single change to your business and reap the rewards for years to come.
But randomly testing all sorts of stuff on your site won’t find these big wins for you.
We need some guidance on where to start looking.
If you’re not an optimization pro who throws out A/B tests like candy, this is the process you want to use to get moving quickly.
To find the big wins with A/B testing, follow these steps:
- Get qualitative insights (customer feedback)
- Predict how to improve
- Confirm the prediction with an A/B test
Let’s work through each of them:
1. Get Qualitative Insights
Qualitative data does a great job of alerting us to problems. More importantly, it helps us learn the WHY behind the WHAT. Using analytics, you’ll see where your customers bail, which features they use, and who your most profitable customers are. But to understand why your customers are doing what they’re doing, you need to go talk to them. At Kissmetrics, here are our favorite ways to get customer feedback:
- Feedback Boxes
- Reach Out Directly
- User Activity From Your Analytics
- Usability Tests
To find the big-win optimizations, we want to continue to look for trends in the feedback we’re receiving while diving deeper into issues we think are stirring up trouble.
Let’s say you’ve been looking at your funnel and you notice that only a few people upgrade to a paid plan or purchase your product. You have a TON of people clicking on your offers but as soon as they see the price, they bail.
This is where we want to get surgical with our qualitative data.
Throw up a one-question survey on the purchase page asking people if they have any questions about the product. You could also include a support button to collect feedback. And reach out to customers that HAVE purchased and ask them why they chose to become a customer. Once you’ve gotten feedback from 15-20 people, I bet you’ll be able to find a trend in the responses. Maybe you’ve oversold your offer in your marketing. Or maybe you haven’t addressed a critical objection in your copy.
Here’s the main take-away: qualitative data helps us understand which elements will have the biggest impact when running A/B tests.
Set up your customer feedback systems so you can easily spot emerging trends. And when one pops up, dive deeper so you know exactly what’s going on.
2. Predict How to Improve
This step is pretty straightforward. You’ve already collected a bunch of qualitative data on how your customers are behaving. And you know WHY they’re behaving that way.
So it’s time to brainstorm some solutions to your problem.
Remember, this is a “prediction.” It’s just a hypothesis. It might work, it might not. And we won’t know until we get data to back it up.
3. Confirm Your Prediction With an A/B Test
Notice how the actual test comes at the END of this process, not at the beginning? By using qualitative data to help us understand what changes could be the most important, we’re setting ourselves up to find big wins with these tests.
Now it’s just a matter of testing to see if you’re right. You need to confirm that people will BEHAVE the same way they SAY they will (usually, they don’t). So get your hands on some data and run that A/B test.
Focus on finding your big wins and solving the major problems that you find from customer feedback. You can then make a huge impact on the growth of your business with a small amount of work.
But these big wins will run out. Before long, you’ll find the ideal funnel to acquire customers, the best pricing structure, and the most persuasive messaging. And if you want to keep going, you’ll have to jump into Stage 2.
Stage 2: Get Methodical and Chase the Small Wins
Most improvements from A/B testing are small wins. Each one doesn’t amount to much. But if you can find dozens or hundreds of these things, you can double your growth several times.
While it might be easy to find a small win here and there, it’s not easy to crank these out week after week. You’re going to have to commit a lot of resources to this process.
Ideally, you’ll have a team of several people that can handle the marketing, development, and design of everything. If your conversion optimization team has to constantly fight for help from the engineering or design teams, they’ll never move fast enough to make a significant impact.
Give them the resources they need to test rapidly. Speed is the name of the game.
Also, your company needs to have enough data to work with. Typically, this means you need to be acquiring hundreds of new customers every month (thousands is even better). If you’re a smaller company, you could assign a single person to this task. Just remember that it’ll take a lot longer before you find enough small wins to make a difference.
Even during Stage 2, we’re not testing stuff randomly. Instead, we’re testing EVERYTHING. Brainstorm a list of changes and start running them back-to-back. Your testing pace should be relentless.
Some Joe Schmoe blogger swears that a photo of a person looking at your headline gives great results? TEST IT. Your best friend at the hottest startup in town says a video on the homepage makes money rain from the sky? TEST IT. You just found a list of 50 elements everyone should test from an A/B testing company? TEST IT ALL.
To decide which tests to run first, throw them into a list. It really doesn’t matter which order they go in, start at the top and work your way down as fast as you can. Some will make a difference, most won’t. And there’s really no way to know beforehand. So pick fast and get moving.
Here’s a list of tests to get you started:
- Remove or add steps to your funnel
- Social proof (testimonials, customer logos, etc.)
- Calls to action
- Long-form vs. short-form
- Add and remove elements from a page
- Pricing (changing your pricing structure usually gives you a big win but smaller tests like $37 vs. $39 can also help you grow)
- Purchase/signup bonuses
- Up-sells and cross-sells
The Common Pitfalls
Many people also run into a number of pitfalls when they start A/B testing. Here’s how to dodge them:
Get Statistical Significance
When you first start getting results for your test, the data is mostly random noise. It might LOOK like version A does better but in the long run, version B is the real winner.
Flipping a coin works the same way. It’s entirely possible to get four heads in a row. But that doesn’t mean you’ll always get heads. The 50/50 split only shows up if you do hundreds and thousands of flips. Even then, it can get slanted one way or the other.
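You can see this for yourself with a quick simulation (a toy sketch; the seed and flip counts are arbitrary):

```python
import random

random.seed(1)  # fixed seed so the demo is reproducible

# Short runs of a fair coin can look badly skewed;
# only long runs settle near the true 50/50 split.
results = {}
for flips in (10, 100, 10_000):
    heads = sum(random.random() < 0.5 for _ in range(flips))
    results[flips] = heads / flips
    print(f"{flips:>6} flips: {results[flips]:.1%} heads")
```

Run it a few times with different seeds and watch how wildly the short runs swing while the long run barely moves. Your A/B test results behave the same way.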
We minimize (but can never eliminate) the odds of getting a bad result by collecting lots of data.
So how do we know that we have enough data?
Statistical significance measures how confident we are that the difference we’re seeing is real and not random noise. And we measure that confidence with a percentage.
If we say that we have a 90% confidence level, there’s roughly a 1-in-10 chance the difference is just noise. At 99% confidence, that chance drops to 1 in 100. Getting more data will steadily increase your confidence level and help you hit statistical significance.
Most people say they have statistical significance when they hit the 95% confidence level. That’s when they pick the winner and move to the next test. Don’t worry about trying to get to 99% confidence. It usually takes too long to get enough data. You’ll grow your business faster by picking the winner and focusing on the next A/B test as soon as you hit the 95% mark.
But remember, the 95% mark is arbitrary. So if you’ve got a test that’s sitting at 87% or 93% confidence and you have other tests in the pipeline, it’s okay to pick a winner and move on. Balance speed with data and don’t sacrifice one for the other.
Visual Website Optimizer has built an Excel sheet that does all the fancy math for you; you can download it here. Just plug in the results from your split test and it’ll tell you if you’ve hit statistical significance. If you haven’t, keep your test running until you do.
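If you’d rather script it than use a spreadsheet, the math behind these calculators is a standard two-proportion z-test. Here’s a minimal sketch (standard library only; the traffic numbers are made up for illustration):

```python
import math

def significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: returns the confidence level (0-1)
    that the difference between A and B isn't random noise."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pool the two samples to estimate the standard error
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = abs(p_a - p_b) / se
    # Two-sided confidence from the standard normal CDF
    return math.erf(z / math.sqrt(2))

# e.g. A: 120 conversions out of 2,000 visitors; B: 160 out of 2,000
conf = significance(120, 2000, 160, 2000)
print(f"{conf:.1%} confident the difference is real")
```

With those example numbers, the test clears the 95% mark. Swap in your own conversion counts and visitor totals to check where your test stands.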
The “New” Effect
Introducing something new to your site can impact your conversion rate just because it’s new. This is the “new” effect. What this means is that the new version could out-perform your old version initially. But over time, the performance difference can shrink or even flip. The new version might perform better this week but over the long term, your old version might be the best choice.
And in some cases, “new” will negatively impact conversion rates. This happens when you introduce changes that interrupt the habits of your users.
Let’s say you’ve used the same navigation for a while. Even if you test a version that’s truly better, conversion rates will likely drop in the short term. Once people become familiar with your site, they don’t actively look through the navigation each time; whenever they need something, they know right where to click. Anything that interrupts that habit will slow them down, at least in the short term.
So how do we deal with these pitfalls? Even if you have a massive amount of data to work with and can establish statistical significance fairly quickly, give yourself more time if you suspect “newness” might be impacting the results. A couple of weeks will do it. And if time is of the essence, look at the trend lines of your conversion rates. Are they getting closer to each other? If they are, you might be looking at the “new” effect. If they look stable and you have a solid week’s worth of data, you’re good to go.
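To make the trend-line check concrete, here’s a toy sketch with hypothetical daily conversion rates for each variant over two weeks:

```python
# Hypothetical daily conversion rates (%) for each variant over 14 days.
old = [4.0, 4.1, 3.9, 4.2, 4.0, 4.1, 4.0, 4.1, 4.0, 4.2, 4.1, 4.0, 4.1, 4.0]
new = [5.5, 5.3, 5.2, 5.0, 4.9, 4.7, 4.6, 4.5, 4.4, 4.3, 4.3, 4.2, 4.2, 4.1]

# Compare the average gap between variants in week 1 vs week 2.
gap_week1 = sum(n - o for n, o in zip(new[:7], old[:7])) / 7
gap_week2 = sum(n - o for n, o in zip(new[7:], old[7:])) / 7

print(f"Week 1 gap: {gap_week1:.2f} points, week 2 gap: {gap_week2:.2f} points")
if gap_week2 < gap_week1 * 0.5:
    print("Gap is shrinking fast -- likely the 'new' effect; keep the test running.")
```

In this made-up data, the new variant’s lead shrinks sharply from week 1 to week 2. That’s exactly the converging-trend-lines pattern to watch for before declaring a winner.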
Track Your Entire Funnel
When testing improvements on how you acquire customers with your marketing funnel (making a change to your homepage falls into this group), be careful about only tracking the conversion rate for the next step. On a regular basis, you’ll find something that increases the next-step conversion but LOWERS the conversion rate for the entire funnel. So if you’re not tracking your A/B tests through the entire funnel, you might slow your customer growth by accident.
You’ll need customer analytics to do this. Regular A/B testing tools like Optimizely and Visual Website Optimizer only track the next step.
Kissmetrics has integrations with Optimizely and Visual Website Optimizer to help you avoid this trap.
But there is ONE exception to this. If you’re just getting started and don’t have much data to work with, you might only be able to test changes at the top of your funnel. You won’t have enough data to go any further.
For example, you might have enough traffic to measure free-trial sign ups from your home page but not enough data to track which version gives you more paid subscribers. If you spot a potential roadblock at the top of your funnel, don’t let a lack of data at the bottom get in your way of trying to fix it.
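To see why next-step tracking alone can mislead you, here’s a toy example with made-up numbers:

```python
# Hypothetical test results: B "wins" the next step but loses the funnel.
funnel = {
    "A": {"visitors": 10_000, "signups": 500, "paid": 100},
    "B": {"visitors": 10_000, "signups": 700, "paid": 80},
}

rates = {}
for name, f in funnel.items():
    rates[name] = {
        "signup": f["signups"] / f["visitors"],  # what most A/B tools report
        "paid": f["paid"] / f["visitors"],       # what actually grows revenue
    }
    print(f'{name}: {rates[name]["signup"]:.1%} sign up, {rates[name]["paid"]:.1%} pay')
```

Version B looks like a clear winner on signups (7% vs 5%), but it quietly loses where it counts: paid customers (0.8% vs 1.0%). Only full-funnel tracking catches this.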
Data is Always Changing
What’s true today may not be true tomorrow. Markets shift, customer needs change, your business model evolves.
So if you run a test today, you might get completely different results 6 months from now. When you find the “perfect” version, it won’t stay perfect. Everything has a half-life and the only way to stay on top is to periodically refresh by running another batch of tests.
What about seasonality?
Every business experiences fluctuations throughout the year, some more than others. In some cases, seasonality is blatantly obvious. Apple’s best quarter is the holiday season. Toy companies also bring home the majority of their revenue during November and December.
But seasonality can impact our results in more subtle ways.
Take the B2B market for example. Doesn’t seem like a candidate for seasonality, right? Well, I reached out to several of our Kissmetrics customers to get feedback on a new feature we were building. Usually, I get a 75-90% response rate. But last August, only 1 out of 10 replied. I was shocked. Was it the copy in my emails? What did I say to get such bad results?
It had nothing to do with my emails, everyone was on summer vacation. Over the next 2 weeks, just about all of them got back to me once they returned to the office.
The same thing can happen to your tests too, regardless of your industry.
But don’t use seasonality as an excuse. We all love taking credit when things go well but dodge the blame as soon as things aren’t so rosy. Don’t blame seasonality unless you have strong evidence to back it up. Just keep an eye out for it.
Multivariate Tests and Other Hoopla
If you spend much time in the conversion optimization space, you’ll hear about these fancy schmancy things called multivariate tests. Basically, they let you test dozens of variables all at the same time to find the ideal landing page, home page, checkout page, etc.
Sounds great right? Here’s the rub: you need a MASSIVE amount of data before these become a viable option.
They also take a ton of time to set up and manage. Until you become an A/B testing pro and have an entire team that can hunt for optimizations around the clock, multivariate tests just aren’t worth the effort.
There are also other testing algorithms that get WAY more complicated than what most people need. Things like this will just get in the way because you’ll spend too much time trying to get started. Don’t sacrifice action for complexity.
If you’ve never run an A/B test before, you’re in Stage 1 and there are several optimizations that will grow your company by 5-20%. But to find them, you can’t just test random stuff.
Instead, use this 3-step A/B testing process:
- Use customer feedback to get your hands on qualitative data
- Predict which optimizations will make the biggest difference
- Run an A/B test to see if you’re right
Once you’ve found your big-wins, it’s time to start Stage 2 of your optimization plan. At this point, dig in for the long-haul, build a growth team, and start hunting for every little conversion increase you can find.
To make a dent in your growth, you’ll need to find hundreds of these little guys over the course of a year. To support tests at such a high frequency, you’ll need to be acquiring hundreds of customers every month, preferably thousands.
When you get into the meat of your tests, don’t forget these common pitfalls:
- Get statistical significance
- Be careful of the “new” effect
- Track your entire funnel
- Your data will always be changing
- Stay simple and don’t use things like multivariate tests unless you have a good reason
About the Author: Lars Lofgren is the Kissmetrics Marketing Analyst and has his Google Analytics Individual Qualification (he’s certified). Learn how to grow your business at his marketing blog or follow him on Twitter @larslofgren.