A/B testing can deliver big wins, but too often one failed test after another leaves ecommerce site managers frustrated and exhausted, with little gain to show for the effort. So what separates the ecommerce companies that turn A/B testing into a competitive advantage from those that don’t? The secret is in what they test and how they test.

In this article, Sean Ellis, CEO of Qualaroo and founder of Growthhackers.com, shows you what the best ecommerce sites do differently from everyone else. You’ll also learn how to put those needle-moving conversion optimization practices to work improving your website’s performance.

Whether you’re a relative beginner at A/B testing and conversion rate optimization (CRO) or running your hundredth test, you’ve probably heard about the huge wins that can result from A/B testing and conversion optimization. In fact, Amazon is hailed as the ultimate conversion optimization machine. But for every story of a big win, you know that there’s real potential for running tests that result in wasted time and resources rather than precious wins.

So how do you ensure that your A/B testing produces more wins than losses? Here are three guidelines to help ensure that your optimization efforts are not in vain…

1. Test for Impact

There’s a dangerous adage in A/B testing, taken almost as gospel, that says you should only test one thing at a time so you can understand which change affected conversion. While there are many times when you’ll want to follow this advice, if you’re looking for tests that really move the needle, you should ignore it.

Why? Because small, incremental changes produce small effects, and small effects take the longest to reach statistical significance. Running one small test after another is often a sure-fire way to eat up a lot of time without much to show for it.

If you’re looking to improve conversion rates overall, test for impact. Make bigger, more sweeping changes that will give you real data, real fast. Broad changes reach statistical significance faster than minor ones, and they give you a better shot at finding something dramatically better than what already exists.

The first step to making A/B testing work is to stop being scared of making big departures from your existing experience, and try to find what really moves the needle.
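To see why effect size dominates test duration, here’s a rough sketch using the standard two-proportion sample-size formula. The numbers are illustrative assumptions (a 3% baseline conversion rate, 95% confidence, 80% power), not figures from this article:

```javascript
// Required visitors per variant to detect a lift from conversion rate p1 to p2
// (two-sided 95% confidence, 80% power). Standard two-proportion formula;
// all inputs below are illustrative assumptions.
function sampleSizePerVariant(p1, p2) {
  const zAlpha = 1.96;   // two-sided alpha = 0.05
  const zBeta = 0.8416;  // power = 0.80
  const pBar = (p1 + p2) / 2;
  const numerator =
    zAlpha * Math.sqrt(2 * pBar * (1 - pBar)) +
    zBeta * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2));
  return Math.ceil((numerator ** 2) / ((p2 - p1) ** 2));
}

// A small tweak: 3.0% -> 3.3% conversion (a 10% relative lift)
const smallTest = sampleSizePerVariant(0.03, 0.033);
// A sweeping change: 3.0% -> 4.5% conversion (a 50% relative lift)
const bigTest = sampleSizePerVariant(0.03, 0.045);

console.log(smallTest); // tens of thousands of visitors per variant
console.log(bigTest);   // a few thousand per variant
```

With these assumptions, the small test needs on the order of twenty times the traffic of the big one, which is the whole argument for testing sweeping changes when your traffic is limited.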


2. Up Your Test Velocity

If you’re like most companies currently running A/B tests, you’re likely running a handful of tests a month. Econsultancy reports that 87% of companies doing A/B testing ran between one and five tests each month in 2013. Simply put, it’s not enough. The best companies are running hundreds of tests a month. Each one of those tests represents learning and optimization that makes them better than the competition—including you. If you’re running 60 tests a year and your competitor is running 1,200, you aren’t just losing by a little, you’re getting trounced.

The secret to success is to up the test velocity on your end. Don’t settle for five tests a month when you can do ten, and so on. While your traffic might not currently support many tests, remember that you can get results faster with bigger tests and can run tests in different parts of your funnel in parallel. For example, you can test value proposition and tag lines in Google Ads while working on a shopping cart test within the product.

In order to get your testing velocity up, you need to know what you’re going to test ahead of time. And you need someone responsible for designing those tests, and getting them ready to go. The time to figure out what to test next is not at the end of the current test.

3. Know What to Test

Stop me if you’ve heard this scenario before. Your ecommerce manager, business analyst, marketer, and product manager hold a meeting to discuss how to optimize your shopping cart or website. You all look at a pile of reports to identify pages with high bounce rates or points in your conversion funnel that have high drop-off rates. After a deep data dive you all brainstorm what to test next and what the priority should be. Sound familiar?

So what (or better, who) is missing from this picture? It’s your customers and potential customers, of course. Guessing at what to test next is expensive. Instead of guessing what the data means, validate it with website visitor feedback. You might find that what you think is the issue really isn’t, and vice versa. When you ask your visitors, you end up testing what really matters first.

To get the insights that can help you run smarter tests, simply ask the following two questions:

To visitors who did convert, ask: What almost prevented you from completing your purchase?

Ask this question on the order confirmation page or via a follow-up email survey. Use a freeform response field to ensure you’re not just confirming existing team hunches. By asking your most successful visitors what almost stopped them, you’ll identify pain points that others weren’t likely to overcome. Addressing these issues can eliminate barriers for other shoppers who weren’t quite as motivated as your current buyers, but who would still convert if you just made it easier for them to do so.

To visitors about to leave your site, ask: What stopped you from completing your purchase today?

By using on-site surveys that can detect when a visitor is about to abandon your site, you can collect real-time feedback about what is preventing them from converting.

If, for example, they didn’t buy because they weren’t comfortable with the shipping or return policy, you can run messaging and “guarantee” tests to see if those reduce abandonment.
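A common way to detect an abandoning visitor in the browser is to watch for the cursor leaving through the top of the viewport (usually heading for the back button or tab bar). Here’s a minimal sketch of that idea; `showSurvey`, the question text, and the event wiring are illustrative placeholders, not Qualaroo’s actual API:

```javascript
// Minimal exit-intent trigger: fire once when the cursor leaves the page
// through the top of the viewport. Illustrative sketch only.
let surveyShown = false;

// Pure decision logic, kept separate from the DOM so it's easy to test.
function shouldShowSurvey(clientY, alreadyShown) {
  return !alreadyShown && clientY <= 0;
}

function showSurvey(question) {
  // Placeholder: render your survey widget here.
  console.log("Survey:", question);
}

function onMouseOut(event) {
  // relatedTarget is null when the cursor leaves the document entirely.
  if (event.relatedTarget === null && shouldShowSurvey(event.clientY, surveyShown)) {
    surveyShown = true;
    showSurvey("What stopped you from completing your purchase today?");
  }
}

// Guarded so the logic above can also run outside a browser (e.g. in tests).
if (typeof document !== "undefined") {
  document.addEventListener("mouseout", onMouseOut);
}
```

Keeping the trigger condition in a pure function like `shouldShowSurvey` makes it straightforward to tune (for example, adding a time-on-page threshold) without touching the event wiring.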

Putting It All Together

When you combine these three elements (impact, velocity, and insight into what matters to your visitors), you’ll immediately start to see better returns from your A/B testing and conversion optimization efforts. You’ll eliminate inconclusive tests that suck up time and resources and leave you right where you started. Instead, you’ll start to find clear winners and losers, and tests that have a real impact on how your site performs and how your visitors convert.

When you base your testing plan on insights about what really matters to your visitors, you ensure that you’re running tests that really matter and that have the best chance of improving conversions, rather than simply guessing at what the data means. With the three pieces in place, your conversion optimization program will finally start to deliver on its promise, leading to better performance from your website and an improved user experience that addresses visitor needs.

If you want to learn more about conversion rate optimization, check out Qualaroo’s Guide to Conversion Rate Optimization. And if you’ve had any big wins recently with A/B testing, share them in the comments.


  • http://www.pmorganbrown.com/ Morgan Brown

    I’ve definitely made the mistake of trying to interpret data with the team, endlessly debating what to test and change next, without bringing the voice of the customer to the table through surveys and insights. Just asking and getting that unvarnished feedback sheds so much light on the data. That alone is worth implementing immediately.

  • Jasper Vallance

    A tip to improve the conversion rate of G+1’s on your blog. Make sure the share button is visible! It currently sits off the page as you scroll down.

  • Jorden Lentze

    Hi Erica. Interesting article. I have two questions:

1) You say: “you can test value proposition and tag lines in Google Ads while working on a shopping cart test within the product.” But how do you then make sure that both tests do not interact with or influence each other?

2) And about testing for impact: I think it is sometimes difficult to estimate the impact of small or large changes to a page. Sometimes small changes have a big impact on conversion, and sometimes big changes have a small effect, or, more disappointingly, a negative effect (of course, you learn a lot from your failures).

    I find that the quality of the hypotheses is a better predictor of results. What do you think?

    Look forward to your next article

    Kind regards,

    Jorden

    • JanessaLantz

      Jorden,

      This is a guest post from Sean Ellis, but I’ll see if I can answer this on his behalf.

1. The idea here is to test things that are different enough that they won’t interact with each other. For example, you wouldn’t want to run an A/B test on a specific call to action while you’re running an A/B test on the landing page that call to action points to. Any time you’re running multiple tests, there is a chance they’re influencing each other, but it’s a risk worth taking.

      2. Great point about the quality of the hypotheses. This isn’t something Sean went into too much in this post, but it’s absolutely important. KissMetrics has a great post about this topic here: http://blog.kissmetrics.com/winning-ab-testing-hypothesis/

      Thanks for commenting!
      Janessa