A/B testing has long been a favorite tool of growth hackers, and the practice is catching on among marketers everywhere. As companies invest more in creating a seamless online experience, they’re also willing to spend more on making sure that experience is fully optimized.

Yesterday we teamed up with the data nerds at Optimizely to talk about how companies can move toward a scientific approach to their A/B testing. If you missed it, you can download the slide deck or watch the full event here.

Here’s what you missed

Optimizely kicked things off by addressing some of the common misconceptions that people have about A/B testing:

  • Validation of guesswork (e.g., “Design thinks A, marketing thinks B. Let’s do an A/B test to see who’s right!”)
  • Consumer psychology gimmicks (e.g., “Red buttons get more people to click.”)
  • Meek tweaking (e.g., “A series of incremental improvements will grow my business.”)

A/B testing is the practice of conducting experiments to optimize your customer experience. On high-impact pages, the return on time can be huge, and more and more marketers are tapping into the power of A/B testing.

Step 1: Analyze data

Anyone who has done any amount of A/B testing knows that the disappointment doesn’t come from having your assumptions proven wrong, but rather from a high number of inconclusive tests. Asking the right questions is surprisingly difficult.

The good news is that your data can point you to the tests that will have the highest impact. Quantitative data in the form of web traffic, email marketing, order history, etc. is useful in helping you identify where your test will have the greatest impact on business results. Qualitative data in the form of user testing, heat mapping, or survey data is great for helping you identify what elements of a page should be tested.
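To make the quantitative side concrete, here’s a minimal sketch of ranking pages by where a test could have the greatest impact. The pages, traffic numbers, and benchmark rate below are all hypothetical, not from the webinar:

```python
# Hypothetical prioritization sketch: rank pages by potential impact,
# estimated as traffic volume times the gap to an assumed benchmark rate.
pages = [
    # (page, monthly visitors, conversion rate) -- made-up numbers
    ("homepage", 120_000, 0.020),
    ("pricing", 30_000, 0.045),
    ("signup", 15_000, 0.350),
]
TARGET_RATE = 0.05  # assumed benchmark, purely illustrative

def potential_lift(page):
    """Upper-bound extra conversions per month if the page hit the benchmark."""
    _, visitors, rate = page
    return visitors * max(TARGET_RATE - rate, 0)

for page in sorted(pages, key=potential_lift, reverse=True):
    print(f"{page[0]}: up to {potential_lift(page):.0f} extra conversions/month")
```

A ranking like this is only a starting point; qualitative data then tells you what on the top-ranked page is worth testing.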

Step 2: Form a hypothesis

Once you know what needs to be tested, the second step is forming a good hypothesis. A good hypothesis is made up of three parts:

  1. Variable: the element being modified
  2. Result: the predicted outcome
  3. Rationale: the assumption that will be proven wrong if the experiment is a draw or loses

Forming a good hypothesis is foundational for effective A/B testing. If you want to get into the details on this topic, it’s worth reading this post.
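As a concrete illustration, here’s one hypothetical hypothesis expressed with the three parts above. The page and reasoning are invented for this sketch, not taken from the webinar:

```python
# A hypothetical hypothesis for an A/B test, broken into the three
# parts described above: variable, result, and rationale.
hypothesis = {
    "variable": "Replace the pricing page's feature table with a short benefits list",
    "result": "More visitors click the 'Start free trial' button",
    "rationale": (
        "We assume the feature table overwhelms visitors. If the "
        "experiment is a draw or loses, that assumption is wrong."
    ),
}

for part, statement in hypothesis.items():
    print(f"{part}: {statement}")
```

Note that the rationale is the part that keeps even a losing test valuable: you still learn which assumption was false.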

Step 3: Construct an experiment

Once you know where your test will have the most impact and have determined your hypothesis, it’s time to get your hands dirty and construct an experiment. Every website test will contain at least one of these three core elements:

  • Content: what you’re saying
  • Design: how it looks
  • Tech: how it works

The most effective tests often combine all three elements.

While A/B testing is often used for simple things like copy changes, it can be used for complex business processes as well. Currently, we’re running an A/B test to identify the sales process that delivers the optimal experience for prospects.

Step 4: Evaluate results

Now, for just a little bit of Statistics 101. For every experiment you run, you want to be sure that the observed change was not due to chance, and statistical significance provides that indicator. For example, test results with 95% statistical significance carry only a 5% probability that the observed difference arose by random chance.

What this means for the tester is that significance is a matter of risk: higher confidence means a lower chance that you’ll implement the winning variation only to find that the A/B test didn’t predict actual outcomes.

If you’re running A/B tests manually, Optimizely has a handy calculator that anyone can use to analyze test results.
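The math behind a calculator like that can be sketched with a two-proportion z-test. This is a common approach, though not necessarily the exact method Optimizely’s calculator uses, and the visitor and conversion numbers below are made up:

```python
import math

def significance(visitors_a, conv_a, visitors_b, conv_b):
    """Two-proportion z-test: returns the confidence level that the
    difference between variation B and baseline A is not due to chance."""
    p_a = conv_a / visitors_a
    p_b = conv_b / visitors_b
    # Pooled rate under the null hypothesis that A and B convert equally
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value via the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return 1 - p_value  # e.g. 0.95 means 95% statistical significance

# Made-up numbers: 1,000 visitors per variation, 10% vs. 13% conversion
print(f"{significance(1000, 100, 1000, 130):.1%} significant")
```

With a small observed difference or a small sample, the same function returns a low confidence level, which is exactly the “inconclusive test” scenario described above.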

Getting your team on board with A/B testing

A/B tests focused on website optimizations will get results, but the impact of tests grows with greater investment.


Some ways to get people in your organization excited about testing (and willing to pitch in some resources) include:

  • Documenting your test results in a central repository where anyone can catch up on what your team has been learning from its tests
  • Building excitement by sharing your wins with the company. Experimentation is fun; the more you share, the more other people will care.
  • Introducing some competition by having people vote on variations. Find out who in your organization has the highest accuracy in predicting winners.

Next Steps

A/B testing is a powerful tool to improve your customer experience. Several attendees had questions about how they could keep learning about A/B testing. We recommend the following:

  • Building Your Company’s Data DNA
  • How to Use Data to Choose Your Next A/B Test
  • The Ultimate Guide to A/B Testing
  • From Testing Newbie to Conversion Optimization Beast
  • Stop Wasting Time on A/B Tests That Don’t Matter
Of course, we would also recommend scrolling back up to the top of the page and watching the webinar you missed.