A/B testing has long been a favorite tool of growth hackers, and the practice is catching on among marketers everywhere. As companies invest more in creating a seamless online experience, they’re also willing to spend more to make sure that experience is fully optimized.
Yesterday we teamed up with the data nerds at Optimizely to talk about how companies can move toward a scientific approach to their A/B testing. If you missed it, you can download the slide deck or watch the full event here:
Here’s what you missed
Optimizely kicked it off with some…
They addressed some of the common misconceptions that people have about A/B testing:
- Validation of guesswork (e.g., “Design thinks A, marketing thinks B. Let’s do an A/B test to see who’s right!”)
- Consumer psychology gimmicks (e.g., “Red buttons get more people to click.”)
- Meek tweaking (e.g., “A series of incremental improvements will grow my business.”)
A/B testing is the practice of conducting experiments to optimize your customer experience. On high-impact pages, the return on time can be huge, and more and more marketers are tapping into the power of A/B testing.
think about goals and build a strategy around them to get the most out of testing. no guessing. no gimmicks. continuous #ScienceOfTesting
— Brian Crumley (@briancrumley) July 24, 2014
A/B testing is not…a waste of your time..impossible to get right…or out of scope of your job! #ScienceOfTesting
— Jimmy (@JayCohh) July 24, 2014
Don't think of AB testing as something someone else does, think of it as a core part of what you do #ScienceOfTesting
— Roma Wilson (@blueswilson) July 24, 2014
Step 1: Analyze data
Anyone who has done any amount of A/B testing knows that the disappointment doesn’t come from having your assumptions proven wrong, but rather from high numbers of inconclusive tests. Asking the right questions is surprisingly difficult.
"Asking the right questions is hard" struggle with this daily! #scienceoftesting
— Katie Nelson (@katheliznelson) July 24, 2014
Form your A/B test hypotheses based on the data, not on guesses or gut feelings #scienceoftesting
— Tim Lamont (@TimLamont) July 24, 2014
The good news is that your data can point you to the tests that will have the highest impact. Quantitative data in the form of web traffic, email marketing, order history, etc. is useful in helping you identify where your test will have the greatest impact on business results. Qualitative data in the form of user testing, heat mapping, or survey data is great for helping you identify what elements of a page should be tested.
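To make that concrete, here’s a rough sketch of how quantitative data can point you toward where to test. It assumes you can export per-page visitor and conversion counts; the page names, column names, and numbers below are all hypothetical.

```python
# A rough sketch of the "where should we test?" question, assuming you can
# export per-page traffic and conversion counts (column names are made up).
import pandas as pd

pages = pd.DataFrame({
    "page": ["/pricing", "/signup", "/blog/ab-testing", "/features"],
    "visitors": [12000, 8000, 25000, 6000],
    "conversions": [240, 400, 125, 90],
})

pages["conversion_rate"] = pages["conversions"] / pages["visitors"]

# High traffic plus a below-average conversion rate is a reasonable proxy for
# "a win here would move the needle" -- sort by that potential upside.
median_rate = pages["conversion_rate"].median()
pages["upside"] = (median_rate - pages["conversion_rate"]).clip(lower=0) * pages["visitors"]
print(pages.sort_values("upside", ascending=False))
```

The idea is simply that a high-traffic page converting below your site’s typical rate has more upside than a low-traffic page that already converts well.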
Step 2: Form a hypothesis
Once you know what needs to be tested, the second step is forming a good hypothesis. A good hypothesis is made up of three parts (there’s a quick sketch just after this list):
- Variable: the element being modified
- Result: the predicted outcome
- Rationale: the assumption that will be proven wrong if the experiment is a draw or a loss
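If it helps to see those three parts together, here’s a minimal sketch of a hypothesis written as a structured record. The field names mirror the list above, and the example content is entirely made up.

```python
# A minimal sketch of a hypothesis record -- the fields mirror the three
# parts above; the example content is hypothetical.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    variable: str   # the element being modified
    result: str     # the predicted outcome
    rationale: str  # the assumption that's wrong if the test is a draw or loses

    def statement(self) -> str:
        return (f"If we change {self.variable}, then {self.result}, "
                f"because {self.rationale}.")

h = Hypothesis(
    variable="the headline on the pricing page",
    result="sign-ups will increase",
    rationale="visitors don't currently understand what the product does",
)
print(h.statement())
```

Writing hypotheses in this “if we change X, then Y, because Z” shape makes it obvious which assumption the test is actually challenging.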
2) Form a Hypothesis. Isolate one variable, define a result/outcome, and determine what assumption is wrong if test fails #ScienceOfTesting
— Brian Crumley (@briancrumley) July 24, 2014
Forming a good hypothesis is foundational for effective A/B testing. If you want to get into the details on this topic, it’s worth reading this post.
Step 3: Construct an experiment
Once you know where your test will have the most impact and have determined your hypothesis, it’s time to get your hands dirty and construct an experiment. Every website test will contain at least one of these three core elements:
- Content: what you’re saying
- Design: how it looks
- Tech: how it works
The most effective tests often combine all three elements.
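Whichever elements you combine, the mechanics of an experiment come down to splitting visitors consistently between variations. Tools like Optimizely handle this for you; purely as an illustration of the idea, here’s a bare-bones sketch of deterministic, hash-based bucketing (the experiment name and visitor ID are hypothetical):

```python
# A bare-bones sketch of splitting visitors between two variations. This is
# only meant to illustrate the idea of a deterministic, even split keyed on
# a visitor ID (all names here are hypothetical).
import hashlib

def assign_variation(visitor_id: str, experiment: str = "pricing-headline") -> str:
    # Hash the visitor + experiment so the same visitor always sees the same
    # variation, and different experiments bucket independently.
    digest = hashlib.md5(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return "A" if bucket < 50 else "B"

print(assign_variation("visitor-123"))  # stable across page loads
```

The point of keying on the visitor ID is that the same person always sees the same variation, which keeps the experience consistent and the results clean.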
Let's get exampled. #ScienceOfTesting
— Weselo Gowedo (@WeseloGowedo) July 24, 2014
A/B testing is often used for simple things like copy changes:
But it can also be used for complex business processes. Currently, we’re running an A/B test to identify the sales process that delivers the optimal experience for prospects:
Step 4: Evaluate results
Now, for just a little bit of Statistics 101. For every experiment you run, you want to be sure that the observed change wasn’t simply due to chance, and statistical significance is the indicator of that. For example, a result with 95% statistical significance means there’s only a 5% probability you’d see a difference that large if there were no real difference between the variations.
What this means for the tester is that significance is a matter of risk: higher confidence means a lower chance that you’ll roll out the winning variation only to find that the A/B test didn’t predict actual outcomes. It works something like this:
If you’re running A/B tests manually, Optimizely has a handy calculator that anyone can use to analyze test results.
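If you’re curious what’s happening under the hood of a calculator like that, here’s a back-of-the-envelope sketch using a standard two-proportion z-test. The visitor and conversion counts are invented for illustration.

```python
# A back-of-the-envelope version of what a significance calculator does:
# a two-proportion z-test. The visitor and conversion counts are made up.
from statistics import NormalDist

def significance(visitors_a, conversions_a, visitors_b, conversions_b):
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the "no real difference" assumption.
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = (p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
    z = (p_b - p_a) / se
    # Two-sided p-value: the chance of a gap this large if nothing changed.
    return 2 * (1 - NormalDist().cdf(abs(z)))

p = significance(visitors_a=5000, conversions_a=250,   # 5.0% baseline
                 visitors_b=5000, conversions_b=310)   # 6.2% variation
print(f"p-value: {p:.3f}  (significant at 95% if below 0.05)")
```

In this made-up example the p-value comes out well under 0.05, so the lift would clear the 95% significance bar.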
Getting your team on board with A/B testing
A/B tests focused on website optimizations will get results, but the impact of testing grows with greater investment.
Some ways to get people in your organization excited about testing (and willing to pitch in some resources) include:
SO important: When you are testing be sure to put results and tests in a central repository so others can learn #ScienceOfTesting
— Jimmy (@JayCohh) July 24, 2014
Next Steps
A/B testing is a powerful tool to improve your customer experience. Several attendees had questions about how they could keep learning about A/B testing. We recommend the following:
Of course, we would also recommend scrolling back up to the top of the page and watching the webinar you missed. These people agree.
#ScienceOfTesting – doing a webinar on A/B testing and just cleared up so many little things for me.. Info like this is golden @RJMetrics
— Marli Espinales (@Marli_E) July 24, 2014
Love hearing about @RJMetrics split testing their sales process #scienceoftesting
— Steve Mayernick (@EhStayBan) July 24, 2014
4) Analyze Results… gotta bounce early from #ScienceOfTesting, but great stuff so far!
— Brian Crumley (@briancrumley) July 24, 2014
@rjmetrics I lied… still tuned in 🙂
— Brian Crumley (@briancrumley) July 24, 2014
@RJMetrics @Optimizely Great presentation! #ScienceofTesting
— Jamie Byrd (@ReliantAcademy) July 24, 2014