A/B Testing

Good data analysis is the search for cause: attempting to uncover why something happened. Traffic to the website is low—why? Our email click-through rate is improving—is it because we recently redesigned our email template, or because we’re focusing on more direct calls to action? The best way to find these answers is to rely on the same approach that scientists have used for centuries—experimentation.

As technologist Scott Brinker advises: “Experimentation is the gold standard of causation.” A thoughtfully crafted experiment allows you to zero in on the variables that influence your data. Instead of retroactively analyzing your data, you isolate your assumption and design an experiment that will allow you to test it. These tests start with a hypothesis.

State your hypothesis

A hypothesis is a predictive statement, not an open-ended question. A good A/B testing hypothesis will invite you, through research, to identify a potential solution. Let’s look at an example of an experiment that RJMetrics ran on their website.

In a pricing page experiment, RJMetrics’ hypothesis was informed by qualitative data on how visitors were interacting with the web page. They used Crazy Egg to produce a heat map that showed high and low-activity parts of the page:

[Heat map of the RJMetrics pricing page, showing high- and low-activity areas]

Stephanie Liu, front-end developer at RJMetrics and Optimizely’s Testing Hero of the Year, crafted the following hypothesis:

My hypothesis was that moving the button into the white hot scroll map area would cause the design to have a higher conversion rate as compared to the original pricing page. More people would pay attention to the button simply because their eyes would be lingering there longer.

Here’s her original version:

[Screenshot: the original pricing page]

Here’s her variation:

[Screenshot: the pricing page variation with the relocated button]

Stephanie’s experiment proved her hypothesis correct: the new design resulted in a 310% improvement in conversions on the pricing page—a staggering win, due to diligent use of data and a well-formed hypothesis.

The Inspectable Elements of a Hypothesis

Let’s boil down a hypothesis to its individual components. Data fits into the hypothesis framework in a number of areas.

“If _____ [Variable] _____, then _____ [Result] _____, because _____ [Rationale] _____.”

Variable: A website element that can be modified, added, or taken away to produce a desired outcome.

Use data to isolate a variable on your website that will have an impact on your performance goals. Will you test a call to action, visual media, messaging, forms, or other functionality? Website analytics can help to zero in on low-performing pages in your website funnels.

Result: The predicted outcome. (More email sign-ups, clicks on a call to action, or another KPI or metric you are trying to affect.)

Use data here to determine what you’re hoping to accomplish. How large is the improvement that you’re hoping for? What is your baseline that you’ll measure against? How much traffic will you need to run an A/B test?
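A quick way to sanity-check that last question is a standard two-proportion sample-size calculation. The sketch below uses only Python’s standard library; the 4% baseline conversion rate and 10% relative lift are illustrative assumptions, not figures from any experiment described here.

```python
from statistics import NormalDist

def visitors_per_variation(baseline, lift, alpha=0.05, power=0.8):
    """Approximate sample size per variation for a two-proportion A/B test.

    baseline: assumed conversion rate of the control (e.g. 0.04 for 4%)
    lift: relative improvement you hope to detect (e.g. 0.10 for +10%)
    """
    p1 = baseline
    p2 = baseline * (1 + lift)                       # rate we hope the variation reaches
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)    # two-sided significance threshold
    z_beta = NormalDist().inv_cdf(power)             # desired statistical power
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

# Example: 4% baseline conversion rate, hoping to detect a 10% relative lift
print(visitors_per_variation(0.04, 0.10))
```

Smaller expected lifts and lower baselines drive the required traffic up quickly, which is why the baseline and the size of the hoped-for improvement belong in the hypothesis from the start.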

Rationale: Demonstrate that you have informed your hypothesis with research. What do you know about your visitors from your qualitative and quantitative research that indicates your hypothesis is correct?

Use data here to inform your prediction: qualitative insights can be very helpful in formulating the “why.” Your understanding of your customers’ intent and frustration can be enhanced with an array of tools like surveys, heat maps (as seen above), and user testing to determine how visitors interact with your website or product.

Strengthening your Hypothesis

Not all hypotheses are created equal. To ensure that your hypothesis is well-composed and actionable, use the tips that follow. First, here are some examples of strong and weak hypotheses:

Strong hypothesis: “If the call-to-action text is changed to ‘Complete My Order,’ the conversion rates in the checkout will increase, because the copy is more specific and personalized.”

This hypothesis is strong because it names a specific variable to modify (the CTA text) and a rationale that indicates an understanding of the audience for the page.

Weak hypothesis: “If the call-to-action is shorter, the conversion rate will increase.”

This hypothesis is weak because it is very general and does not include a rationale for why the proposed change would produce an improvement. What would be learned if this hypothesis were proven correct or incorrect?

Strong hypothesis: “If the navigation is removed from checkout pages, the conversion rate on each step will increase because our website analytics shows portions of our traffic drop out of the funnel by clicking on these links.”

This hypothesis is strong because it is supported by website analytics data that highlights a high-impact opportunity for streamlining the checkout process.

Weak hypothesis: “If the checkout funnel is shortened to fewer pages, the checkout completion rate will increase.”

This hypothesis is weak because it is based on the assumption that a shorter process is better, but does not include any qualitative or quantitative data to support the prediction.

A strong hypothesis is:

Testable. Can you take action on the statement and test it? Keep your predictions within the scope of what can be acted upon. Avoid pulling multiple variables into the statement—a more complex hypothesis makes causation more difficult to detect. For instance, don’t change copy on multiple parts of a landing page simultaneously.

A learning opportunity, regardless of outcome. Not every experiment produces an increase in performance, even with a strong hypothesis. Everything you learn through testing is a win, even if all it does is inform future hypotheses.

That brings us to our next tips for using hypotheses:

Hypothesize for every outcome. One of our solutions partners, Blue Acorn, mentioned a hypothesis best practice that we think is fantastic. To ensure that every experiment is a learning opportunity, think one step ahead of your experiment. What would you learn if your hypothesis is proven correct or incorrect, whether the variation wins, loses, or the test ends in a draw?

Build data into your rationale. You should never be testing just for the sake of testing. Every visitor to your website is a learning opportunity; that is a valuable resource that shouldn’t be wasted. RJMetrics recently wrote a tutorial on how to use data to choose and prioritize your tests; you can check it out on the Optimizely blog.

Map your experiment outcomes to a high-level goal. If you’re doing a good job choosing tests based on data and prioritizing them for impact, then this step should be easy. You want to make sure that the experiment will produce a meaningful result that helps grow your business. What are your company-wide goals and KPIs? Increasing order value, building a revenue stream from existing customers, or building your brand on social media? If your experiments and hypotheses are oriented towards improving these metrics, you’ll be able to focus your team on delving into your data and building out many strong experiments.

Document your hypotheses. Many website optimization experts document all of the experiments they run on their websites and products. This habit helps to ensure that historical hypotheses serve as a reference for future experiments, and it provides a forum for documenting and sharing the context for all tests, past, present, and future.
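One lightweight way to do this is to keep each hypothesis as a structured record that mirrors the If/Then/Because framework above. The sketch below is a hypothetical illustration in Python; the field names are our own and are not tied to any particular testing tool.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Hypothesis:
    """One entry in an experiment log, following the If/Then/Because framework."""
    variable: str      # the element being changed
    result: str        # the predicted outcome (KPI and direction)
    rationale: str     # the data or research behind the prediction
    outcome: str = ""  # filled in after the test: win, loss, or draw, plus what was learned
    logged_on: date = field(default_factory=date.today)

    def statement(self) -> str:
        return f"If {self.variable}, then {self.result}, because {self.rationale}."

# Example entry, using the strong CTA hypothesis from earlier
log = [
    Hypothesis(
        variable="the call-to-action text is changed to 'Complete My Order'",
        result="checkout conversion rate will increase",
        rationale="the copy is more specific and personalized",
    )
]
print(log[0].statement())
```

Even a simple log like this makes it easy to revisit past rationales and outcomes when planning the next round of tests.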

Now, Build Your Own

A hypothesis is a requirement for anyone running A/B tests and experiments on their website. When you build your own hypotheses, remember to:

  1. Clearly define the problem you’re trying to solve, or the metric you’re looking to improve
  2. Bring quantitative and qualitative data into the hypothesis
  3. Pressure-test the hypothesis to strengthen it and ensure it is actionable
  4. Look at every experiment as a learning opportunity

If you need some extra help, check out our ebook, Building your Company’s Data DNA, for more tips on how to build data-driven hypotheses.

Mobile App Changes

Getting found in the iOS App Store is a challenge, with more than one million active apps vying for users’ attention. App publishers and developers have a number of obvious marketing tools at their disposal, like advertising and pay-per-download, to get more people to notice their mobile apps. But these are costly and not for everyone.

Beyond the obvious advertising tools, the iOS App Store has another, often overlooked, way to promote discovery: app price changes. When a publisher or developer lowers the price of a paid app, it gets added to Apple and third-party RSS feeds that are distributed to thousands of sites and Twitter feeds focused solely on promoting apps that have gone on sale or have recently become free.

How it works

This marketing tool, more akin to merchandising, requires little to no budget but, according to our analysis of all iOS apps during most of 2013, has a significant impact on positioning in Apple’s Top Paid and Top Grossing ranks. This directly translates into better visibility, more downloads, and more revenue.

In fact, as can be seen in the graph below, compared to paid apps that never changed their prices, paid apps that made such changes (both increases and decreases) grew the average number of days they were ranked by 21% in Top Paid (+9 days) and 70% in Top Grossing (+16 days). These apps also improved their average rank by 20% in Top Paid (-45 positions) and 19% in Top Grossing (-46 positions). These improvements were not only for the most popular iOS apps, but also for less established new apps and poorly performing apps that have been around for a while.

[Chart: the impact of price changes vs. no change]

The number of price changes, whether increases or decreases (including to $0), also matters. One or two changes during a year provide very limited improvement. But when changes are made once per month (12 total), rank and the number of days ranked improve healthily. Increase that number to one per week (52 total) or more, and that’s when developers see the largest improvements to app ranks and thus downloads.

Applying this to your app

Here are the key rules that mobile app publishers and developers should follow when developing their price marketing strategy:

Repeat Frequently

All paid apps should look to go on sale, on average, at least once per month. With the corresponding price increases, that makes 24 price changes per year. More experienced app developers and marketers can look to do more to maximize downloads, including intraday changes to target specific countries or types of users, but one sale per month is a good start for most apps.

Allow Settling Time

Price changes can take anywhere from 20 minutes to more than 15 hours to spread throughout iTunes’ storefronts (New Zealand is usually one of the first; the change then follows time zones to reach European storefronts and the US). In addition, it can take time for users to discover the new price, either directly or through a third-party site like AppShopper. So unless you are looking to make multiple price changes a day, which can be rewarding but requires constant attention and/or the right tools, most publishers should let their app’s sale breathe for 48 to 72 hours.

Focus on Down Cycles

Given the cyclical nature of downloads and ranks, price changes should generally not be made when the app is experiencing a growth spurt. Instead, the price change should be timed with an app’s slowing downloads or sagging rank.

React to Competition

If your app is a soccer app at $2.99 and EA’s FIFA 2014 goes from $4.99 to $0.99, you need to react immediately in order to protect your positioning and sales. If this example does not directly apply to you, remember that competitors are not just direct competitors. They may also be apps ranked just above you in your genre or category, or those appearing before you in key searches on iTunes.

Avoid Predictability

Varying the times, days of the week, and amounts of your price changes will help you avoid predictability that could be exploited by both competitors and users.

Test Often

Every price change should be an opportunity to test a new price and new price steps. That may not always be possible if you are at $0.99 and going free, but even then you should be testing various target prices (the price you go to after a sale). Here are examples of variations in price changes, followed by a sketch of how such a schedule might be put together:

  • The price of your app is lowered to varying tiers in 1 or 2 steps (e.g. $3.99 -> $0.99 or $3.99 -> $0.99 -> $0.00)
  • Then the price is increased in 1-3 steps (e.g. $0.99 -> $3.99, $0.99 -> $4.99 -> $3.99, $0.99 -> $1.99 -> $3.99)
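To tie the rules above together, here is a minimal sketch of a randomized monthly sale schedule. Everything in it is an assumption for illustration: the price tiers, the 48-to-72-hour sale window, and the randomized day and hour used to avoid predictability.

```python
import random
from datetime import datetime, timedelta

REGULAR_PRICE = 3.99        # assumed everyday price tier
SALE_PRICES = [0.99, 0.00]  # assumed sale tiers to rotate through

def monthly_sale_schedule(year, seed=None):
    """Generate one sale per month with a varied start day, start hour, and sale price."""
    rng = random.Random(seed)
    schedule = []
    for month in range(1, 13):
        start = datetime(year, month, rng.randint(2, 25), rng.choice([6, 12, 18]))
        duration = timedelta(hours=rng.choice([48, 60, 72]))  # let the sale breathe
        schedule.append({
            "drop_to": rng.choice(SALE_PRICES),  # sale-tier price for this cycle
            "restore_to": REGULAR_PRICE,         # target price to return to after the sale
            "starts": start,
            "ends": start + duration,
        })
    return schedule

# Preview the first three sales of the year
for sale in monthly_sale_schedule(2014, seed=1)[:3]:
    print(sale)
```

In practice you would also vary the target price you restore to, per the testing advice above, and time individual sales around down cycles and competitor moves rather than following the generated dates blindly.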

Pricing changes are a simple, effective way to get your app in front of people. You can make these changes yourself, or if you’re looking for some extra assistance, talk to us. The Loadown can help you automate this exact type of optimization.

Optimize

“Always be testing” should be the mantra of every ecommerce store. Incremental improvements to your homepage, product pages, or click-through rates have a snowball effect on your bottom line. Today’s guest post is from Sean Ellis, CEO of Qualaroo and founder of GrowthHackers.com. Sean has held marketing leadership roles with companies including Dropbox, LogMeIn, Uproar, and Eventbrite. He literally wrote the guide to conversion rate optimization. Read on to hear what Sean has to teach you about optimizing conversion rates to find sustainable ecommerce growth.

Growing an ecommerce business is hard. But what if I told you that the answer to your growth challenges is right in front of you? Conversion rate optimization is critical for any business, but nowhere more so than in ecommerce, where each conversion improvement results in an immediate improvement in sales.

But CRO can also be a frustrating, fruitless practice, leading many ecommerce managers to abandon it in search of other opportunities for acquiring new visitors. In my experience, CRO is the most powerful lever you have to improve your ROI and overall site performance. It has the ability to turn unprofitable traffic into profit centers, and delivers sustainable growth that compounds itself over time.


Repeat Customers

We talk about repeat purchases a lot on this blog. We talk about them so much that today we’re thrilled to have Nima Patel from Lettuce talk about them for us. From the company that makes order management fun, here are two strategies to get more repeat customers.

By now you’ve heard that repeat customers are an incredibly valuable segment for any ecommerce business and should not be ignored. Although that’s easy to understand at a high level, getting your customer base to buy more often isn’t so simple to implement. Monetary and points-based rewards programs are standard, but what if we told you that there are ways to earn deep customer loyalty without resorting to generic financial incentives?

It’s definitely possible.


The post below was submitted to us by nomorerack, a fast-growing online shopping destination with an avid team of RJMetrics users. To see what RJMetrics can do for you, get started with our 30-day free trial today.

At nomorerack.com, our goal is to be the go-to online shopping destination for those who want quality brand name apparel and accessories for up to 90% off retail. A key to achieving that goal is having a deep understanding of our customers’ behavior.

In this post, we outline our methods for maintaining a consistent, deep understanding of our customer base that evolves with our data.

Quest for Customer Insights

Our long-term success is strongly dependent on client satisfaction. We’re focused on making sure that our customers keep coming back, refer their friends and help our community grow.

To better understand our customer base, we wanted to track important metrics and analyses like revenue per user (RPU), time between purchases, and cohort analysis. It was critical to us that we be able to access these metrics on the fly as our data changed and segment them by things like acquisition source. Understanding the returns we see from different channels is critical because it tells us which avenues are most effective and where we should be directing our resources.

To address these needs, we went looking for an analytical tool that allows non-technical team members to pull frequently updated reports and run queries via a simple user interface. It was also important that we get up and running as quickly as possible. We looked to the cloud.

Cloud Business Intelligence

A quick search led us to RJMetrics, which provides hosted data analytics software. We reached out to them and signed up for a free 30-day trial, during which we set out to measure key metrics like RPU, lifetime revenue (LTV), and repeat purchase patterns.

Vishal Agarwal, our Director of Business Development, signed up for RJMetrics on a Friday and was running these critical reports by Tuesday of the following week. By Thursday, our whole team was trained on RJMetrics’ system. Within a week of signing up, we were already saving many hours that were previously spent on report generation and data exploration.

Another unexpected plus came as a result of RJMetrics’ experience in working with e-commerce companies like ours. RJMetrics has developed a suite of best-practices metrics that are readily available out-of-the-box. Through cohort analysis, we are able to group customers by their registration dates and analyze their subsequent purchases over time on a single chart. This exercise was brand new to our team and would have taken us hours to build in Excel.
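For readers who want to reproduce this kind of cohort view on their own data, here is a minimal pandas sketch. The orders table, its column names, and the figures in it are hypothetical; nothing here reflects RJMetrics’ actual implementation.

```python
import pandas as pd

# Hypothetical orders table: one row per order
orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 3],
    "order_date": pd.to_datetime(
        ["2010-11-05", "2010-12-20", "2010-11-18", "2011-01-09", "2010-12-02"]),
    "revenue": [40.0, 35.0, 60.0, 55.0, 25.0],
})

# Cohort = the month of each customer's first order
orders["cohort"] = orders.groupby("customer_id")["order_date"].transform("min").dt.to_period("M")
orders["order_month"] = orders["order_date"].dt.to_period("M")
orders["months_since_first"] = (orders["order_month"] - orders["cohort"]).apply(lambda d: d.n)

# Revenue by cohort and months since acquisition, one row per cohort
cohort_revenue = (orders
                  .groupby(["cohort", "months_since_first"])["revenue"]
                  .sum()
                  .unstack(fill_value=0))
print(cohort_revenue)
```

Each row of the resulting grid shows how much a given acquisition cohort spent in each month after signing up, which is exactly the single-chart view described above.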

We knew this subjectively, but the cohort chart confirmed that we had an amazingly loyal customer base. Customers acquired in November 2010 have continued to spend the same amount with us month over month, right up to the present. This was extremely encouraging evidence that our customers love our products and are far more valuable than just the amount of their first purchases.

While we were very focused on acquiring new subscribers, what was very surprising was that 70% of our revenue consistently came from existing customers.

RJMetrics also helped us optimize marketing dollars. Their “repeat purchase probability” and “average time between purchases” metrics helped us in planning email triggers and targeting specific audiences within our user base. We also learned that only 5% of any given day’s sales came from users who registered on the same day, which encouraged us to place increased focus on converting new users.
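As a rough illustration, a metric like “average time between purchases” can be derived from the same kind of hypothetical orders table used in the cohort sketch above; again, the data and column names are our own assumptions.

```python
import pandas as pd

# Hypothetical orders table (same shape as in the cohort sketch above)
orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 3],
    "order_date": pd.to_datetime(
        ["2010-11-05", "2010-12-20", "2010-11-18", "2011-01-09", "2010-12-02"]),
})

# Days between consecutive orders, per customer (each customer's first order has no gap)
gaps = (orders.sort_values("order_date")
              .groupby("customer_id")["order_date"]
              .diff()
              .dropna())

print(f"Average time between purchases: {gaps.dt.days.mean():.1f} days")
```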

To share these metrics internally, we leveraged the “syndicated dashboards” feature in RJMetrics. This feature allows us to share common dashboards such as “sales,” “supplier,” and “marketing” with different teams internally. This way, management can clearly communicate with key teams through one set of metrics. No more exchanging multiple emails with messy Excel spreadsheets and end-of-day reports.

New Insights Every Day

Once we started digging into our data using RJMetrics, we realized that its scope is much wider than just calculating cohorts or LTVs. RJMetrics became a one-stop shop for all of our data needs – from basic revenue reporting to the more complex analysis of ancillary data sets.

The beauty of RJMetrics is that it can incorporate any data that lives in our backend database. Every time we start tracking a new data field, RJMetrics can incorporate it into our hosted data warehouse, and we are able to start charting it in a matter of hours. For example, we just started analyzing customer surveys, and not only are we able to analyze customer satisfaction and the likelihood of repeat purchases, but we are also linking this data to the respective products and vendors. This allows us to measure company performance across suppliers, products, and deal campaigns. In other words, our customers are now actively defining what we sell.

Conclusion

We chose to rely on a third-party service to enable the analysis of our backend data and we are thrilled with the results. Rather than re-invent the wheel, we left it to the experts at RJMetrics and have been able to reap the benefits extremely quickly.