Many will tell you that A/B and multivariate testing is rewarding, even addictive, for an online business. Done correctly, it can help you learn where to fix problematic areas in your conversion funnel, iterate on designs, identify segments, and target content to those segments.
Few organizations reach this state of testing nirvana, though; some grow frustrated by the time and effort it takes to organize testing requests, appease IT, and deliver results in a meaningful amount of time. More recently, I have spoken to merchants who have fully integrated testing platforms but chose to stop or limit testing because of these challenges.
What can you do to avoid testing stagnation? Consider a few tips:
Frame the question you’re trying to answer and prioritize your goals.
75% of the time when I ask a stakeholder “Why do you want to test?” the answer I hear is “To raise our site conversion rates and revenue per visitor.” This is great, but as the optimization guru of your org, you need to be intimately aware that your conversion rate and RPV ultimately reflect numerous factors, including:
- Your Site Design: buttons, colors, layout, forms, etc.
- Your Pricing/Promotions: free shipping, cross-sell, up-sell, etc.
- Your Relevance to the Visitor: recommendation algorithms, category affinity, search keywords, seasonality, geo-location.
- Your Competition: price competition, promotions they run, etc.
Suggestion:
Take a deep look into each of these areas and guide your teams to develop ideas about what will improve the customer’s experience and help you compete more aggressively with your peers in each category. Turn to analytics data, surveys, your co-workers, support teams, or (my new favorite) focus groups and usability-lab input from consumers to figure out where you can improve the customer experience.
Be aware of mobile and tablet users and design for them in your checkout flow. PayPal’s Mobile Express Checkout is a great example of an interface optimized for touch instead of type.
Develop clear hypotheses with measurable results
Once you identify opportunities to improve the site, brainstorm solutions and decide how you will measure whether each change is successful. Hint: it’s not always your site conversion rate or revenue per visitor.
Here are a couple of real-life examples to illustrate:
- Before: By improving the layout of our cart page, we will increase our conversions.
- After: By moving our call to action above the fold on the cart page, we will increase click-through rates from the cart page to the next page in our checkout funnel.
- Before: By improving our home page navigation bar, we will increase conversions.
- After: By improving the positioning and color of our navigation bar, we will improve discoverability and thereby reduce the bounce rate on our home page.
Note: In each case we are not focused on the site conversion rate as our primary success metric; rather, we focus on what Avinash Kaushik has termed a micro-conversion.
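To make that concrete, here is a minimal sketch of computing the two micro-conversions from the examples above (cart-page click-through rate and home-page bounce rate) from a raw event log. The log format and field names here are hypothetical; your analytics or testing tool will expose its own equivalents.

```python
# A minimal sketch of computing micro-conversion metrics from raw page-view
# and click events. The event log and its field names are hypothetical.

from collections import defaultdict

events = [  # (visitor_id, page, action) -- illustrative sample data
    ("v1", "cart", "view"), ("v1", "cart", "cta_click"),
    ("v2", "cart", "view"),
    ("v3", "home", "view"), ("v3", "home", "nav_click"),
    ("v4", "home", "view"),  # viewed the home page and left: a bounce
]

views, clicks = defaultdict(set), defaultdict(set)
for visitor, page, action in events:
    if action == "view":
        views[page].add(visitor)
    else:
        clicks[page].add(visitor)

# Micro-conversion 1: cart-page click-through rate to the next funnel step.
cart_ctr = len(clicks["cart"]) / len(views["cart"])

# Micro-conversion 2: home-page bounce rate (viewed but never clicked).
home_bounce = len(views["home"] - clicks["home"]) / len(views["home"])

print(f"Cart CTR: {cart_ctr:.0%}, Home bounce rate: {home_bounce:.0%}")
```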
Suggestion:
Be able to clearly articulate the changes you are making and how you are gauging success for each test. This sets clear expectations with stakeholders about how your goals are measurable and attainable. Lastly, don’t obsess over site conversion and RPV; by making enough small changes rapidly, you can make a big impact down the line.
Overall site conversion is important, but it will be very challenging to make a change on the homepage and have it produce a statistically significant impact on your site conversion rate, because of all the room for abandonment between the homepage and order completion. If, on the other hand, I use a more homepage-centric metric like bounce rate or overall engagement, it may be a more accurate predictor of user preference.
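To put rough numbers on this, the standard two-proportion sample-size formula shows the gap. The baselines below (a 2% site conversion rate and a 40% home-page bounce rate) are illustrative assumptions, not benchmarks:

```python
# A back-of-the-envelope sketch using the standard two-proportion
# sample-size formula (two-sided alpha = 0.05, power = 0.80).
# Baseline rates are illustrative assumptions, not real data.

Z_ALPHA, Z_BETA = 1.96, 0.8416  # z-scores for 95% confidence, 80% power

def visitors_per_arm(p_base: float, relative_change: float) -> int:
    """Approximate visitors needed per arm to detect the given change."""
    p_new = p_base * (1 + relative_change)
    variance = p_base * (1 - p_base) + p_new * (1 - p_new)
    delta = p_new - p_base
    return round((Z_ALPHA + Z_BETA) ** 2 * variance / delta ** 2)

# Detecting a 5% relative change in each metric:
print(visitors_per_arm(0.02, +0.05))  # 2% site conversion: ~315,000 per arm
print(visitors_per_arm(0.40, -0.05))  # 40% bounce rate:    ~9,300 per arm
```

Under these assumed baselines, the same 5% relative change takes over 30 times as much traffic to detect in site conversion as in bounce rate, which is why page-level micro-conversions pay off for upper-funnel tests.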
Clear the way so you can execute and analyze rapidly.
We sometimes have a testing vendor make changes needed for an A/B test on a merchant site, only to have the test code rendered obsolete by other changes pushed out by a different team. This kind of organizational disconnect is frustrating but also dangerous: if a site redesign rolls out in the middle of testing, it can drastically alter consumer behavior, impacting metrics and potentially hijacking the JavaScript code being used for the test.
Suggestion:
Create an execution framework that allows you to test and iterate quickly with minimal distractions. Consult with others making changes to your testing areas to carve out windows of time during which site pages remain stable, and to ensure that your tests don’t interfere with peak-season traffic or promotions.
Secondly, get your hands dirty and learn how to pull a report from your testing tool. Learn what confidence intervals and statistical significance are if you don’t already! Create as many segments as possible for a test and learn how to quickly scan your tool to note whether they are significant. Check in on your tests at least 2-3x a week to look for abnormalities. Adobe has a nice mobile app for Test&Target that I have used in the past and recommend.
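If you want to sanity-check what your tool reports, the core math is approachable. Below is a minimal sketch of a two-proportion z-test with a 95% confidence interval for the lift; the visitor counts are invented for illustration, and this is generic statistics rather than any particular vendor’s exact methodology:

```python
# A minimal sketch of the significance math a testing tool reports:
# a two-proportion z-test plus a 95% confidence interval for the lift.

from math import sqrt, erf

def z_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se_pooled = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se_pooled
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    # 95% CI for the absolute difference, using the unpooled standard error:
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    diff = p_b - p_a
    return p_value, (diff - 1.96 * se, diff + 1.96 * se)

# Control: 4,000 of 10,000 visitors clicked through; variant: 4,300 of 10,000.
p_value, ci = z_test(4000, 10_000, 4300, 10_000)
print(f"p-value: {p_value:.4f}")  # significant if below 0.05
print(f"95% CI for lift: {ci[0]:+.3f} to {ci[1]:+.3f}")
```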
Learn from your results – ask “Why?”
A test generating a 15% lift is a wonderful thing, but what if your test underperforms by 15%? Was it a bad idea? Not necessarily…
About the Author
John Amir-Abbassi is a Senior Researcher at PayPal and a former Adobe Test&Target consultant who has worked with over 25 mid- to enterprise-level e-commerce businesses to successfully execute optimization efforts that leverage A/B and multivariate testing tools.