By Eric Hansen
Founder and CEO, SiteSpect
A common disappointment among companies deploying testing and optimization technology stems from tests that do not yield the radical improvements expected. Has this happened to you? Working with our customers, we've seen that sometimes even the most dramatic design changes produce no significant difference in click-through or conversion rates. While that can be surprising, much of the resulting disappointment can be eliminated by defining up front what success or failure means to you.
For example, success in testing can be measured in many different ways, depending on your goals and your organization. There is no one optimal way to measure testing success:
- For some, success is a dramatic increase in a revenue-based metric, knowing that most senior stakeholders will respond to incremental revenue;
- For others, success is a small increase in key visitor engagement metrics, knowing that a series of small gains eventually adds up;
- For still others, success is a reduction in the number of problems present throughout the site, knowing that reducing these barriers improves usability;
- For some, especially those with an increasingly dated site, success is simply being able to deploy a new look without negatively impacting existing key performance indicators.
And here’s where it gets interesting. Sometimes our customers think that because they have not experienced what they consider to be success, they have failed. But I don’t think that’s really the case, because testing powers a continual learning process about your visitors and customers.
For example, if a particular image fails to increase conversion rates, you have learned that your audience does not respond to that particular image. In this context, there is no such thing as a failure in testing, only a failure to achieve the specific defined objective.
We know that not every test can yield incremental millions of dollars in revenue for your business. Some tests will fail to produce the change desired; others will yield results, but not for your key performance indicators; and still others will simply fail to produce statistically significant differences. Even so, there are no "failures" in testing other than a failure to carefully design your tests and a failure to carefully consider what you've learned. If you have learned what doesn't work, congratulations: that's considered a success when testing.
Measuring Testing
One way to measure your testing program is to integrate test data with your web analytics. There is a very natural relationship between testing platforms and analytics applications—one is designed to drive improvements on your site and the other is designed to help quantify the value of those improvements. The combination of testing and analytics should support both optimization and an incremental learning process about your visitors and customers.
The basic integration of testing and measurement systems involves exchanging data about test participation, either through after-the-fact bulk data loading or through real-time tag transformation. The most fundamental and important data to pass is some kind of test identifier: whatever value your testing application uses to keep track of the test(s) in which your visitors are actively participating.
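To make that concrete, here is a minimal sketch, in Python, of what an after-the-fact bulk exchange of participation data might look like. The field names, file format, and the participation_record helper are illustrative assumptions only; your own testing and analytics applications will define their own identifiers and import formats.

```python
# A minimal sketch of test-participation data prepared for bulk loading into
# an analytics application. All field names are hypothetical; substitute
# whatever identifiers your testing and analytics tools actually use.
import csv
from datetime import datetime, timezone

def participation_record(visitor_id: str, test_id: str, variant_id: str) -> dict:
    """One row of participation data, keyed by the testing platform's test identifier."""
    return {
        "visitor_id": visitor_id,   # the visitor key your analytics tool recognizes
        "test_id": test_id,         # the identifier your testing application assigns
        "variant_id": variant_id,   # which variation this visitor was served
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

def write_bulk_load_file(records: list[dict], path: str) -> None:
    """After-the-fact bulk loading: write records to a file for analytics import."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["visitor_id", "test_id", "variant_id", "timestamp"]
        )
        writer.writeheader()
        writer.writerows(records)
```

The real-time alternative mentioned above would pass the same test identifier with each analytics tag or request instead of writing it to a file after the fact; the key point in either approach is that the identifier travels with the visitor's data.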
More importantly, you need to ensure that the data flows into the analytics application and can be used to create the metrics and visitor segments needed for deeper analysis of the test. Simply having an ID that indicates participation in "any test" is not enough. We recommend passing data that allows identification at the visitor or session level for each test being run. Ideally, your analytics platform will also let you load test metadata to increase the granularity at which analysis can be performed.
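Building on that, the sketch below (again with purely hypothetical names and sample data) shows participation keyed at both the visitor and session level for a specific test, plus a separate metadata table so the analytics side can report by test name and hypothesis rather than an opaque ID.

```python
# Illustrative only: participation rows carry visitor, session, test, and
# variant identifiers, and a metadata table describes each test.
test_metadata = {
    "T-1042": {
        "name": "Homepage hero image test",
        "hypothesis": "A lifestyle image will lift click-through to product pages",
        "start_date": "2012-05-01",
    },
}

participation = [
    # one row per visitor/session/test combination, not just "in any test"
    {"visitor_id": "v-001", "session_id": "s-9001", "test_id": "T-1042", "variant_id": "B"},
    {"visitor_id": "v-002", "session_id": "s-9002", "test_id": "T-1042", "variant_id": "A"},
]

def enrich(rows: list[dict], metadata: dict) -> list[dict]:
    """Join participation rows to test metadata for more readable reporting."""
    return [{**row, **metadata.get(row["test_id"], {})} for row in rows]
```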
If you’re not able to get this level of granularity, not to worry; you will still be able to benefit from testing. But keep this practice in mind as you upgrade and evolve your measurement technology, and always look for opportunities to dig deeper into your test results through analytics.
Done well, this type of integration allows the measurement team to create segments, build key performance indicators, and drill down into the activity of individual visitors based on test participation. You'll be able to evaluate your testing efforts against multiple criteria and track test-participant behavior across multiple sessions. What better way to measure success or failure?
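As one last illustration of what that drill-down could look like, here is a small Python sketch that splits a conversion metric by test and variant using the participation records sketched earlier. It is not any particular vendor's API, just an example of the kind of segmented comparison this integration makes possible.

```python
# Hypothetical example: compute conversion rate per (test, variant) segment
# by combining test-participation rows with a set of converting visitors
# taken from your analytics data.
from collections import defaultdict

def conversion_rate_by_variant(participation: list[dict], conversions: set) -> dict:
    """participation: rows with visitor_id/test_id/variant_id, as sketched above;
    conversions: visitor_ids that converted, according to analytics."""
    seen, converted = defaultdict(int), defaultdict(int)
    for row in participation:
        key = (row["test_id"], row["variant_id"])
        seen[key] += 1
        if row["visitor_id"] in conversions:
            converted[key] += 1
    return {key: converted[key] / seen[key] for key in seen}

# Example usage with made-up data for a hypothetical test "T-1042"
rates = conversion_rate_by_variant(
    [{"visitor_id": "v-001", "test_id": "T-1042", "variant_id": "B"},
     {"visitor_id": "v-002", "test_id": "T-1042", "variant_id": "A"}],
    conversions={"v-001"},
)
print(rates)  # e.g. {("T-1042", "B"): 1.0, ("T-1042", "A"): 0.0}
```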
About the Author
Eric is the CEO and founder of SiteSpect, and the chief architect of the firm’s non-intrusive technology for multivariate testing, behavioral targeting and digital marketing optimization. He is a frequent speaker at conferences covering web analytics and optimization, and writes regularly on topics dealing with the intersection of marketing and technology.
Meet and Learn from Eric!
Eric will be sharing more practical insights on testing and analytics at Conversion Conference 2012 on June 25th and 26th in Chicago. Join him in his session on “Testing+Analytics: A Detailed Formula for Anticipating User Behavior.” See the full agenda and read more about this session. You can also follow Eric on Twitter for some pre-conference networking.
Save $100 when you register with Eric’s discount code CH12308. Take advantage of regular prices before they go up on June 22 – Register now!