3 Little Known Problems That Invalidate Your Testing Data

You may have validity problems built into your testing process and not even know it. Managing an A/B testing process is challenging. Optimizers work hard on getting insights, validating insights with analysis, developing treatments for web pages, and finally testing. In this post I’m going to tell you about common validity issues that are an inherent part of running a test (unless you plan for them and prevent them).

Starting Tests Incorrectly and Interrupting a User’s Buying Experience

A lot of people will simply start a test once they’ve developed the pages/elements they want to test. This is the wrong way to do it. I can hear you now… “Wait, what!?” Let me explain why you don’t want to “simply start” your tests…

When you just start a test, you’re interrupting a user’s buying experience. It’s pretty common for people to visit your site 3-5 times before they complete a purchase. Those 3-5 visits (could be more, could be less) typically take place over multiple days. Say a user comes to your site on a Wednesday night and sees the control version of your homepage. Then you start a test with a radical redesign of the homepage. When that user returns Thursday morning, they have a completely different experience from the one they expected. This can frustrate the user and skew the results of your test.

We definitely don’t want to be making decisions on data that’s been biased this way.
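One way to avoid this is to only bucket visitors whose first visit happens after the test launches, so anyone already mid-buying-cycle keeps seeing the control. Here’s a minimal sketch of that idea in Python, assuming server-side assignment; the launch date, visitor IDs, and the `bucket_visitor` helper are hypothetical, not part of any specific testing tool.

```python
import hashlib

TEST_START = "2016-06-01"  # hypothetical launch date (ISO format)

def bucket_visitor(visitor_id, first_seen_date, variations=("control", "redesign")):
    """Assign only visitors first seen on/after launch; earlier visitors
    stay on control so their in-progress buying experience isn't disrupted."""
    if first_seen_date < TEST_START:  # ISO dates compare correctly as strings
        return "control"
    # Deterministic hash so the same visitor always lands in the same variation
    digest = int(hashlib.md5(visitor_id.encode()).hexdigest(), 16)
    return variations[digest % len(variations)]

print(bucket_visitor("visitor-123", "2016-05-28"))  # always "control"
print(bucket_visitor("visitor-456", "2016-06-03"))  # "control" or "redesign"
```

The same effect can usually be achieved in your testing tool by targeting the experiment at new visitors only.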


Stopping Tests and Interrupting the Buying Cycle

The same problem occurs when you stop a test. Suppose a user is in the middle of their buying process, and the whole time they’ve been seeing a test variation on your site. Then you get the results you need and decide to stop the test. When that user, who was part of the experiment variation, comes back to the site, they no longer see what they expected to see. This can frustrate the user and create a negative experience with your brand.
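A common mitigation is “sticky” assignments: keep serving returning visitors the variation they were already bucketed into, even after you stop collecting data, and only move new visitors to the final version. Below is a minimal sketch of the idea, assuming you persist assignments yourself (e.g. in a cookie or server-side store); `assignments` and `test_collecting_data` are hypothetical stand-ins for your testing tool’s state.

```python
# Hypothetical state: assignments recorded while the test was live,
# plus a flag indicating the test has been stopped.
assignments = {"visitor-123": "variation_b"}
test_collecting_data = False

def page_version_for(visitor_id):
    """Returning visitors keep the version they were already seeing;
    visitors with no stored assignment get the default."""
    if visitor_id in assignments:
        return assignments[visitor_id]
    return "control"

def record_conversion(visitor_id, version):
    """Count a conversion toward the experiment only while it's still collecting data."""
    if test_collecting_data:
        print(f"logging conversion for {visitor_id} on {version}")

print(page_version_for("visitor-123"))  # still "variation_b" after the test stops
print(page_version_for("visitor-999"))  # "control"
```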

Conversions from Mobile Devices are Throwing Off Your Test Results

For many high-traffic sites, a large portion of their traffic (50% or more) will come from mobile devices. Often, when you set your test up to run, the traffic going into the test will include traffic from mobile devices. This is a problem. Why? Good question 🙂 Here’s why…

Even for mature companies with a good mobile buying experience, far fewer conversions typically come from mobile devices than from computers. The conversions that do come from mobile devices are randomly assigned to a test variation. Because mobile conversions are significantly fewer than desktop/laptop conversions, even a small imbalance between variations can skew test results. Let me give you an example. If 10 conversions from mobile devices are assigned to one test variation, and no conversions from mobile devices go into any other test variation… that’s enough to throw off the results of the whole test. It’s also pretty common for a testing tool to distribute traffic, and thus conversions, in a lopsided way. You want to make sure that you’re accounting for the different behaviors of mobile users, and you don’t want to let mobile device data skew your test results.
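To see how lopsided mobile conversions can flip a result, here’s a small Python sketch that segments results by device, using the kind of numbers described above. All of the figures are made up for illustration; the point is the comparison of blended vs. desktop-only conversion rates, not the specific values.

```python
# Hypothetical experiment results: 10 mobile conversions landed in one
# variation and none in the other.
results = {
    "variation_a": {"desktop": {"visitors": 5000, "conversions": 150},
                    "mobile":  {"visitors": 5000, "conversions": 10}},
    "variation_b": {"desktop": {"visitors": 5000, "conversions": 155},
                    "mobile":  {"visitors": 5000, "conversions": 0}},
}

def rate(visitors, conversions):
    return conversions / visitors

for name, segments in results.items():
    blended = rate(sum(s["visitors"] for s in segments.values()),
                   sum(s["conversions"] for s in segments.values()))
    desktop_only = rate(segments["desktop"]["visitors"],
                        segments["desktop"]["conversions"])
    print(f"{name}: blended {blended:.2%} vs desktop-only {desktop_only:.2%}")

# Blended rates suggest variation_a is winning (1.60% vs 1.55%), but the
# desktop-only segment tells the opposite story (3.00% vs 3.10%) -- a handful
# of lopsided mobile conversions flipped the apparent winner.
```

Segmenting (or excluding) mobile traffic before drawing conclusions is the simplest way to avoid this trap.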

About the Author

Daniel Gonzalez is a Conversion Rate Optimization Advisor with ConversionIQ.
Daniel has helped Silicon Valley startups increase conversions and speed up user acquisition. He’s a lean startup practitioner and applies the methodology in his approach to conversion rate optimization. He loves the “ah-ha” moments when he gets insights that reveal the reasons why people buy. He writes a blog on conversion rate “awesome-ization” at www.conversionlove.com.
