
CRO: data-driven tests are 7.7 times more effective than UX tests

Posted 21 December 2012 09:37am by Ian McCaig

The 21st century marketer needs an extensive toolkit. As well as the 'standard' skills of creativity, organisation and management, these days they also need to be web literate, social media savvy and equipped with basic data science skills. 

Amongst all of these areas of technological competence, one that is growing in importance, but is perhaps still misunderstood, is website testing.

Testing is the new intuition in site development and optimisation. Rather than relying on hunches, the modern web marketer will test potential changes to their site before deploying them, thus (we are led to believe) ensuring their efficacy.

However, if all changes are now tested, how come we don't all have perfect sites? If testing only tells us the truth, how come we still sometimes go down dead ends? 

The answer lies not necessarily in the tests, but in the ways that they're applied. We've seen thousands of testing processes run across a huge variety of sites, and what's struck us is that the issues behind unsuccessful tests are common across industries.

Good tests and bad tests

Perhaps the single most common reason for tests failing is the motivation behind how they were conceived.

We divide tests into two types. The first is the data-driven test, where you use data to understand user behaviour on the site and then form a hypothesis to test. The second is what we call a UX-driven test, where someone has an idea and decides to test it.

We knew anecdotally that UX-driven tests were generally less successful but, in the name of best practice, we decided to test this assumption. What we found surprised even us.

Data-driven tests had a true positive impact 77% of the time, a pretty decent return you could argue. UX-driven testing, however, fared less well – delivering a true positive impact just 10% of the time.

Divide one success rate by the other (77% ÷ 10% = 7.7) and the headline follows: data-driven tests are 7.7 times more effective than UX tests.

Why do tests go wrong?

Why is this? One reason is to do with organisational dynamics, in particular the dominance of HIPPOs (highest paid person's opinions).

Too often, decisions about what to change on a site are based not on a rigorous analysis of the data, but on what the highest paid person in the room thinks ought to be changed. Site owners need to recognise that data, and not subjective opinion, is what drives successful change.

Opinions, however, are by no means the only problem. The most obvious problem that many face is a lack of meaningful data. Successful testing requires access to huge amounts of well-structured information in order to ensure quality results – if you put trash in, you get trash out.

Many site owners lack this quality and scale of data because of the tools they're using and, as a result, will be stuck with misleading test outputs.

Even when you have the data, inaccuracies can still sneak into tests through poor statistical models. For example, adding variations to an A/B test can double or triple the time required to get results, and can often produce a less valid outcome unless an appropriate adjustment is made to the 'win criteria'.
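
To make the 'win criteria' point concrete, here's a minimal sketch in Python of one common adjustment, the Bonferroni correction (our choice of illustration – the article doesn't prescribe a method, and every number below is invented): with k variations compared against a control, each comparison must clear a significance bar of alpha/k rather than alpha, which is exactly why extra variations stretch test length.

```python
from scipy import stats

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided pooled z-test for the difference between two conversion rates."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (conv_b / n_b - conv_a / n_a) / se
    return 2 * (1 - stats.norm.cdf(abs(z)))   # two-sided p-value

alpha = 0.05
control = (200, 10_000)                                    # (conversions, visitors)
variants = [(230, 10_000), (215, 10_000), (240, 10_000)]   # illustrative numbers

# Bonferroni adjustment: each of the k comparisons must clear alpha / k,
# so more variations means a stricter bar and a longer test to reach it.
adjusted_alpha = alpha / len(variants)
for i, (conv, n) in enumerate(variants, start=1):
    p = two_proportion_z_test(*control, conv, n)
    verdict = "win" if p < adjusted_alpha else "not significant"
    print(f"Variant {i}: p = {p:.4f} vs adjusted alpha {adjusted_alpha:.4f} -> {verdict}")
```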

The only way to ensure that testing results are valid is by following rigorous, scientific methods – even if you put in the right ingredients, a poor recipe will produce a terrible dish!

Finally, tests are often undermined by poor processes or systems. The legacy analytics platforms that many sites have in place are simply not configured to deliver valuable insight. 

They come from an era when reporting, not understanding, was the objective of analytics, and so they can fail to deliver when pushed towards a testing role.

Even where well configured testing tools are in place, these are rarely integrated with other systems like data capture and analytics, opening up more opportunities for valid results to fall through the gaps.

Making testing work

So, how can the diligent web marketer ensure testing success? We think there are three key steps that every test needs to go through in order to maximise the likelihood of success.

1. Good tests are based on diligent analysis

Ignore the HIPPO and analyse the data to develop an optimisation hypothesis that the numbers indicate is likely to be successful. 

UX-driven tests can deliver positive results (if only 10% of the time), but that approach means that you're going to be wasting the vast majority of your testing investment.

2. Prioritise your testing

Good tests take time and resource, so make sure you're testing the most important things first. What's the scale of expected impact? Is changing the colour of a button going to have a 10% impact? 

Probably not. Focus on testing something that's going to disrupt the user journey as this could have a significant impact either way. 

As part of this prioritisation, you need to take into account the length of your test and the amount of time the changes will take. A longer test will deliver better results, but it will also consume resource and potentially delay positive changes. 

Use a testing duration calculator to optimise your testing length. 
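
If you don't have a calculator to hand, the arithmetic behind one is straightforward. Here's a minimal sketch in Python using the standard two-proportion power calculation; the baseline rate, detectable effect and traffic figures are illustrative assumptions, not figures from this article.

```python
from scipy.stats import norm

def sample_size_per_variation(p_base, mde_rel, alpha=0.05, power=0.8):
    """Approximate visitors needed per variation for a two-sided test."""
    p_alt = p_base * (1 + mde_rel)        # the lifted rate we want to detect
    z_alpha = norm.ppf(1 - alpha / 2)     # ~1.96 for alpha = 0.05
    z_beta = norm.ppf(power)              # ~0.84 for 80% power
    p_bar = (p_base + p_alt) / 2
    variance = 2 * p_bar * (1 - p_bar)    # pooled-variance approximation
    return (z_alpha + z_beta) ** 2 * variance / (p_alt - p_base) ** 2

# Illustrative inputs: 2% baseline conversion, detect a 10% relative lift,
# 5,000 visitors per day flowing into each arm.
n = sample_size_per_variation(p_base=0.02, mde_rel=0.10)
daily_visitors_per_arm = 5_000
print(f"~{n:,.0f} visitors per variation, "
      f"~{n / daily_visitors_per_arm:.0f} days at this traffic")
```

At a 2% baseline, detecting a 10% relative lift needs roughly 80,000 visitors per arm – which is why prioritising changes big enough to move the needle pays off.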

You also need to consider how long changes will take to implement – changing an entire page can be a lengthy development task and that's before you even consider things like cross-browser testing. 

Think about prioritising quick-hit wins rather than systemic changes, to maximise positive outcomes.

3. Think about ROI

Focus on the tests that increase revenue, not just pageviews (unless you're an ad-funded publisher). Clickthrough and traffic are great, but you need to prove a link to revenue for your tests to be delivering real value.
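
As an illustration of why this matters, the sketch below (with invented numbers) shows a variant that wins on clickthrough but loses on revenue per visitor – the metric that actually pays the bills.

```python
# Compare revenue per visitor, not just clickthrough. All numbers are invented.
control = {"visitors": 10_000, "clicks": 800, "revenue": 12_000.0}
variant = {"visitors": 10_000, "clicks": 950, "revenue": 11_500.0}

for name, arm in (("control", control), ("variant", variant)):
    ctr = arm["clicks"] / arm["visitors"]
    rpv = arm["revenue"] / arm["visitors"]
    print(f"{name}: CTR {ctr:.1%}, revenue/visitor £{rpv:.2f}")

# The variant 'wins' on clickthrough (9.5% vs 8.0%) but loses on revenue
# per visitor (£1.15 vs £1.20) – so it shouldn't be rolled out.
```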

Once you have these processes in place, the final part of the testing mix is to ensure that you have the technical infrastructure to hypothesise, test and deploy in one seamless cycle. You want to pilot a test using a 50:50 split and then roll it out as an always-on campaign without the need to rebuild or recode.
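
One common way to get that behaviour, sketched below as an illustration rather than any particular vendor's API, is deterministic bucketing: hash a stable visitor ID so the same visitor always sees the same variant during the pilot, then roll the winner out to 100% by moving a single threshold.

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str, rollout: float = 0.5) -> str:
    """Return 'variant' for the first `rollout` share of the hash space."""
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF   # roughly uniform value in [0, 1]
    return "variant" if bucket < rollout else "control"

print(assign_variant("visitor-123", "checkout-test"))               # 50:50 pilot
print(assign_variant("visitor-123", "checkout-test", rollout=1.0))  # full rollout
```

Real testing tools handle this assignment for you; the point is that the split is stable per visitor and the rollout is a configuration change, not a rebuild.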

Test your tests now

The rise of data-led online marketing means that testing is only going to become a more vital part of every web marketer's job. Moreover, the 21st century marketer knows that a 7.7 times increase in positive outcomes is a win that can't be ignored.

With that in mind, it's worth ensuring that the testing process you have in place is robust, rigorous and process oriented. 

Additionally, having a technology toolkit that allows you to benefit from that huge success uplift is vital. Without these assets aligned, testing could become a recipe for time-wasting and failure rather than optimisation and success.

Ian McCaig is Founder at QuBit Group and a guest blogger on Econsultancy.
