
Multivariate testing

What is multivariate testing?

Multivariate testing is a process by which any number of components of a webpage may be tested at once. In simple terms, it allows numerous A/B tests to be performed on one page at the same time. Multivariate testing can theoretically allow for a limitless number of combinations.

The only limit for the number of combinations – or the number of variables – is the amount of time it will take to get a valid sample of visitors.
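That multiplicative growth is the practical constraint: every element you add multiplies the number of variations that each need their own share of traffic. A rough sketch (the page elements and variant names here are hypothetical):

```python
from itertools import product

# Hypothetical page elements under test, each with its own variants
headlines = ["Buy now", "Limited offer"]
button_colours = ["green", "blue", "orange"]
hero_images = ["hero-a.jpg", "hero-b.jpg"]

# Every combination is a distinct page variation needing its own visitors
combinations = list(product(headlines, button_colours, hero_images))
print(len(combinations))  # 2 * 3 * 2 = 12 variations
```

With twelve variations instead of two, reaching a valid sample per variation takes roughly six times as long at the same traffic level.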


What is A/B (split) testing?

A/B testing (also known as split testing) compares the effectiveness of two versions of a web page, in order to discover which has the better response rate, sales conversion rate, and so on.

As an example, the purchase funnel on an e-commerce site is typically a good candidate. Any improvements in drop-off rates can represent additional sales.
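To decide whether version B of a funnel page genuinely beats version A, you compare the two conversion rates statistically. A minimal sketch using a standard two-proportion z-test (the visitor and sale counts are made-up figures, not data from any real test):

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Version A: 200 sales from 5,000 visitors; version B: 250 from 5,000
z, p = two_proportion_z(200, 5000, 250, 5000)
print(round(z, 2), round(p, 3))
```

Here the uplift from 4% to 5% conversion is significant at the usual 5% level, so rolling out version B would be defensible; with far fewer visitors, the same percentage difference would not be.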

Why would I want to do MVT?

Put simply, everyone wants more conversions! Every website has a purpose. Whether you're running a blog and want more subscribers to your RSS feed, or a multinational e-commerce site aiming to increase sales, you'll need to test your site. Constantly testing different aspects is the only way to stay one step ahead and increase conversions.

Steps to take when thinking about testing your site

Smashing Magazine has some interesting thoughts about multivariate testing which still stand true. They break the process down into five steps.

  1. Identify a Challenge
  2. The Hypothesis
  3. A/B or Multivariate Testing?
  4. Running the Test and Analyzing Results
  5. Learn From the Test Results


The important thing to remember is the level of detail you need from your test. Is this a sweeping change – comparing one whole design with another, for example – or a more granular test, such as checking the colour of your "add to basket" button?

Once you've done that, determine what influences conversion rate. Do your research. Collect stats. Don't just "guess".

Website test results don't always add up

So, you run a test and see an uplift of 20% in conversion. You run another, potentially unrelated test and get a 5% increase. So you roll both changes out to your customers, and you don't see a 25% increase in conversion. What went wrong?
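For a start, the arithmetic of combining uplifts isn't simple addition. Even under the generous assumption that the two effects are completely independent, they compose multiplicatively, and in practice interaction between the changes usually drags the combined result below either naive figure:

```python
uplift_a = 0.20  # 20% uplift measured in test one
uplift_b = 0.05  # 5% uplift measured in test two

naive_sum = uplift_a + uplift_b                    # what people expect
independent = (1 + uplift_a) * (1 + uplift_b) - 1  # best case: effects independent

print(f"naive sum:   {naive_sum:.0%}")
print(f"independent: {independent:.0%}")
```

The real-world result is typically below both numbers, because the two changes rarely affect disjoint sets of visitors.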

There are lots of different reasons this can happen.

  • The changes may have affected the same group of people – you need to run tests in isolation, or, if you don't, be aware that multiple tests may conflict with each other
  • Making other changes on the site – A/B tests only measure one page. If you're making other changes to the site while your tests are running, you're going to have a bad time
  • Merging changes from multiple tests – imagine running two tests. In test one, you test a blue background against a white background. In test two, you test a blue button against a white button. Blue wins both times. You wouldn't put a blue button on a blue background, would you?
  • Did you run your tests for long enough? – When you're working with a very small number of visitors, the behaviour of even one user can drastically throw off your statistics
  • Your test was wrong – check your test for bugs. Make sure that, however you're testing, you've seen all the variations and there are no problems. As a sanity check, run an A/A test and see if you get the results you expect
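The last two points can be checked together by simulating an A/A test: split identical traffic in two and watch how much "uplift" appears from pure noise at different sample sizes. A sketch (the 5% conversion rate and visitor counts are illustrative):

```python
import random

def simulate_aa(visitors_per_arm, true_rate=0.05, seed=1):
    """Both arms see the identical page; any measured lift is pure noise."""
    rng = random.Random(seed)
    conv_a = sum(rng.random() < true_rate for _ in range(visitors_per_arm))
    conv_b = sum(rng.random() < true_rate for _ in range(visitors_per_arm))
    rate_a = conv_a / visitors_per_arm
    rate_b = conv_b / visitors_per_arm
    return (rate_b - rate_a) / rate_a if rate_a else float("inf")

# With tiny samples, two identical pages can show a large apparent "uplift"
for n in (100, 1000, 100_000):
    print(n, f"{simulate_aa(n):+.1%}")
```

If your real A/A test shows a sustained difference at a sample size where noise should have averaged out, suspect a bug in how variations are served or how conversions are recorded.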

To find out more about Conversion Rate Optimisation for your website, please contact us.

BY Douglas Radburn AT 11:08am ON Tuesday, 16 October 2012

Doug is our Senior Web Developer, and all-round development expert. Having gained valuable insight and technical experience at two major digital agencies after graduating, Doug brought his knowledge and skills to Branded3 in 2009, and has been solving our development dilemmas ever since.
