The cost of attracting high-value visitors to a website is increasing as sites compete for the same customers.
With online conversion rates in the UK falling by 55% over the past five years, the best way to increase efficiency is to exploit existing visitor streams through conversion optimisation.
To coincide with the launch of the 2012 Conversion Rate Optimisation Survey, here are seven tips to boost a website's success...
1. You've got to test to be the best
Don't rely on gut instinct when changing your site. You can obtain meaningful results by randomly displaying alternative content to visitors and measuring how often they reach the desired conversion goal. However, not all testing is the same.
For example, a common mistake is comparing historical data in before-and-after tests. This can lead to different versions being tested over different time frames, with fluctuations caused by factors such as advertising, the weather or the day of the week.
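To make the contrast concrete, here is a minimal sketch in Python of how concurrent split testing can work: visitors are assigned to a version at random (here via a hash of a stable visitor ID, such as a cookie value), so both versions run over exactly the same time frame and are exposed to the same external fluctuations. The variant names and visitor ID are illustrative only.

```python
import hashlib

VARIANTS = ["control", "new_layout"]  # hypothetical variant names

def assign_variant(visitor_id: str) -> str:
    """Deterministically map a visitor to a variant by hashing their ID,
    so the same visitor always sees the same version."""
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

# Both versions are live simultaneously; conversions are then compared
# over the same period rather than before-and-after.
print(assign_variant("visitor-1234"))
```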
2. Only clear changes bring clear results
Whether a landing page is light or dark blue is rarely important. Versions with explicit differences must be tested in order to obtain meaningful findings about visitor behaviour.
This approach shouldn't be a way to find out whether apples or oranges are best, but rather apples or fire extinguishers! All elements of a website are suitable for testing, and completely different designs can be tested against each other rather than simply swapping individual elements.
It's true that some small changes (e.g. to copy) may on their own greatly affect visitor behaviour, but these strong elements need to be identified first. Imagery, information, lists, copy and buttons are all typically strong elements.
3. Optimise where the biggest effect will be felt
It's worth starting where the highest absolute increase can be achieved. A 100% increase might be possible on the last page of the ordering process, but the increase in sales will be minor because only a small percentage of overall site visitors ever see this page.
It's far more effective to start with pages that have lots of visitors and a high bounce rate, such as landing pages.
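A quick back-of-envelope calculation shows why. The figures below are purely hypothetical, but they illustrate how a modest uplift on a high-traffic landing page can outweigh a spectacular uplift deep in the ordering process.

```python
monthly_visitors = 100_000  # hypothetical site traffic

# Last page of the ordering process: only 2% of visitors reach it.
checkout_visitors = 0.02 * monthly_visitors
extra_from_checkout = checkout_visitors * 0.05 * 1.00   # 5% baseline, +100% uplift

# Landing page: 60% of visitors see it.
landing_visitors = 0.60 * monthly_visitors
extra_from_landing = landing_visitors * 0.05 * 0.10     # 5% baseline, +10% uplift

print(f"Extra conversions from checkout test: {extra_from_checkout:.0f}")  # 100
print(f"Extra conversions from landing test:  {extra_from_landing:.0f}")   # 300
```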
4. The world is more complex than A & B
Simple split testing with a few alternatives is a good starting point. However, you will quickly reach the stage where more complex test scenarios are needed to obtain meaningful results.
Multivariate tests are useful for refining split tests. During split testing, only individual versions are tested against each other, but not the effect of different elements on each other. In multivariate testing, all possible combinations of alternative elements are tested.
As this can result in hundreds of possible versions, high visitor numbers are needed for successful testing. The results are well worth it, though, as they indicate which combinations are the most promising and how important each individual element is.
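The combinatorial explosion is easy to see: the number of versions is the product of the alternatives for each element. A short illustrative sketch (the element names and alternatives are invented for the example):

```python
from itertools import product

elements = {
    "headline": ["A", "B", "C"],
    "hero_image": ["photo", "illustration"],
    "button_copy": ["Buy now", "Add to basket", "Order"],
    "button_colour": ["green", "orange"],
}

combinations = list(product(*elements.values()))
print(len(combinations))  # 3 * 2 * 3 * 2 = 36 versions

# Every one of these 36 versions needs enough visitors and conversions of its
# own, which is why multivariate tests demand far more traffic than A/B tests.
```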
5. Checklists add nothing
Your website has a unique visitor group. No-one can tell you what your specific visitors want and don't want. This applies to all aspects of your website.
Therefore, you have to be careful when applying general recommendations or tips. All changes to a website should be tested. This avoids unnecessary time and expense.
6. Conversion is not the be all and end all
It's important to clearly define individual conversion goals (shopping basket, order process, purchase, contacts, downloads, etc.) and to keep in mind which of these defined goals you are hoping to influence with each optimisation.
Each conversion should also be qualitatively assessed in order to increase the success of conversion optimisation. For example, completeness is relevant when looking at registrations or requests for contact, and greater importance may be attached to records that include a postal address than to those without.
For brand-building campaigns, assessment purely on the basis of conversions may show short-term success but prove a competitive disadvantage in the long term.
7. Cheat chance
The test has hardly begun and the conversion rate has already reached unimaginable heights. So why not quickly turn off the old version and just use the new one right away? Don't do it.
Statistics are prone to error, and errors can be expensive: they can negate the entire optimisation and mean that the conversion rate not only fails to rise but may even sink drastically.
A small data pool is the most common error. A test should run for at least a week and cover two weekends. In addition, there should be at least 50 conversions per version over this period. Depending on the target group, the numbers may fluctuate heavily on individual days, making a longer test period necessary.
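As a rough illustration, the sketch below encodes those rules of thumb together with a simple two-proportion z-test (a standard significance check, not something prescribed by the survey). The thresholds and figures are illustrative only and are no substitute for proper statistical analysis of your own data.

```python
from statistics import NormalDist

def enough_data(days_run, conv_a, conv_b, min_days=7, min_conversions=50):
    """Has the test run long enough and collected enough conversions per version?"""
    return days_run >= min_days and min(conv_a, conv_b) >= min_conversions

def p_value(conv_a, visitors_a, conv_b, visitors_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / visitors_a, conv_b / visitors_b
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = (pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical test: 9 days, 2,000 visitors per version, 60 vs 85 conversions.
print(enough_data(9, 60, 85))                 # True
print(round(p_value(60, 2000, 85, 2000), 3))  # a small p-value (< 0.05) suggests a real difference
```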
Ellie Edwards-Scott is Managing Director at QUISMA and a guest blogger on Econsultancy.