
Does investment in paid search brand campaigns deliver incremental revenue?

Posted 17 September 2012 10:17am by James Gurd

For this month's post I thought I'd share a practical example of how you can use testing to validate the impact of your paid search campaigns.

This is aimed at client-side digital marketing teams and agency staff who are learning the paid search ropes and might not fully understand the interaction between SEO and PPC.

The example I'm using is a test plan that seeks to answer the question: "Does investment in brand keywords cannibalise existing sales or deliver incremental sales?"

This is based on the most common form of paid search, Google AdWords.

This is a common question web managers ask agencies. Many marketers believe that driving transactions and revenue from brand campaigns is easy (that's not strictly true – there are easy wins, but it takes intelligent optimisation to maximise reach) and that the real challenge lies with generic keyphrases.

However, it's dangerous to assume that brand investment is not required on the basis that searchers will find your organic listings for all brand keyphrases and click through.

This is often not the case. You need to understand the implications of match type (exact vs. phrase vs. broad) and brand + generic keyphrases (where your brand name appears in a keyphrase alongside non-brand terms, e.g. 'argos camping tents') on your SEO visibility.

If you have a PPC brand campaign using broad match, it's possible that for some variations of the matched queries, your organic listing will not be in the top position. For SERPs with high paid search/inclusion competition, being outside the top two positions means being below the fold.

Paid search can be a sensible investment to plug these gaps and enhance your search presence. This blog is a walk-through of how I approached a recent Client project to try and prove the value of continued investment in paid search brand campaigns.

Setting the challenge

"Prove that I need to spend on brand keywords".

There are several reasons why you should, but proving them requires more than quoting research and 'best practice'. The main reasons I would suggest are:

  1. Some online searchers will click on paid search ads over organic listings. As the variety of paid search increases (e.g. image ads, email sign-up box, sitelinks), people are being conditioned to use sponsored ads.
  2. Protecting brand territory is important online. Paid search can plug gaps in SEO coverage and support your organic listings.
  3. Paid search is a great place to experiment with copy and calls to action, which can then be used to optimise webpages for SEO to target increased click-through.
  4. Reinforcing your presence with paid search provides brand authority and can help increase the likelihood of a click on your organic listings.
  5. Google relaxed its T&Cs for bidding on competitor brand terms. In competitive markets, a lack of brand focus can mean conceding search real estate to your competitors when people most want to find you.

Now on to the proof. Below is a walk-through of the test I set up to help prove the real impact of brand investment on overall paid search KPIs. I've split this blog into the stages of the test plan that I followed.

After reading, tell me if you think this was a good test or if I have missed something. There are always ways to improve the quality of testing.

Stage one: test objective

You should design the test to add value to the business, not just serve as a vanity project. Define exactly what you want to achieve and how you will use the learning from the test.

This really helps when a Director questions what you do with your time. If you can demonstrate that you're planning a test designed to increase ROI & minimise cost, it comes across more positively than saying "oh, messing around with some data".

In my example the objective is:

"To prove that investing in brand paid search delivers incremental sales revenue & identify keywords that can be removed to reduce cost without adversely affecting overall sales (including SEO & assisted conversions)." 

Stage two: test hypotheses

This outlines the reasons for doing the test and the criteria that you are trying to prove or disprove. These really help the other people involved to grasp what you're doing and help shape the test plan.

In my example the hypotheses are:

  • If we don't invest in brand paid search campaigns, we will reduce our overall number of transactions and website profitability.
  • The loss in sales by cutting brand investment will outweigh the reduced marketing cost.
  • Stopping brand campaigns will lead to an increase in visits and revenue from organic branded keyphrases.
  • Stopping brand campaigns will lead to a decrease in assisted conversions where a brand paid search keyword is involved.

The rationale: research and best practice show a clear link between paid search and SEO, and brand coverage is important for search marketing, but there was no data for my Client to prove this beyond doubt (the GA account was created at the same time as the PPC project, so there was no historical data to compare).

Stage three: defining the test structure

Now this is the hard part. How do you create a test that covers all the angles? I'm still not 100% sure mine does, even though I've pulled the methodology apart many times. But here goes, I'm opening myself up to tub-thumping and cries of incredulity :)

The first thing I decided was that I didn't want to simply pause all brand ads – given the revenue contribution, that wouldn't go down well. Instead, I wanted to target a focused hit list of keyphrases – the premise being that if these provided a significant result, the test could be expanded/refined.

 I worked out the following approach: 

  • Select keyphrases to include in the test based on the past six months' Google Analytics data (including some of the main traffic drivers).
  • Identify AdWords campaigns the keyphrases are featured in: exact, phrase & broad match.
  • Agree with PPC agency how to exclude these keyphrases from campaigns without affecting other keyphrases.
  • Benchmark data for last click sales from paid search and organic for these keyphrases.
  • Benchmark data for assisted conversions for these keyphrases.
  • Run the test over a full week period to include each day of the week and the full 24hrs for each day (taking into account any day-parting influence).
  • Define KPIs and export the data for the three months prior to the test, the week of the test + the following month (need to ensure data trends aren't seasonal/influenced by external factors, so post-test comparison is useful).
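
To make those reporting windows concrete, here's a minimal sketch in Python of how the pre-test, test and post-test periods line up around the start date. The dates and variable names are hypothetical illustrations; the actual analysis was done in Excel against GA exports.

```python
# Sketch: the three comparison windows around the test week.
# The start date below is hypothetical.
from datetime import date, timedelta

test_start = date(2012, 7, 2)                  # test begins at midnight
test_end = test_start + timedelta(days=7)      # full week: 7 x 24hr periods

pre_period = (test_start - timedelta(days=90), test_start)   # ~3 months prior
test_period = (test_start, test_end)                         # the test week
post_period = (test_end, test_end + timedelta(days=30))      # the following month

for label, (start, end) in [("pre-test", pre_period),
                            ("test week", test_period),
                            ("post-test", post_period)]:
    print(f"{label}: {start} to {end}")
```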

Nailing the brand keyphrase hit list

I decided the best approach was to review Google Analytics paid search data for the past six months and download the e-commerce data into Excel. From there I opted to sort by Per Visit Value as a first stab.

Why?

Well, visits alone isn't enough. What if I only picked keywords that drove traffic but low conversion? Wouldn't I be missing a trick?

But ignoring visits is also short-sighted. What if keywords with high visits and low conversion rate (remember this is last click) actually contribute a lot of assisted conversions? Or have a high average order value? Headache kicking in.

After deliberating the mystery of life on a mountaintop for several months, I decided to shortlist brand keywords that satisfied one or more of the following criteria:

  • High Per Visit Value – defined as over £1.00.
  • High level of visits – defined as over 1,000 per month.
  • High number of transactions – defined as over 100 per month.
  • High revenue contribution – defined as over £1,000 per month.
  • High average order value – defined as greater than site average (I perhaps could have chosen the aggregate search AOV as a comparison).

[Please note the thresholds above were set based on the values distributed in the data and these will vary for each website]
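
If you prefer to script this step rather than filter in Excel, the shortlist logic can be expressed in a few lines. Here's a minimal sketch in Python with pandas, assuming a GA e-commerce export with hypothetical file and column names; a keyphrase qualifies if it meets any one of the criteria above.

```python
# Sketch: shortlist brand keyphrases meeting ANY of the criteria above.
# File and column names are hypothetical; match them to your GA export.
import pandas as pd

df = pd.read_csv("ga_brand_keywords_monthly.csv")  # one row per keyphrase

site_aov = 45.00  # hypothetical site-wide average order value (GBP)

shortlist = df[
    (df["per_visit_value"] > 1.00)        # high Per Visit Value
    | (df["visits"] > 1000)               # high visits per month
    | (df["transactions"] > 100)          # high transactions per month
    | (df["revenue"] > 1000)              # high revenue contribution
    | (df["avg_order_value"] > site_aov)  # AOV above site average
]

print(f"{len(shortlist)} keyphrases shortlisted")
print(shortlist["keyword"].tolist())
```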

This resulted in a targeted list of approximately 20 keyphrases containing the brand keyword(s).

Why didn't I include every keyphrase?

  1. There were >2,500 individual keyphrases containing the brand term.
  2. >80% of these had fewer than 10 visits per month.
  3. The effort to mine data and do the comparison on this volume of keyphrases, given the minimal visits/conversions/revenue, would have made the test a bloody nightmare! (Plus I'm a consultant, so I weep at the prospect of real work.)

Stage four: defining KPIs and benchmark data

This is the fun bit. You need to create a master data set from which you can do the analysis. I find Excel easiest as I can import source data, manipulate it, whack in some formulas and create a management sheet with some nice visualisation of the data.

Top line analysis for brand traffic

To ensure I didn't underestimate the impact of pausing brand campaigns, I tracked the total numbers for all brand paid search visits. This ensured there was a snapshot of the long tail as well as keyphrase-level analysis.

This also let me determine any substitution effect whereby pausing the most popular brand keyphrases had an uplift effect on other brand keyphrases.

I could see this by plotting total vs. test keyphrases on a line graph – zero effect would be shown as the two lines following an identical pattern.
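
As a sketch of that chart in code (the original was built in Excel): plot the two series and look for divergence after the pause. File and column names here are hypothetical.

```python
# Sketch: check for a substitution effect by plotting total brand visits
# against visits for the paused test keyphrases.
import pandas as pd
import matplotlib.pyplot as plt

daily = pd.read_csv("brand_visits_daily.csv", parse_dates=["date"])
# expected (hypothetical) columns: date, total_brand_visits, test_keyphrase_visits

plt.plot(daily["date"], daily["total_brand_visits"], label="All brand keyphrases")
plt.plot(daily["date"], daily["test_keyphrase_visits"], label="Test keyphrases")
plt.ylabel("Visits")
plt.legend()
plt.title("Total vs. test keyphrase brand paid search visits")
plt.show()

# If the lines follow an identical pattern, there is no substitution effect.
# If total visits hold steady while test keyphrase visits fall, traffic is
# shifting to brand keyphrases outside the test.
```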

Weekly KPI tracker

This was the meat and drink. I decided to export the key e-commerce variables from GA:

  • Visits
  • Transactions
  • Conversion rate
  • Revenue
  • Average order value
  • Per Visit Value.

I didn't just want to see the impact on visits, I also wanted to analyse the impact on the quality of visits. For example, did removing paid ads simply screen out the hottest prospects (which could be shown by a decrease in conversion rate, average order value, per visit value etc)?

To do this I did a dump of all keyphrase data (by week) and then used lookups to create a master view and charts to visualise the trends (see below). This was done for both paid and organic search (remember, we're evaluating the overall impact on search). 
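
For anyone replicating those lookups outside Excel, the same master view can be built with a merge. A minimal sketch, assuming weekly paid and organic exports with hypothetical column names:

```python
# Sketch: weekly KPI master view combining paid and organic exports.
# Column names are hypothetical; adjust to your GA export.
import pandas as pd

def weekly_kpis(df, label):
    """Aggregate to one row per week, then derive the quality KPIs."""
    weekly = df.groupby("week", as_index=False)[["visits", "transactions", "revenue"]].sum()
    weekly["conversion_rate"] = weekly["transactions"] / weekly["visits"]
    weekly["avg_order_value"] = weekly["revenue"] / weekly["transactions"]
    weekly["per_visit_value"] = weekly["revenue"] / weekly["visits"]
    return weekly.add_prefix(f"{label}_").rename(columns={f"{label}_week": "week"})

paid = pd.read_csv("ga_paid_weekly.csv")        # keyword, week, visits, transactions, revenue
organic = pd.read_csv("ga_organic_weekly.csv")  # same structure for organic search

master = weekly_kpis(paid, "paid").merge(weekly_kpis(organic, "organic"), on="week")
master.to_csv("kpi_master_view.csv", index=False)
```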

Annotating GA reports

A late realisation was that marketing campaigns could bias the data. For example, if a full-run national press insert launched the week before the test, this would inflate brand search in that week. Any comparison would be inaccurate.

Luckily GA has a handy feature – annotations. I added notes for every major piece of brand marketing during the test period. This meant that when I looked at data timelines in GA, I could see where peaks followed marketing campaigns. In the data I flagged these weeks by shading the cell backgrounds (see below).

Stage five: validating the benchmark data

It's important to make sure the data is accurate before doing any analysis. Otherwise, you risk drawing erroneous conclusions.

This is a really simple, quick step. For each KPI, I cross-referenced key GA reports (e.g. Traffic Sources > Search > Organic) and sense checked the numbers showing in my lookup tables.

Any discrepancy led to scrutiny of the data export and formulas.
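
That sense check can be automated with a simple tolerance comparison between the master sheet totals and the figures read straight from the GA report. A sketch, with hypothetical numbers:

```python
# Sketch: flag discrepancies between the master sheet and the GA report.
# All figures below are hypothetical placeholders.
master_totals = {"visits": 18450, "transactions": 1240, "revenue": 61200.00}
ga_report_totals = {"visits": 18450, "transactions": 1238, "revenue": 61150.00}

TOLERANCE = 0.01  # allow 1% for sampling and rounding differences

for kpi, master_value in master_totals.items():
    ga_value = ga_report_totals[kpi]
    diff = abs(master_value - ga_value) / ga_value
    status = "OK" if diff <= TOLERANCE else "CHECK EXPORT/FORMULAS"
    print(f"{kpi}: master={master_value}, GA={ga_value}, diff={diff:.2%} -> {status}")
```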

Stage six: confirm test plan with all parties involved

So, so important. There will be multiple parties who need to be consulted and whose input must be considered. These typically include:

  1. Web manager. Needs to ensure the test is aligned with key trading activities and major campaigns (e.g. don't pause campaigns when there is a major offline press campaign that will drive online brand searches – throwing away money).
  2. PPC agency. They need to validate how to structure the test to ensure the relevant ads are paused and keywords excluded (e.g. for 'exact' match campaigns, are these using 'near exact match'? If so, excluding a keyphrase might not 100% block the ads from appearing).
  3. Marketing team. You don't want to spoil sales from major campaigns by not telling them your paid search ads are going to be offline (=trading meeting stand-off!).

Get everyone in a room. If that's not practical, organise a conference call or Skype session. Run through the plan and how it is going to work. Get their feedback and if needed, update the plan (note: whoever manages your PPC should be involved from the start so they can support you effectively and understand the reasons for the test).

Don't start the test until all parties have committed and the roles & responsibilities + timelines are clear. If you ignore this, you risk compromising the test and invalidating the data.

For example, one test I was involved in didn't work because negative keywords weren't added to some phrase match campaigns, so whilst the keyword didn't trigger ads on exact match, it still came up in phrase match searches.

Stage seven: running the test

This was the easy part. I called up the PPC agency, shared the data, ran through the test plan for a final time and agreed a date & time to put the relevant changes in place in AdWords. I made sure my Client approved this in writing by email.

We decided on midnight as the start time so we'd have a clean 24hr period for each day of the test.

The first thing I did on the morning of the first day was to run searches for each keyphrase in Google to check that, as planned, the ads weren't showing. Luckily the set-up worked and no ads appeared, so the test was validated.

I sat back and congratulated myself on a job well done. Then I realised I'd only done the easy part. I still had to analyse and interpret the data.

Stage eight: evaluation, analysis and outcomes

Stars In Their Eyes moment – does the data support the hypotheses?

I followed a series of logical questions to reach conclusions:

  • Are there clear trends in the data?
  • Are these trends consistent across each keyphrase?
  • Is there a clear correlation between pausing brand bidding and organic brand traffic/transactions?
  • What is the total impact of pausing the ads (loss of last click revenue + loss of assisted conversion revenue)?
  • Is this loss offset by the increase in revenue from organic search (increase in last click revenue + increase in assisted conversion revenue)? A simple way to net these figures off is sketched after this list.
  • Do the top-level numbers rise or fall when brand campaigns are paused?
  • What external factors (e.g. marketing campaigns) are influencing the data?
  • What data do we need to discount because it has been compromised?
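
To turn those offset questions into a number, the pieces net off as a simple sum: revenue recovered by organic search (last click + assisted) plus the ad spend saved, minus the paid revenue lost (last click + assisted). A sketch with hypothetical weekly figures; in practice each input comes from the benchmark vs. test-week comparison in the master sheet.

```python
# Sketch: net revenue impact of pausing brand ads (hypothetical figures).
paid_last_click_loss = 12000.00    # paid search last click revenue lost
paid_assisted_loss = 3500.00       # assisted conversion revenue lost
organic_last_click_gain = 6000.00  # extra organic last click revenue
organic_assisted_gain = 1000.00    # extra organic assisted revenue
ad_spend_saved = 2500.00           # brand PPC spend no longer paid out

net_impact = (organic_last_click_gain + organic_assisted_gain + ad_spend_saved
              - paid_last_click_loss - paid_assisted_loss)

print(f"Net weekly impact of pausing brand ads: £{net_impact:,.2f}")
# A negative result means the brand campaigns were delivering incremental
# revenue that organic search and the cost saving did not recover.
```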

To help factor in the influence of marketing campaigns, I plotted the test data against direct traffic.

Why?

Direct traffic is a good indicator of brand visits generated from marketing campaigns (online and offline). If direct visits are increasing during the test, the uplift can be attributed to external factors rather than the test itself. Therefore, it's important to plot search against direct to compare the data trends around the test period.

What did I learn?

First, I proved my key hypothesis – investment in brand paid search delivers incremental revenue. As soon as we paused the ads, the total revenue from search dropped and wasn't fully compensated via other channels (even when we factored in the external influence of marketing campaigns).

There were other interesting observations:

  • Decrease in visits and transactions from organic brand search (I was surprised, as I had expected a slight increase, for obvious reasons).
  • Average order value fell (could this be down to losing sitelinks in ads? Unlikely to be seasonality as other channels didn't have a similar dip).

[Please note that these findings shouldn't be taken as gospel for all websites. The impact may vary depending on your audience and the markets in which you operate. However, the findings do broadly support best practice thinking for search engine marketing.]   

How do you approach paid search testing?

Have you run a similar test? If so, what did you learn?

Do you think I'm talking sense or need a sharp drink?

I appreciate that this is a long post but hopefully it has helped demonstrate the detail that is required to effectively plan a test. Success in e-commerce often comes down to the quality and detail of planning.

Please drop by with comments and opinions based on your experience. If you have other examples of useful tests and/or links to relevant blogs, please let me know and share the links.
