Conversion Rate Optimisation mistakes

Read time: 5 mins

There are a few common mistakes that happen when split testing and optimising. These mistakes can directly impact your campaigns, from PPC to general e-commerce, and yet when avoided they can yield improvements fairly quickly.

Getting straight to the point, these mistakes are:

  1. Not letting a test run for long enough.
  2. Testing too many small elements.
  3. Just testing random things.
  4. False positives.
  5. Not knowing when to quit.
  6. Failing to optimise for each traffic source.
  7. Only focusing on conversion rate.
  8. Treating low traffic websites all the same.

Breaking down each point –

  1. Not letting a test run for long enough.

If tests are not run for long enough, the chances are you will not have accurate enough data and will lack the information required about your visitors. The aim is to consistently achieve a large enough sample size to either reinforce or negate your original hypothesis. I have found that running a test for at least a week uncovers enough data, such as unique visitors, conversions and other statistical data, to make data-driven decisions.
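
As a rough illustration, a standard two-proportion sample-size formula gives a feel for how many visitors, and therefore how many days, a test needs before you can trust it. The baseline conversion rate, uplift and daily traffic below are made-up assumptions, not figures from any real campaign.

```python
# A minimal sketch of estimating how long a test needs to run.
# Baseline rate, expected uplift and daily traffic are made-up assumptions.
from statistics import NormalDist

baseline = 0.03            # current conversion rate (3%)
variant = baseline * 1.20  # smallest lift worth detecting (+20% relative)
daily_visitors = 2000      # visitors per variation per day

alpha, power = 0.05, 0.80
z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
z_power = NormalDist().inv_cdf(power)

# Standard sample-size approximation for comparing two proportions
p_bar = (baseline + variant) / 2
n = ((z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
      + z_power * (baseline * (1 - baseline) + variant * (1 - variant)) ** 0.5) ** 2
     / (variant - baseline) ** 2)

print(f"~{n:,.0f} visitors per variation, roughly {n / daily_visitors:.0f} days")
```

With lower daily traffic the same calculation stretches into weeks, which is why the sample size, not the calendar, should decide when a test has run long enough.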

In terms of seasonal and holiday campaign testing, it would be wise to run your testing in comparison to the previous year's holiday. For example, if you tested a landing page last Christmas for a few weeks and generated significant results, it would be best to run a test the following year to see if you can improve year on year.

 

  2. Testing too many small elements.

Testing too many small elements is the one mistake I find that lots of people make. Changing the colours of buttons on forms or small pieces of text here and there won't actually make much of a difference or offer any new learning opportunities.

Small incremental changes, more often than not, lead to small incremental results. You may achieve a 5% or even 10% increase in whatever metric you're looking to improve; in reality, however, you may have just got lucky.

The recommendation would be to test for a big impact first and look to make a significant difference. Testing should be an opportunity to draw behavioural insights about your visitors or customers. Whether they prefer a red button to a blue button will not tell you what content to create or whether a specific layout works better with a clearer message. Big changes working in tandem with a suitable hypothesis will give you the insight you need; then proceed to make incremental changes.

 

  3. Just testing random things.

Figure out why something needs to be tested, create a hypothesis, and don't guess: three golden rules for achieving significant results when testing. I would say the most glaring mistake is testing without a clear and supported hypothesis. Figure out 'why' something needs to be tested and what the expected outcome is.

Allow the tests to build on one another, and pay attention to the data, which should give you the reasons for what to test next.

 

  4. False positives.

A false positive is when you think you have a winning variant but in reality you don't. That's not to put a dampener on your efforts, but try not to be sold on quick success. The more variations you add, the more an A/B test becomes an A/B/C/D/E/F test, which increases the chances of a false positive. If you think you may be getting a false positive in your testing, try testing the page against itself, use backup tracking, pass the data through Google Analytics and then re-test.
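
To see why adding variants inflates the false positive risk, a quick back-of-the-envelope calculation helps. It assumes each variant is compared against the control at the usual 5% significance level.

```python
# How the chance of at least one false positive grows with extra variants,
# assuming each variant vs control comparison uses a 5% significance level.
alpha = 0.05

for variants in range(1, 7):
    family_wise_error = 1 - (1 - alpha) ** variants
    print(f"{variants} variant(s): {family_wise_error:.1%} chance of a fluke 'winner'")
```

By the time an A/B test has grown into an A/B/C/D/E/F test, the odds of one variant "winning" by pure chance are already above 20%.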

 

  5. Not knowing when to stop.

Let's say you're testing a variant of a page and you were confident at the beginning that the new variant was going to bring in improved results, but unfortunately it doesn't. If there is not a large enough improvement, the chances are it's not going to happen. A good stopping rule is at least 1,000 unique visitors on each variation before looking at the data; then check your conversions or goals.
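
Once each variation has passed that threshold, a simple significance check is one way to read the data. The sketch below uses a two-proportion z-test from statsmodels, and the visitor and conversion counts are made up for illustration.

```python
# A rough check of a test after ~1,000 unique visitors per variation.
# Visitor and conversion counts are made-up numbers for illustration.
from statsmodels.stats.proportion import proportions_ztest

control_visitors, control_conversions = 1000, 38
variant_visitors, variant_conversions = 1000, 52

stat, p_value = proportions_ztest(
    count=[variant_conversions, control_conversions],
    nobs=[variant_visitors, control_visitors],
)
print(f"p-value: {p_value:.3f}")  # above ~0.05 means the 'improvement' may just be noise
```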

The law of diminishing returns kicks in at around four weeks; anything after that would be a waste of your time, effort and the client's money.

 

  6. Failing to optimise for each traffic source.

Just as an example, Facebook will convert differently to AdWords, and content converts differently to display. You see where I'm going? I can't overstate that using specific landing pages for each source will give your tests much better results than using the same landing page for every one. Prioritise high-converting traffic sources first.
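
As a simple way to decide where to start, you can rank your sources by conversion rate and optimise from the top down. The session and conversion figures below are invented purely to show the idea.

```python
# Ranking traffic sources by conversion rate to decide where to optimise first.
# Session and conversion counts are invented for illustration.
sources = {
    "Facebook": {"sessions": 5200, "conversions": 96},
    "AdWords":  {"sessions": 3100, "conversions": 115},
    "Display":  {"sessions": 8400, "conversions": 42},
}

ranked = sorted(
    sources.items(),
    key=lambda item: item[1]["conversions"] / item[1]["sessions"],
    reverse=True,
)
for name, data in ranked:
    print(f"{name}: {data['conversions'] / data['sessions']:.2%}")
```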

 

  7. Only focusing on conversion rate.

Conversion rate is a relative metric. What I mean by this is that, in the long run, it could make you less money than you think. Look at the bigger picture: KPIs such as cost per acquisition and average order value tend to offer far more value to a business than conversion rate alone. For example, if you're testing conversion rate only, you might find that your customers are no longer buying your cross-sells. So yes, your conversion rate may go up, but your average order value goes down, and you could end up losing money.
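
A quick worked example makes the point. All of the numbers below are made up, but they show how a higher conversion rate can still mean less revenue once average order value drops.

```python
# Made-up figures showing conversion rate up but revenue per visitor down.
visitors = 10_000

original_rate, original_aov = 0.030, 62.00  # cross-sells keep order value higher
variant_rate, variant_aov = 0.034, 51.00    # more orders, but cross-sells skipped

original_revenue = visitors * original_rate * original_aov  # 18,600
variant_revenue = visitors * variant_rate * variant_aov     # 17,340

print(f"Original revenue: {original_revenue:,.0f}")
print(f"Variant revenue:  {variant_revenue:,.0f}")
```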

 

  8. Treating low traffic websites the same as high traffic ones.

If your website has a low volume of traffic and unique visitors, optimising it for conversions would not be a good use of your time. As it would take an awfully long time to see meaningful results, it would be wiser to focus on traffic acquisition strategies (which I can help with) before Conversion Rate Optimisation (CRO).

 

There would be nothing worse than spending time creating a beautiful landing page variant to A/B test, only to realise that it doesn't support your hypothesis. It's easy to get caught up in the excitement of being able to increase revenue and conversions for your brand or a client through CRO. However, if time is precious to you, avoiding these mistakes will be of great value.