So our test has been up and running. Previously I mentioned an issue with numbers: partly because an additional variation was thinning out the traffic, but also because we had picked a page that wasn't getting enough traffic in the first place.
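To put rough numbers on why that matters, here is a back-of-envelope sketch using the standard two-proportion sample-size formula. The 10% and 12% conversion rates (and the helper name) are hypothetical, not our figures, and Optimizely's own stats engine does its own, more sophisticated calculation:

```python
from statistics import NormalDist

def sample_size_per_variation(p1, p2, alpha=0.05, power=0.8):
    """Approximate visitors needed per variation to detect a lift
    from rate p1 to rate p2 in a two-proportion test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # e.g. 1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)            # e.g. 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2

# Hypothetical example: detecting a lift from 10% to 12% conversion
n = sample_size_per_variation(0.10, 0.12)
print(round(n))  # ~3,838 visitors per variation
```

Add a second variation and you need that many visitors in each of three arms, which is exactly how a low-traffic page starves a test.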
After waiting an additional week we have some results. Sort of…
Optimizely, our preferred tool (I assume others have a similar feature), lets us know, based on the data collected, whether we have hit significance (NB: always run tests to your business cycle as well as to significance). Variation 2 hit significance; variation 1, however, didn't. That said, the results speak for themselves.
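For anyone curious what "hitting significance" actually means, here is a minimal sketch of a two-proportion z-test in Python. The visitor and conversion counts below are made-up placeholders, not our test data, and Optimizely's stats engine is more sophisticated than this, so treat it as illustrative only:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))      # two-sided
    return z, p_value

# Hypothetical counts -- NOT our real test data
z, p = two_proportion_z_test(conv_a=230, n_a=500, conv_b=285, n_b=500)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 would count as significant
```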
Tests are about learning, not just optimising, and that was the case for Test 1. Both variations actually performed worse than the original. Was the hypothesis invalid? Or was the result swayed by the traffic issues mentioned above? Time to iterate.
We have now picked a guidance page with much higher footfall. We have also created only one variation. Our hypothesis remains:
If the recommendations link is highlighted in the menu navigation, we can increase the progression rate to recommendations, as users will readily identify recommendations.
This time the single variation contained an amended menu order.
Significance was met very quickly (within days); however, we ran the test for just over two weeks to match our business cycle. Once again the results spoke for themselves, but this time with a very positive improvement.
The conversion rate increased from 45.92% to 57.27%, a 24.7% relative improvement.
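Worth noting: that 24.7% is a relative improvement, measured against the original rate, not the 11.35-percentage-point absolute difference. A quick check using the rates above:

```python
baseline, variation = 0.4592, 0.5727

absolute_lift = variation - baseline      # difference in rates
relative_lift = absolute_lift / baseline  # improvement vs. baseline

print(f"absolute: {absolute_lift * 100:.2f} percentage points")  # 11.35
print(f"relative: {relative_lift:.1%}")                          # 24.7%
```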
test, learn, iterate
We at NICE are now testing regularly (our last test delivered a +30.8% increase in conversions) and are in a position to keep iterating. Of course, we are always learning!