If you want to double or triple your conversion rates over the next 9 months, A/B testing is the most reliable way to do it.
Any funnel, any business model, any marketing channel.
You could easily double your customer counts within the next year.
All without having to increase your marketing spend or get more traffic.
That’s the magic of A/B testing.
There is one catch though.
A/B testing is easy to screw up. It’s counter-intuitive and goes against many of our business instincts. Even worse, it only takes one bad decision to ruin all the progress from an entire testing program.
Over decades of A/B testing ourselves, we’ve put together a set of rules that our teams always follow. If you follow these rules too, you’ll avoid the bad calls. Then it’s only a matter of time before you double your business.
#1: Allow your test to run for at least 7 days
The first is to allow your test to run for at least seven days.
The reason is that A/B test results can change very quickly. One variation may jump out to an early 350% conversion boost by day two, and even be ruled statistically significant by your A/B testing software, only to cool down to a 15% boost by day five. To account for these swings, make sure to let your test run for at least seven days.
We’ve seen countless tests flip-flop over the years: they start out as winners and then end up as losers after a few days.
The first week is especially volatile. Try not to even look at the results during that week.
Another reason to test for a longer period of time is that website traffic varies from day to day. Saturday traffic, for example, can be very different from Monday traffic. Based on that, you want to make sure to get results from every day of the week before calling a winner.
You should also keep in mind that even seven days is really a short time period for an A/B test, and you may be better off letting it run for several weeks. You’re looking for a winner that will get long-term results and don’t want to pick a winning variation too soon only to find out it doesn’t actually boost conversions or revenue.
It’s also a good idea to allow tests to run until you have at least 100 total conversions. More is even better, and fewer can sometimes work, but running until there are at least 100 conversions gives you more confidence that the outcome is accurate and will deliver the results you’re looking for.
#2: Run tests until you have a 95% confidence level
The next rule to follow is to run your test until there’s at least a 95% confidence level for the winning variation.
The reasons for this rule are the same as those for rule number one. First and foremost, you’re looking to pick a winning variation that will give you better results for the long term. This means you want to make sure the results are statistically significant and that you don’t pick a winner prematurely.
Another reason is that test results can change dramatically over the course of an A/B testing period. I’ve personally seen a variation jump out to a 105% boost in conversions after a day and a half only to lose when the test is called 10 days later. This makes it even more important to wait until your A/B testing software says the results are statistically significant.
To get a better idea of how long this will take for your test, use the simple A/B Test Calculator from Neil Patel, and keep your test running until you hit 95% statistical significance on the calculator.
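Your testing software handles the significance math for you, but if you’re curious what’s happening under the hood, the check is typically a two-proportion z-test. Here’s a minimal Python sketch; the visitor and conversion counts are made up for illustration:

```python
from math import sqrt, erf

def z_test_significance(visitors_a, conv_a, visitors_b, conv_b):
    """Two-proportion z-test: returns (relative lift, confidence) for B vs. A."""
    p_a = conv_a / visitors_a
    p_b = conv_b / visitors_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    p_pool = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided confidence level from the standard normal CDF
    confidence = erf(abs(z) / sqrt(2))
    lift = (p_b - p_a) / p_a
    return lift, confidence

# Hypothetical test: 5,000 visitors per variation
lift, confidence = z_test_significance(5000, 150, 5000, 195)
print(f"Lift: {lift:.0%}, confidence: {confidence:.1%}")
```

In this made-up example, a 30% lift on 5,000 visitors per variation clears the 95% bar. Shrink the sample and the same lift would not, which is exactly why you keep the test running instead of calling it early.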
You’ll also want to keep in mind that the smaller the conversion boost, the longer the test will need to run, and vice versa. As such, if the improvement is only 5%, then you’ll need to run the test much longer than if it’s a 50% improvement.
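A standard sample-size formula makes this concrete. This Python sketch (assuming a hypothetical 3% baseline conversion rate, 95% confidence, and 80% power) estimates how many visitors each variation needs before a given lift becomes detectable:

```python
from math import ceil

def sample_size_per_variation(base_rate, lift, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variation to detect a relative lift,
    using the standard two-proportion formula (95% confidence, 80% power)."""
    p1 = base_rate
    p2 = base_rate * (1 + lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Hypothetical 3% baseline conversion rate
print(sample_size_per_variation(0.03, 0.05))  # 5% lift: hundreds of thousands of visitors
print(sample_size_per_variation(0.03, 0.50))  # 50% lift: only a few thousand visitors
```

Under these assumptions, detecting a 5% lift takes roughly 80 times more traffic per variation than detecting a 50% lift, which is why small improvements force much longer tests.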
#3: Big changes lead to bigger results
Another rule of thumb to keep in mind is that bigger changes have a greater chance of leading to bigger results.
If you change the button copy on your homepage, for example, you might only improve conversions by 5%.