Myths create misunderstanding among practitioners and confuse beginners. Some are worse than others.
With the help of top experts in conversion optimization, we’ve compiled an updated list of CRO myths that just won’t go away.
1. “Just follow best practices.”
This may be the most pervasive myth in conversion optimization. It’s too easy (and effective) for a blogger to write a post of “101 Conversion Optimization Tips” or “150 A/B Test Ideas that Always Work.”
These articles make it seem like conversion optimization is a checklist, one you can run down, try everything, and get massive uplifts. Totally wrong.
Let’s say you have a list of 100 “proven” tactics (backed by “science” or “psychology,” even). Where should you start? Would you implement them all at once? Your site would look like a Christmas tree. (Even if you tried to test the 100 tactics one by one, the average A/B test takes about four weeks to run, so it’d take you roughly 7.7 years to run them all.)
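A back-of-the-envelope check, assuming four weeks per test and tests run one after another (both figures are rough assumptions), makes the point:

```python
# Rough time cost of testing 100 tactics one at a time.
tactics = 100
weeks_per_test = 4                      # rule-of-thumb duration for a single A/B test
total_weeks = tactics * weeks_per_test
print(f"{total_weeks} weeks, or about {total_weeks / 52:.1f} years")  # ~7.7 years
```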
Some changes will work. Some will cancel each other out. Some will make your site worse. Without a rigorous, repeatable process, you’ll have no idea what had an impact, so you’ll miss out on a critical part of optimization: learning.
2. “Only three in 10 tests are winners, and that’s okay.”
You would also have a good indication of where the problems are because you did the research—the heavy lifting.
3. “Split testing is conversion rate optimization.”
But optimization is really about validated learning. You’re essentially balancing an exploration–exploitation problem as you seek the optimal path to profit growth.
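To make the exploration–exploitation idea concrete, here’s a minimal epsilon-greedy sketch (the variant names and their “true” conversion rates are hypothetical): most traffic goes to the current best performer (exploitation), while a small slice keeps probing the alternatives (exploration).

```python
import random

# Epsilon-greedy sketch: hypothetical variants with hypothetical "true" conversion rates.
true_rates = {"control": 0.050, "variant_b": 0.055, "variant_c": 0.048}
epsilon = 0.1                                   # share of traffic reserved for exploration
stats = {v: {"visits": 0, "conversions": 0} for v in true_rates}

def observed_rate(variant):
    s = stats[variant]
    return s["conversions"] / s["visits"] if s["visits"] else 0.0

for _ in range(100_000):                        # simulated visitors
    if random.random() < epsilon:               # explore: try a random variant
        choice = random.choice(list(true_rates))
    else:                                       # exploit: send traffic to the current leader
        choice = max(true_rates, key=observed_rate)
    stats[choice]["visits"] += 1
    if random.random() < true_rates[choice]:
        stats[choice]["conversions"] += 1

for variant, s in stats.items():
    print(variant, s["visits"], f"{observed_rate(variant):.2%}")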
4. “We tried CRO for a few weeks. It doesn’t work.”
Often, companies will throw in the towel if results don’t appear immediately.
Tony Grant offered a hypothetical client quote that all too many optimizers are likely familiar with: “I tested my CTA colour once and saw absolutely no increase in sales. A/B testing isn’t working for us.”
5. “Testing can validate opinions.”
Yet, too often, optimization is (mis)used to validate those biased opinions.
Similarly, we’re tempted to tell post-test stories. When we find a winner, we use storytelling to validate our opinions.
6. “Just do what your competitors are doing.”
Most case studies don’t supply full numbers, so there’s no way of analyzing the statistical rigor of the test. Plenty are littered with low sample sizes and false positives—one reason why most CRO case studies are BS.
Also, if you’re spending time copying competitors or reviewing shady case studies, you’re not spending time on validated learning, exploration, or customer understanding.
7. “Your tool will tell you when to stop a test.”
Tools are getting more robust, and they’re making it more intuitive to run a test properly. But you still have to learn basic statistics.
Statistical knowledge helps you control Type I and Type II errors (false positives and false negatives) and avoid imaginary lifts. There are some heuristics to follow when running tests:
- Test for full weeks.
- Test for two business cycles.
- Set a fixed horizon and sample size before you run the test. (Use a sample-size calculator before you start; a quick sketch follows below.)
- Keep in mind confounding variables and external factors (e.g., holidays).
- You can’t “spot a trend”—regression to the mean will occur. Wait until the test ends to call it.
Still, popular benchmarks—like “test for two weeks”—have caveats.
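For example, the fixed sample size can be worked out up front with a standard two-proportion power calculation. A minimal sketch, with a hypothetical baseline rate, target lift, and traffic level:

```python
from math import ceil
from scipy.stats import norm

def sample_size_per_variant(baseline, relative_lift, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant for a two-proportion z-test."""
    p1 = baseline
    p2 = baseline * (1 + relative_lift)         # the lift you want to be able to detect
    z_alpha = norm.ppf(1 - alpha / 2)           # controls Type I error (false positives)
    z_beta = norm.ppf(power)                    # controls Type II error (false negatives)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Hypothetical numbers: 3% baseline conversion rate, 10% relative lift you hope to detect.
n = sample_size_per_variant(baseline=0.03, relative_lift=0.10)
print(n, "visitors per variant")

# With, say, 2,000 visitors per day split across two variants, that's the fixed horizon:
print(ceil(2 * n / 2000), "days -- then round up to full weeks and business cycles")
```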
8. “You can run experiments without a developer.”
Your testing tool’s visual editor can do it all, right? Nope.
9. “Test results = long-term sales.”
Not every winning test will prove, in the long run, a winning implementation. Too often, as Fiona De Brabanter lamented, tests return a:
Ridiculously high increase in conversion rates but not the actual sales to show for it.
So why the imaginary lifts? Often, they result from stopping tests too soon.
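A quick simulation illustrates the problem. The sketch below (all numbers hypothetical) runs A/A tests, where both variants are identical, and compares two rules: stop the first time p < 0.05 at one of ten interim peeks, versus evaluate once at the planned sample size. Peeking declares far more false “winners” than the nominal 5%.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
RATE = 0.05          # identical conversion rate for both variants (an A/A test)
N = 20_000           # planned visitors per variant
PEEKS = 10           # number of interim looks at the data

def p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a two-proportion z-test."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (conv_a / n_a - conv_b / n_b) / se
    return 2 * (1 - norm.cdf(abs(z)))

peeking_wins = fixed_horizon_wins = 0
for _ in range(1_000):                           # 1,000 simulated A/A "experiments"
    a = rng.random(N) < RATE
    b = rng.random(N) < RATE
    checkpoints = np.linspace(N // PEEKS, N, PEEKS, dtype=int)
    peeking_wins += any(p_value(a[:n].sum(), n, b[:n].sum(), n) < 0.05 for n in checkpoints)
    fixed_horizon_wins += p_value(a.sum(), N, b.sum(), N) < 0.05

print(f"False winners when stopping at the first significant peek: {peeking_wins / 1000:.0%}")
print(f"False winners when waiting for the fixed horizon:          {fixed_horizon_wins / 1000:.0%}")
```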
Other factors can create a gap between test results and post-implementation performance.
Ideally, you can close the gap by accurately calculating a net value for your testing program.
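One simplified way to frame that net value, under the (generous) assumption that post-implementation lifts can actually be measured and summed; every figure below is hypothetical:

```python
# Hypothetical, simplified framing of a testing program's net value over a year.
tests_run = 40
implemented_win_rate = 0.25          # share of tests that produce an implemented winner
avg_validated_lift = 0.04            # average relative lift confirmed *after* implementation
affected_annual_revenue = 2_000_000  # revenue flowing through the tested pages
program_cost = 150_000               # tooling + people + development time

gross_value = tests_run * implemented_win_rate * avg_validated_lift * affected_annual_revenue
net_value = gross_value - program_cost   # naive: assumes lifts are independent and additive
print(f"Gross value: ${gross_value:,.0f}   Net value: ${net_value:,.0f}")
```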
10. “Test only one thing at a time.”
As Laja puts it, the “test only one thing at a time” argument is
scientifically good, practically not good advice for the majority. While stats puritans will find issues with this, I’m in the “we’re in the business of making money, not science” camp myself.
Indeed, the “one thing at a time” logic can be taken to the extreme.
At its heart, optimization is about balance.
For the integrity of the industry, it’s important to call out destructive myths and mistruths. That way, those just starting to learn will have a clearer path, and businesses new to optimization won’t get discouraged by disappointing (i.e., non-existent) results.
There are many more myths that we probably missed. What are some destructive ones that you’ve come across?
An earlier version of this article, by Alex Birkett, appeared in 2015.