Is it okay to be A/B testing more than one Landing Page element at a time?
I was exhausted and annoyed, but more than anything I was embarrassed.
It took me months to realize I had become the latest victim of a conversion optimization myth that still persists today.
“Don’t A/B test more than one element at a time,” I read in popular marketing blog posts written by “experts.”
Not only did I practice it, but worse, I disseminated it as fact.
I learned the hard way that I had jumped into A/B testing prematurely.
It cost me precious time and chunks of my client’s budget, and the worst part was I had nothing to show for it.
I hadn’t produced a sustained conversion rate lift.
There was no 300-million-dollar button.
Hell, there wasn’t even a 20-dollar button.
Here’s what you need to know to avoid falling into the same trap I did.
The A/B testing misunderstanding that leads beginners astray
I thought A/B split testing was fairly one-dimensional when I started.
You test your original page (your control) vs. another version (your variation) with only one difference.
After driving equal traffic to both, the one with the highest conversion rate is the winner.
“With only one difference” was the phrase that stuck with me.
It made sense: if I wanted to find a better headline, I would create a variation of my control page with a different headline while keeping everything else the same.
Then, at the end of the test, I would know which headline was better because it was the only change I made between the two pages.
Afterward, I’d move on to the next element, like the form, and after that, the next.
The process would look something like this:
The A/B testing case studies I read prescribed this method.
They were all headline vs. headline, button vs. button, or image vs. image, for example.
And so, I followed them — that is, until I learned about something called the “global maximum” and the “local maximum.”
The problems with A/B testing this way
To be clear, there’s nothing inherently wrong with that method of changing only one element per test.
You can do it, but a few problems arise when you do.
It can take time and resources to conclude your A/B test
To be confident about the results of your test, you have to run it until you reach statistical significance.
For sites with a steady stream of traffic, that should take around 2-4 weeks, according to Peep Laja (to figure out how long you’ll need to run yours based on traffic, use this calculator from VWO).
For most websites, it takes even longer.
And if you don’t have the resources, testing one element every month or more is an impractical way to find the ingredients of a high-converting page.
It’s also an impractical way to make site-wide optimizations that can require the testing of hundreds of pages, which each include countless elements.
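To see why one-element tests take so long, it helps to look at the math behind "statistical significance." Below is a rough sketch (my own, not the VWO calculator's actual formula) of the standard two-proportion sample-size estimate. The baseline rate, lift, confidence, and power values are illustrative assumptions:

```python
from math import sqrt, ceil
from statistics import NormalDist

def visitors_per_variant(baseline, lift, alpha=0.05, power=0.80):
    """Approximate visitors needed in EACH variant to reliably detect
    a relative `lift` over a `baseline` conversion rate
    (95% confidence, 80% power by default)."""
    p1 = baseline
    p2 = baseline * (1 + lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-tailed critical value
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# A page converting at 4%, hoping to detect a 10% relative lift
# (i.e., 4% -> 4.4%) from a headline tweak:
print(visitors_per_variant(0.04, 0.10))
```

Run the numbers and you'll see the catch: detecting the small lifts that single-element tweaks typically produce requires tens of thousands of visitors per variant, which is exactly why each test can drag on for a month or more.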
Small tweaks don’t normally bring big returns
Expedia once generated $12 million by removing one field from a checkout form:
But in most cases, a subtle change to your page won’t yield a big payout like this.
If you’re not a business like Expedia that generates billions in transactions every year, a small percentage increase in conversion rate won’t bring a big return.
A/B testing this way assumes you were on the right track to begin with
Another big problem with this method of testing is that it assumes you were on the right track to begin with.
This is where global and local maximums come into play.
Let’s use an example.
Imagine you’ve created an ebook landing page that features a layout you presume your audience will love.
You generate traffic to it and after a month, your page is converting at a modest 4%.
Unsatisfied, you start to test.
First it’s headline vs. headline, next it’s a 3-field form vs. a 4-field form, then you try a red button vs. an orange one.
Months go by and eventually, you run out of things to test.
You’ve reached what’s called the “local maximum.”
What is the local maximum?
In landing page A/B testing terms, the local maximum is the best version of your current page.
Optimizing beyond this would likely bring diminishing returns.
Now that you’ve tested numerous headlines, forms, images, and buttons (there are plenty of other things to test, mind you), you’ve created the page most capable of converting your visitors, right?
The global maximum and the stone house
Two builders were tasked with constructing a house for a nobleman.
“Whichever of you builds the most worthy home will be rewarded with wealth beyond imagination,” he told the pair.
Eagerly, the men got right to work and after six long months of arduous labor, they summoned the nobleman to evaluate the two homes.
“This is the most beautiful wooden house I’ve ever seen,” he said of the first builder’s construction.
Indeed, no one had ever seen wood manipulated in such a way.
Even the second builder was awestruck at its ornately detailed design.
As the nobleman stood in front of the second house, it was clear he was less impressed with the modest stone structure.
“Obviously your rival is a much better designer,” he said, “though, this home does possess some worthy qualities.
I’ll make my decision tomorrow.”
The nobleman returned to his home to ponder, and before he retired that night he had made a decision.
While he slept, though, a powerful storm raged, and when he returned to the site of the two homes the next day, he was disappointed to find the beautiful wooden house in pieces. Next to it, the second builder’s stone structure stood tall.
“You built the best wooden house I’ve ever seen,” said the nobleman to the first builder, “but evidently you did not build the best house.”
Builder number one had found the “local maximum.” It was as good as a wooden house could get.
But while he focused on beautifying the features of the home, he failed to consider that there was a better way to build it.
That’s what happens when you start with only one page and A/B test to optimize one element at a time.
You can improve your page to a point, but while doing so you may completely miss the global maximum: the better design.
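The same idea can be shown with a toy hill-climbing sketch (my own illustration; the landscape and conversion numbers are made up). Tweaking one element at a time is like nudging a dial in whichever direction converts better, which only ever climbs the hill you started on:

```python
from math import exp

def conversion_rate(x):
    # A made-up "conversion landscape" with two page designs:
    # a small hill near x=2 (the local maximum) and a taller
    # one near x=8 (the global maximum).
    return 0.04 * exp(-((x - 2) ** 2)) + 0.09 * exp(-((x - 8) ** 2))

def hill_climb(x, step=0.1):
    # Nudge one "element" in whichever direction converts better;
    # stop when neither neighbor beats the current design.
    while True:
        here = conversion_rate(x)
        left, right = conversion_rate(x - step), conversion_rate(x + step)
        if here >= left and here >= right:
            return x, here
        x = x - step if left > right else x + step

local_x, local_cr = hill_climb(1.0)    # tweaking the original design
global_x, global_cr = hill_climb(8.5)  # starting from a radical redesign
```

Starting near the original design, the climb stalls at roughly a 4% conversion rate, while the radically different starting point reaches about 9%. No amount of small steps gets you from the first hill to the second.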
Here’s an image to help conceptualize:
Maybe you’re so focused on getting your users to download your ebook that you haven’t considered they’d rather claim a template or a tool.
Or, maybe your page’s layout is confusing to your visitors.
Maybe the form, CTA button, hero image and copy are all in the wrong place.
Are you going to test a page with the image at the top, then aligned right, then left, and then bottom?
And then the same for the CTA button, copy, and form?
I’d hope not.
You can adjust small features like the headline or button color, or even the typeface if you want to.
But if all the content on that page is content that doesn’t resonate with your audience, then it will never achieve the goal of producing the most conversions possible.
The true strength of A/B testing
In most cases, the true strength of A/B testing is its ability to identify the page closest to the global maximum.
By testing drastically different designs, you’ll be able to find which page your visitors prefer, and then optimize from there with multivariate testing.
A few case studies to illustrate:
A/B cluster test case studies
1. Brookdale Living
The variation on the right ultimately boosted conversions by 3.92%, which translated to an extra $106,000 in monthly revenue.
Nitin Deshdeep explains:
The variation with image worked exceedingly well. The page offered much greater value to the visitors through its newly added features — testimonials, USPs and credibility logos — and pushed them for a conversion.
Now imagine if the testers had focused only on optimizing the elements of the original page by testing button colors and headlines.
They would never have reached the result they did with the drastically different third page.
The same goes for the next example.
2. Investopedia
The image above was a landing page for a free trial of Investopedia Advisor, a source of monthly “market beating stock tips.” It converted at 1.33%.
When the team at MarketingExperiments was tasked with optimizing it, they hypothesized it could use multiple changes.
This one boosted conversion rate by over 89%.
A/B testing isn’t just for page elements like headlines and buttons.
It can also be used to test things more focal to the success of your business, like the business model itself.
3. Lumosity
Optimizely’s Cara Harshman explains a problem that confronted the brain-training software company Lumosity:
Lumosity’s scientists recommended that users train for 15 to 20 minutes a day, 4 to 5 times per week—not unlike physical exercise—although the site didn’t actually constrain users to a specific time limit. The data showed that people would stay logged in for many hours, but that over time, the frequency of logins declined, suggesting users were burning out.
Their team took a risky optimization approach: They hypothesized that limiting individual user sessions would boost engagement.
And so, they made multiple changes to the user interface, then tested the old unlimited version of the training…
Against the new limited version:
The results were astonishing to the team: people actually trained more as a result of being limited.
But how will you know what brought your lift?
At this point you may be wondering, “But if I test multiple page elements at a time, how will I know what produced the lift?”
I’d like to counter with: Do you care?
If the new version of whatever you’re testing performs better than the old one, does it matter?
A/B test to get as close as you can to the global maximum, and then, if you’re interested in learning how different elements interact with each other, use multivariate testing.
It’s really not as hard as people make it out to be!