A/B Testing: A Marketer's Nightmare or the Key to Success?
Most marketers have heard about A/B testing, even if they haven’t regularly used it themselves. At the very least, the phrase “A/B testing” is a buzzword that is often bounced around in conversations and floated out in boardrooms.
But without a full grasp of what constitutes effective A/B testing, and how to perform it successfully, this very important tool can fall flat.
Getting Top-Notch Results
First, let's consider all the good that proper A/B testing can produce.
- Reduced bounce rates
- Better conversion rates
- Increased, higher-quality content engagement
- More revenue!
Furthermore, companies that regularly engage in A/B tests often stimulate a productive, collaborative experimentation culture in which seemingly great ideas are neither blindly followed nor quickly dismissed—but rather tested for merit.
The Fairytale Perception
It's no wonder that "A/B testing" is such a buzzword. With these possible results dangling on a string in front of marketers, it's tempting for us to reach out and try to grab them. After all, when was the last time your boss told you, "Our marketing campaign results are too good, we need to reduce their effectiveness..."? (I know, I laughed at the thought of that too!)
More often than not, we are being challenged to make our campaigns even more effective and engaging. So, why not leverage A/B testing to see what tactics and messaging work better? How hard can it be?
A/B testing is very effective, but it is a lot of work. You can't just kiss a frog and, poof!, it turns into your prince.
I often see marketers fall into a common trap when creating and running A/B tests: treating any uptick as a meaningful win.
Just because 50 more people engaged with your email send does not necessarily mean the results are good and worth implementing in more campaigns. Consider the following:
- How big was your total audience?
- Are we talking about an engagement increase of less than 1% when compared to your original version?
- Even if the results were notable, could they have been a fluke?
- Did you properly segment and randomly target your audience?
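To make those questions concrete, here is a minimal sketch (in Python, with hypothetical click counts and audience sizes) of a standard two-proportion z-test. It shows that the same 50-click lift can be meaningful or pure noise depending on how big the audiences behind it are:

```python
from math import sqrt, erf

def two_proportion_p_value(clicks_a, n_a, clicks_b, n_b):
    """Two-sided p-value for the difference between two conversion
    rates (pooled two-proportion z-test, normal approximation)."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # convert |z| to a two-sided p-value via the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical numbers: a 50-click lift on two different audience sizes
print(two_proportion_p_value(300, 10_000, 350, 10_000))  # large audience
print(two_proportion_p_value(30, 1_000, 35, 1_000))      # small audience
```

With 10,000 recipients per version, that lift lands near the conventional 0.05 significance threshold; with 1,000 recipients and the same rates, it is nowhere close. Same headline number, very different conclusions.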
Don't get me wrong, I am absolutely in favor of A/B testing. The positive ramifications of incorporating proper A/B testing are many. But as you can see, “proper” is an important part of this. With too little information or not enough planning, the actual A/B test and anything you decide to implement as a result can be a waste of time and resources.
Tips for Success
You may be asking yourself, "Ok, how do I ensure that the time and resources I apply to A/B testing will give me those coveted results?"
Here are six tips that will help you better understand this valuable tool and implement it to help your business meet its most pressing goals.
Tip #1: Understand What A/B Testing Actually Is (and just as important, what it is not...)
What is A/B testing? It’s defined as “comparing two versions of something to see which one yields better results.”
Remember back to those days when you had to bring something to the school bake sale? If you were like me, you baked two different kinds of cookies, and brought them both to school. Then, you compared the total sales of each flavor to determine which one was more popular. Congratulations—you just conducted an A/B test (at least in concept)!
Fast-forward and now you have a "real" marketing job. You can still use that experimentation tactic much as you did when you were younger. Remember, you should only have two versions in your test. If you have more, then that is called multivariate testing, and that is a whole different beast.
Tip #2: Only Change One Thing in Your Test
You should only test one element at a time. I have seen many A/B email tests where the marketer had two versions but changed several things at once in the alternate: the headline image, the subject line, the colors, and the call-to-action (CTA) button text.
The problem is that if your alternate version outperforms your original, it's hard to know why. Was it the subject line? Maybe it was the new CTA button text. Another possibility is that most of the changes helped, but the new colors slightly hurt results. You can see how testing more than one element at a time gets confusing very quickly.
Tip #3: Get Strategic When You Plan Your Test
The thought of A/B testing can be overwhelming, especially if you think of the litany of aspects you can test, ranging from emails to web pages to online ads and beyond.
To get the most bang for your buck (especially if you don't have a lot of resources), try to align your test with something the organization will feel the benefits of right away. For example, if your company recently launched a product in a newer market, you may want to test a campaign's messaging to better understand what resonates with that audience.
Kudos if you can also align your test results with actual revenue your test influenced.
Tip #4: Create a Hypothesis
What do you anticipate your test will yield? After all, this is an experiment, so channel your science-project skills from when you were younger.
Without a hypothesis, how will you know if your test was successful when you measure the results?
This could be something like: “We expect that using a clear Call-to-Action (CTA) button, instead of a text link, will increase the number of people who click the link in the email.”
Tip #5: Don't Forget About Statistics
This is where most people fall short in their A/B testing.
Whenever you deal with data, you need to apply certain rules and parameters to obtain reliable results. That is how statistics fits into A/B testing. Did you know that you need a minimum number of people in your test? This concept is called "sample size." If you don't build this concept (and others) into your test, you run the risk of acting on false positives.
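As a rough illustration of how a minimum sample size is worked out, here is a sketch (in Python; the baseline rate, the lift you hope to detect, and the default significance and power levels are all hypothetical inputs you would choose for your own test) using the standard normal-approximation formula for comparing two proportions:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(baseline_rate, min_detectable_effect,
                            alpha=0.05, power=0.80):
    """Approximate recipients needed per variant to detect a given
    absolute lift between two proportions (two-sided z-test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # significance
    z_beta = NormalDist().inv_cdf(power)           # statistical power
    p1 = baseline_rate
    p2 = baseline_rate + min_detectable_effect
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(numerator / min_detectable_effect ** 2)

# e.g. a 3% baseline click rate, hoping to detect a lift to 4%
print(sample_size_per_variant(0.03, 0.01))
```

Notice that the required audience runs into the thousands per variant for rates this low, and that chasing a smaller lift demands a much bigger audience. This is exactly why a 50-person bump in a small send proves very little.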
Tip #6: Analyze the Results
Just because one version has better results than the other doesn't mean it truly is the winner.
Sometimes, results are conclusive (meaning they are statistically significant), but other times they're inconclusive (meaning there is no clear winner). This is all determined by the rules of statistics. Inconclusive results are inevitable, even with the best-planned tests. You will need to decide if the test is worth running again or if you want to cut your losses and test something else.
If you’re not sure how to calculate the conclusiveness of your results, some software has a built-in calculator that tells you whether or not they’re statistically significant. And if not, there are free tools to help you do this too! Just do an online search for 'A/B test calculator' or a similar phrase.
The bottom line is: running a proper A/B test can give you a lot of insight into what your audience is seeking, how to increase your conversions, and how to use data to make smarter company decisions. But you have to invest in pre-planning, and then take the time to analyze your results and act on them.
When all these steps are taken, the power of A/B testing can be dramatic.