What is A/B or Split Testing?
A/B testing, or "split testing," is the process of testing different versions of a webpage to see which one produces more conversions. When running a test, traffic is split between the tested pages; whichever page yields more conversions is the winner. The A/B or split testing process is a five-step procedure that consists of:
- Data collection
- Creating a hypothesis
- Testing the hypothesis
- Implementing changes if the results are conclusive
- Repeating the process
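The traffic split itself is usually a deterministic random assignment: each visitor is hashed into bucket A or B so that the same visitor always sees the same version. A minimal sketch in Python (the experiment name, user ID format, and 50/50 split are illustrative assumptions, not from the article):

```python
import hashlib

def assign_variant(user_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically assign a user to variant 'A' or 'B'.

    Hashing user_id together with the experiment name keeps assignment
    stable across visits and independent between experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "A" if bucket < split else "B"

# The same user always lands in the same bucket for a given experiment:
print(assign_variant("user-123", "cta-color"))
```

Deterministic hashing (rather than a per-request coin flip) matters because a visitor who sees version A on one visit and version B on the next would contaminate both samples.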
Why Is A/B Testing Important?
A/B testing allows you to test a wide variety of elements to find the highest-converting variations. If you're investing a lot of time, money, and effort in driving traffic to your website, A/B testing ensures that you're using the best-performing elements on your webpage to turn those visitors into conversions. Using conversion analytics, you can gather data about your webpages to identify conversion blockers in your conversion funnel. Once you've located these blockers, you can A/B test your hypothesis to see if it is correct and implement the results.
Within any online business, there is no one-size-fits-all solution. You have to continually gather data and run your A/B tests because there is always room for improvement as virtually any and every element on your website can be tested.
How to Run a Five-Step A/B Test
Collecting Data
Collecting data about your website is the first step to forming any hypothesis. This data can come from Google Analytics or a conversion analytics tool such as Unamo CRO. The data doesn't always have to be numerical; it can be about who is using your site, from where, with what device, etc. You can acquire important data from any of the following sources:
- Bounce Rate - Why do visitors spend more time on some pages than others? What is it about certain pages that visitors aren't finding valuable?
- Traffic - Where is most of your traffic coming from? Are your marketing campaigns driving traffic to your site from the right sources?
- Demographics - What types of users are on your site? Are you reaching your target audience? In your target location?
- User feedback - What do users say about your website? From their feedback, what can you do to improve your website?
- Heatmaps - What elements are users interacting with on your website? Are they clicking your calls-to-action? Are they scrolling far enough down the page to find other valuable CTAs or content?
- Session Replay - How does your users' behavior align with your goals? Are they using your features and website as intended?
- Segments - Are you segmenting your users correctly? How does one segmented group behave and click differently from another when they have the same call-to-action? Again, is the right audience finding or using your website?
- Form Analytics - How do users interact with forms on your website? Is it clear what information the forms are asking for? Is the information you're requesting relevant to the purchase or subscription?
All of these data sources provide metrics that can uncover these problems, and others, on your website. Focus on creating a plan to uncover why the metrics look the way they do, rather than implementing changes immediately based on the numbers alone.
Data Collection Example:
You're getting a lot of traffic to your main landing page where the call-to-action is a free trial. However, you can see that a lot of clicks are being directed to other links on that page. What do you do? Do you change the copy on your call-to-action? Change the color? Location?
The possible solutions are many, which is why it's important to gather data and implement a plan not only to fix the problem but also to uncover why it exists, so that your future page optimization efforts can benefit from your previous mistakes.
Forming a Hypothesis
After you have compiled data suggesting that you have problem areas on one or more pages, you need to create a hypothesis about why. A hypothesis should be formed as a statement, not a question.
The formula usually looks like:
- If I do ______, then _____ will happen
- Ex: If I change my CTA button color to yellow, my conversions will increase.
Your hypothesis is an educated guess, based on the data you have gathered, about what will happen when you test your prediction.
You should also keep external variables in mind when creating your hypothesis. Is it affected by user segments? Seasonal or holiday surges in traffic? Test results for any hypothesis can vary greatly if certain variables are not accounted for.
Depending on the page, the conversion barriers may be vastly different. A high bounce rate on your homepage, a low percentage of form completions, or a lack of interaction with the CTAs on a certain page can each have numerous possible causes. Account for the variables within each given page and problem, and base your hypothesis and tests on data that is unaffected by them.
Testing Your Hypothesis:
After you have created a hypothesis, it's time to test it. However, there are a few guidelines to keep in mind when beginning to test your hypothesis:
- One Element at a Time - If you test two different elements at the same time, such as replacing an image and the copy of a call-to-action, one variation may have a higher conversion rate, but you won't know whether the image or the copy made the difference.
- Measurable Tests - Your findings should be backed by quantifiable data that you can measure either by clicks, conversions, page scroll depth, bounce rate, etc.
- Consistent Variables - Keep the controlled variables identical across the versions you run: the time period or number of sessions and the user segments must be the same for each version for the results to be accurate. Make sure the sample sizes are large enough, and equal, to get the most accurate data possible.
- Give Your Tests Time - Just because one version opens a wide lead over the other doesn't mean it's your clear winner. Let the test run until the time or session limit is reached before you implement any changes.
When you have checked off all of the guidelines above, you're ready to run your tests.
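How large is "large enough" for a sample? The article doesn't give a formula, but a standard back-of-the-envelope approach is the two-proportion normal approximation for sample size. A sketch, where the baseline rate, target rate, significance level, and power are all illustrative assumptions:

```python
from math import sqrt, ceil
from statistics import NormalDist

def sample_size_per_variant(p1: float, p2: float,
                            alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate visitors needed per variant to detect a change in
    conversion rate from p1 to p2 (two-sided test, normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for significance
    z_b = NormalDist().inv_cdf(power)          # critical value for power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# e.g. detecting a lift from a 5% to a 6% conversion rate:
print(sample_size_per_variant(0.05, 0.06))
```

The smaller the effect you want to detect, the more visitors each version needs, which is one concrete reason behind the "give your tests time" guideline above.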
Implementing Conclusive Results
One of the most important rules for A/B testing is to only implement changes from conclusive test results. A common myth of Conversion Rate Optimization is that every test will yield results that you can implement immediately. This is not true. Sometimes you have to run many tests on the same issue to find results that are conclusive.
If you believe that your test results are strongly conclusive, meaning the variation performs definitively better than the baseline you measured in step 1, go back to your data collection metrics and test it again against your original statistics. If the metrics hold up over multiple tests, then you know you have a true winner.
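One common way to decide whether a result is "conclusive" is a statistical significance test. The article doesn't prescribe a method, but a two-proportion z-test is a standard choice; a sketch, with made-up conversion counts:

```python
from math import sqrt
from statistics import NormalDist

def ab_test_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates
    (two-proportion z-test, normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# 500/10000 conversions on A vs 600/10000 on B: conclusive at p < 0.05?
p = ab_test_p_value(500, 10000, 600, 10000)
print(f"p-value: {p:.4f}", "conclusive" if p < 0.05 else "inconclusive")
```

A p-value above your threshold is exactly the "inconclusive result" case described above: rerun the test or gather more data rather than shipping the change.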
Repeat Your Findings
Within a competitive online market, you should constantly be running tests on your website to find the best method or conversion funnel that leads to the most conversions. As stated before, depending on your website and goals, each test may be different; however, keeping in line with the guidelines listed above will ensure that your tests reflect accurate data and lead to an increased understanding of user behavior and interaction on your website.
For instance, a headline you A/B tested had a clear winner. Great! Now you can test the font, case style, length, punctuation, etc. With conversion rate optimization, there are many different variables you can test even for just one element.
How Can I A/B Test My Webpages?
To A/B test your webpages, you'll need either a landing page tool or a manual approach: serve one version for a specific number of sessions, then serve the alternate version for the same number once the first limit has been reached. A landing page tool will save you time and provide metrics that make designing elements on your page easier. If you're short on funds for a landing page tool, test elements one at a time over a session limit and switch to the alternate version after the first limit expires.
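The manual route described above can be as simple as a session counter; the session limit here is an illustrative assumption, and the comment notes the trade-off versus a simultaneous split:

```python
def variant_for_session(session_number: int, sessions_per_version: int = 1000) -> str:
    """Serve version A for the first N sessions, then version B for the next N.

    Caveat: because the two versions run at different times, seasonal or
    campaign-driven traffic shifts can skew the comparison; a simultaneous
    hashed split is more reliable when a testing tool is available.
    """
    return "A" if session_number < sessions_per_version else "B"

print(variant_for_session(500))   # within version A's window
print(variant_for_session(1500))  # within version B's window
```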