
Creative testing is one of those tasks that many advertising teams never quite master. Between opening up profitable channels, testing campaign configurations, and optimizing for different events, all while keeping up with new industry best practices, who could blame you for not finding the time to establish an A/B testing process?
If you find A/B testing creatives confusing, you’re in the right place. After this introduction you’ll be well on your way to helping your team develop its own ad testing practices. In no time you’ll be able to use this process to consistently extract creative learnings from campaigns and achieve higher ROI.
Creative testing works by showing a set of ads to the exact same audiences in order to evaluate which ad performs best. These tests allow advertisers to improve the performance of campaigns and play a key part in a data-based creative strategy. As a result, teams who use these insights to influence creative direction are much more likely to create “winners” with their new ad concepts than teams who rely solely on intuition.
Advertisers in both agency and in-house teams commonly use creative testing to find out which type of ads perform best with their target audience. Creative testing can also be used to test different versions of a single ad concept. Through an A/B test, an ad that may have been rejected due to low performance can be improved after a few tweaks.
When executed well, the results from these tests provide accurate and timely insight to guide the day-to-day decisions of brand managers and graphic designers alike.
Advertising agencies are often responsible for creating ads for clients. These ads are usually created using a combination of research and creativity. Part of that research involves looking at what’s worked and not worked in the past.
Creative testing provides learnings to creative teams and uses specialized techniques to ensure that those learnings can be relied on.
The value of a high-paced creative testing cycle is clear. With more teams adopting these practices, it’s about time you learned how to conduct these tests on your own campaigns.
We’ll soon be diving into the phases of a creative test, but don’t skip this next section. It’s just as important! Whether you plan on sharing your results with your marketing director or your team’s designers, make sure everyone is on the same page about the following three aspects of the test.
Defining your audiences is the key to a successful creative testing campaign. That’s because, regardless of the approach, a creative test can only provide insight into the specific audience that received the test. This might sound obvious, but it’s also very easy to make the mistake of overgeneralizing learnings.
Before starting your creative testing, define exactly who it is you want to better understand. Perhaps your company has multiple customer personas; in that case, pick one. If your company is focusing on developing a new market, customers in this new geo could be your audience.
Specifying your audience ahead of time will ensure your learnings are valuable to your team and relevant to your company’s goals. If at this point you’re thinking that you might actually have to run multiple tests to get all of the insights you need, you are absolutely correct.
After deciding on your audience, choose the element you want to test. You might simply want to test your most recent ad against your current or previous top performer. While these are important tests, their results are usually difficult to interpret.
Think about it this way: after your A/B test finishes you’ll have a winning ad concept, but you won’t know why it worked.
There’s no point in A/B testing new ads if you’re not able to explain the results.
In a few paragraphs, we’ll explain how to produce a test hypothesis to help you choose what to test, but for now, note that you should try to pick a single element to test.
You’ve chosen your audience and picked ads that will help you better understand how this audience reacts to different messages or visuals. Now, on what basis will you judge if an ad was better? Lower CPM? Higher CTR? Higher conversion rate?
You need to decide this up front because each ad in a test might perform best on a different metric. By picking one metric, you’ll avoid confusion when declaring a winning ad. To do this effectively, think about your campaign goals.
For example, if you’re measuring clicks on landing pages, take some time to think about whether the objective of your A/B test is to reduce your cost-per-click or whether it’s actually to increase the conversion rate of users who reach the landing page.
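To see why this matters, here’s a minimal sketch with made-up numbers showing how the “winner” can flip depending on which metric you judge on:

```python
# Two hypothetical ads with identical spend (all numbers are made up)
ads = {
    "Ad A": {"spend": 500.0, "clicks": 1000, "conversions": 20},
    "Ad B": {"spend": 500.0, "clicks": 625, "conversions": 25},
}

for name, ad in ads.items():
    cpc = ad["spend"] / ad["clicks"]        # cost-per-click
    cvr = ad["conversions"] / ad["clicks"]  # landing page conversion rate
    cpa = ad["spend"] / ad["conversions"]   # cost per conversion
    print(f"{name}: CPC ${cpc:.2f} | CVR {cvr:.1%} | cost/conversion ${cpa:.2f}")

# Ad A: CPC $0.50 | CVR 2.0% | cost/conversion $25.00  <- wins on cost-per-click
# Ad B: CPC $0.80 | CVR 4.0% | cost/conversion $20.00  <- wins on cost per conversion
```

Ad A looks better if you only watch cost-per-click, but Ad B delivers conversions more cheaply. Deciding on the metric before the test starts removes this ambiguity.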
Quick tip: Keep in mind that while lower-funnel metrics might reveal more important learnings, the further down the funnel your metric sits, the more you’ll need to spend to reach conclusive results, as the sketch below illustrates.
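To get a feel for why that is, here’s a rough power calculation, a sketch assuming the statsmodels Python library and purely illustrative baseline rates. The rarer the event your metric measures, the more users each variant needs before the test can reach a conclusion:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

def users_needed(base_rate: float, relative_lift: float = 0.2) -> int:
    """Users per variant to detect a relative lift at 95% confidence, 80% power."""
    effect = proportion_effectsize(base_rate * (1 + relative_lift), base_rate)
    n = NormalIndPower().solve_power(effect_size=effect, alpha=0.05,
                                     power=0.8, alternative="larger")
    return round(n)

# Illustrative baseline rates (assumptions, not benchmarks):
print(users_needed(0.010))  # clicks at a ~1% baseline rate
print(users_needed(0.002))  # purchases at a ~0.2% baseline: roughly 5x more users
```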
At this point, you know what audience to analyze, what aspect of your ad to test and on what metric you’ll measure performance!
The pre-testing phase is done before any creative work begins. As a marketer, take some time to review all of the ads at your disposal and analyze the choice of colors, layouts and images that have already been tested.
It’s important to go through this process because this context will provide you with ideas for useful ad elements to test.
Choose Your Objective
This step determines how the ad will be evaluated. There are three main objectives that you can choose from. Think about which of these may help you evaluate and optimize your next test:
Create a Null Hypothesis to Test
You might remember your statistics teacher explaining the difference between the null hypothesis and the alternative hypothesis. If this is your first time hearing these terms, don’t worry: it really isn’t complicated!
In the context of advertising, think about hypothesis testing this way: you should assume that the ad element you’re testing has no impact on performance until proven otherwise. Therefore, the null hypothesis is that there will be no significant improvement in performance due to the change in the ad element. The alternative hypothesis is that the change you made will have an impact.
Your hypothesis should be as specific as possible, focusing your test on one specific creative element. For example, a well-formed hypothesis might sound like this:
“I want to know whether or not adding text to my ad helps users better understand my product. Therefore I expect an increase in ad conversion rate after adding more text.”
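To make this concrete, here’s a minimal sketch of how that hypothesis could be evaluated with a one-sided two-proportion z-test. The counts are made up, and SciPy is assumed for the normal distribution:

```python
import math
from scipy.stats import norm

# Made-up results: control = original ad, variant = ad with added text
control_conversions, control_users = 120, 10_000
variant_conversions, variant_users = 150, 10_000

p_control = control_conversions / control_users
p_variant = variant_conversions / variant_users
p_pooled = (control_conversions + variant_conversions) / (control_users + variant_users)

# H0: adding text has no effect on conversion rate
# H1: adding text increases conversion rate
se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / control_users + 1 / variant_users))
z = (p_variant - p_control) / se
p_value = 1 - norm.cdf(z)  # one-sided test

if p_value < 0.05:
    print(f"p = {p_value:.3f}: reject the null hypothesis, the added text likely helped.")
else:
    print(f"p = {p_value:.3f}: keep the null hypothesis, no significant improvement detected.")
```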
Every ad network is different, and specific instructions on how to create these tests go beyond the scope of this article. However, we’ve linked some documentation from the top ad networks, which explains how tests are set up on each platform:
Luckily, most ad networks today like Facebook and Google Ads will provide you with tools to set up tests on display or video ads. These types of campaigns will split impressions evenly between ads to give each a fair chance of winning.
This means you won’t have to worry about whether you can trust the results. If you’ve set up your test correctly and the campaign has sufficient data, you can be confident in the outcome.
Some networks offer the ability to test multiple versions of copy without launching an A/B test campaign. For instance, Google’s responsive search ads (RSAs) allow you to test as many as 15 headlines at the same time. While it can be difficult to come up with 15 headlines at once, there is a tool for that: we developed Flowin specifically to help bulk generate and deploy search ads.
Unfortunately, not all networks provide these advanced tools. On certain ad networks you’ll simply have to create a fresh campaign and test your ads there. Since the data already accumulated in an existing campaign can influence the result, it’s far safer to start from a blank slate.
Now that you have created several versions of the ad and run your test, it’s time to review and report the results. The goal of the test was to find out whether any of the variations worked better than the original. Unfortunately, not all tests are conclusive. One of three things will happen after the test concludes.
If the test didn’t lead to significant changes in performance, you could choose to keep running both ads in the original campaign. Make sure to communicate to your team that the test led to insignificant differences. You don’t want to accidentally influence your creative team to take a certain direction if there’s no data to back it up.
Once you find a winning combination, make sure you understand why one ad worked better than the other. If a single element was changed, then you know that this element was the reason for the better performance.
If your test wasn’t sufficiently precise, you might not be able to fully explain the performance. For example, if you changed the color of the ad from black to yellow and saw an improvement, how would you explain it? The reason for the improvement could be any one of the following:
As you can see, even with a precise test, marketers and brand managers play a crucial role in interpreting results. Spend some time after every A/B test considering whether the results confirm your original hypothesis.
After this thinking process you’ll likely have a bunch of new ideas to test out. If your discussion of the test results leads you to think that warmer colors improve click-through rates, what do you think the next step is?
If you guessed “Start All Over!”, you guessed right!
You can learn a lot about your audience by analyzing A/B tests. As you repeat the above process you’ll be able to learn more and more about your audience and find the types of ads that work on them.
After each A/B test, record the results and try to identify patterns. All of these learnings should provide a goldmine of ideas to share with the creative team and the rest of the marketing department.
Quick Tip: If there is no pattern or if the data isn’t consistent, try re-testing your same hypothesis on a new set of creatives.
You should always try to develop new ads that perform better than your old ones. After reading this article, I hope I won’t have to sell you on the value of introducing A/B testing into your marketing department’s standard processes. However, before ending the article with takeaways and final tips, let’s summarize the impact of A/B testing:
When it comes to A/B testing, best practices have evolved over time, but certain principles remain important. If you’ve read this far, you’re well on your way to creating the perfect A/B test. Here are 5 final tips to get you started on the right foot.
You might have realized that the above process can be very time-consuming. Properly managing creative testing campaigns and measuring their results requires dedicated team members.
Alternatively, Flowin offers technology that specifically addresses the challenges of A/B testing and, in addition, offers insights that are impossible to achieve without the use of AI.
Request access to Flowin if you’re interested in learning more about our tool!
There are two main creative testing methods that marketing departments make use of when running their tests:
After our discussion of hypothesis tests, it might not surprise you that some knowledge of statistics is crucial. Testing new ad ideas only delivers value if you can be confident in the results. Having statistically significant results simply means that you can be confident in your test.
Most ad networks will let you know whether or not a test is significant; if the network you’re testing on doesn’t, you’ll have to work with a data analyst for this part. We’ll be covering this topic in another article.
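If you do end up checking significance yourself, the calculation is a one-liner with off-the-shelf tools. As a sketch, here’s the same one-sided two-proportion z-test from earlier, this time using the statsmodels library and the same made-up counts:

```python
from statsmodels.stats.proportion import proportions_ztest

# Conversions and users for [variant, control]; made-up numbers
z_stat, p_value = proportions_ztest(count=[150, 120], nobs=[10_000, 10_000],
                                    alternative="larger")
print(f"p = {p_value:.3f}")  # below 0.05 means the result is statistically significant
```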
Whether your test led to a new top-performing ad or an abandoned creative concept, make sure to save these learnings somewhere accessible to everyone involved in creative strategy. New employees and colleagues in other departments who don’t deal with ads on a day-to-day basis will benefit from your internal database of creative learnings.
If your creative team can quickly deliver new variations to test, we certainly recommend dedicating a budget to A/B testing. The benefit of ongoing tests is a constant flow of new learnings, which enables you to better plan for the future.
After reading this introduction to A/B testing, you should feel confident enough to begin establishing your own creative testing process. Consistent testing offers a powerful tool to develop better creative concepts, measure your creative team’s effectiveness and improve your creative strategy.
Here at Flowin, we’ve developed new technology to facilitate the most difficult part of the A/B testing process: extracting learnings.
The ultimate goal of A/B testing is to ensure consistent incremental improvements over time. We’re positive that with the right tools in hand, this goal can be achieved by all marketing teams.