Mastering Campaign Structure: How to Rely on Data to Guide Testing and Launching Successful Ads
Are You Testing Creatives Effectively for Maximum Performance?
As the head of a data-driven digital marketing agency, I’ve seen firsthand that one of the most crucial components to running successful campaigns is mastering campaign structure and how to launch creatives for testing.
In a landscape where every dollar counts and clients expect results, how we structure campaigns and make ad decisions can make or break their success. With that in mind, I’m going to share some of the key principles we follow at Brighter Click to ensure that we’re not only gathering the right data but making smart, data-informed decisions throughout the process.
Structuring Campaigns for Creative Testing
When it comes to setting up campaigns, one of the most hotly debated topics is whether to use Campaign Budget Optimization (CBO) or Ad Set Budget Optimization (ABO). At our agency, we strongly lean toward ABO for creative testing.
Here’s why:
- ABO allows us to have complete control over where the budget is allocated.
- In CBO, Meta’s algorithm automatically allocates more budget to the ads it believes will perform best.
While that sounds great in theory, it often leads to a situation where the algorithm disproportionately pushes spend to a few ads, leaving other creatives without a fair shot at generating enough data to make an informed decision. Especially in a creative testing phase, it's critical that every ad gets the same opportunity to gather meaningful data.
For us, running a separate creative testing campaign under ABO ensures that every creative gets a fair amount of spend. This is vital in testing because you want to control as many variables as possible. If one creative hogs all the spend, you're left wondering if the others could have performed just as well if they had been given the same chance.
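If it helps to picture the setup, here’s a minimal sketch of what that kind of ABO testing campaign can look like. The names, budgets, and the one-creative-per-ad-set grouping are illustrative assumptions, not a prescribed template — the point is simply that each ad set carries its own equal budget, so no single creative can hog the spend.

```python
# Minimal sketch of an ABO creative-testing campaign layout.
# Names, budgets, and the one-creative-per-ad-set grouping are
# illustrative assumptions, not a prescribed structure.

daily_test_budget = 150  # hypothetical total daily test budget, in dollars

creatives = ["hook_A_ugc_video", "hook_B_static_image", "hook_C_carousel"]

campaign = {
    "name": "Creative Testing - ABO",
    "budget_optimization": "ABO",  # budget is set at the ad set level, not the campaign level
    "ad_sets": [
        {
            "name": f"Test - {creative}",
            "daily_budget": daily_test_budget / len(creatives),  # equal spend per creative
            "ads": [creative],
        }
        for creative in creatives
    ],
}

for ad_set in campaign["ad_sets"]:
    print(ad_set["name"], "->", f"${ad_set['daily_budget']:.2f}/day")
```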
Timeline and Budget for Creative Testing
Once you've set up your creative testing campaign, the next step is deciding how much time and budget to allocate to testing each ad. A common mistake many marketers make is cutting tests short or not spending enough for the results to be statistically significant.
The guiding principle we use is to let a creative spend two times the target CPA (cost per acquisition) before making a decision.
So, if your target CPA is $50, you should allow an ad to spend $100 before determining whether it’s a winner or a dud. This method gives the algorithm enough room to optimize for conversions, allowing creatives to reach their full potential.
However, we also never run tests for less than three days. You have to give enough time for external factors—like market fluctuations, weekend shopping behavior, or even competitive ad spend spikes—to level out. For instance, launching ads on a Friday heading into the weekend often leads to poorer performance because consumer behavior shifts.
This isn't a reflection of the ad itself but rather an external factor that could skew results if not accounted for. Additionally, external events, like a competitor's aggressive spending during a launch, could inflate CPMs (cost per thousand impressions), which might mislead you into killing an ad prematurely.
We also aim to ensure each ad has sufficient volume to assess performance. A good rule of thumb is to wait until the ad has accumulated at least 3,000 impressions before reviewing its early performance metrics. This volume provides enough data to make an informed decision without being swayed by short-term fluctuations.
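Put together, those three rules of thumb — two times the target CPA in spend, at least three days running, and at least 3,000 impressions — form a simple "is this test ready to evaluate?" check. Here’s a small sketch of that check; the function and field names are hypothetical.

```python
# Sketch of the "is this test ready to evaluate?" check described above.
# Thresholds follow the rules of thumb in this article (2x target CPA in spend,
# at least 3 days running, at least 3,000 impressions).

def ready_to_evaluate(spend: float, days_running: int, impressions: int,
                      target_cpa: float) -> bool:
    enough_spend = spend >= 2 * target_cpa    # e.g. $100 at a $50 target CPA
    enough_time = days_running >= 3           # lets weekday/weekend swings level out
    enough_volume = impressions >= 3_000      # enough delivery to judge soft metrics
    return enough_spend and enough_time and enough_volume

# Example: $50 target CPA, ad has spent $110 over 4 days with 4,200 impressions
print(ready_to_evaluate(spend=110, days_running=4, impressions=4_200, target_cpa=50))  # True
```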
Decision-Making: How to Know When to Kill an Ad
Now comes the million-dollar question: When do you kill an ad?
This is where both soft and hard metrics come into play. Soft metrics include things like click-through rate (CTR), hook rate, or engagement rate, while hard metrics focus more on direct performance indicators such as CPA, return on ad spend (ROAS), and cost per add-to-cart or purchase.
At Brighter Click, we don’t jump to conclusions based solely on performance metrics in the early stages of testing. We pay close attention to soft metrics first, especially for new creatives. For instance, if an ad is getting a poor CTR but has a solid conversion rate once people hit the landing page, this tells us the ad might be working despite the low initial engagement. Conversely, if CTR is high but there are no conversions, it may be a sign that the ad is misleading or mismatched with the landing page content.
Typically, we look at soft metrics after 3,000 impressions. By this point, you can gauge whether the ad attracts attention and engages users, even if conversions haven't yet materialized. If the soft metrics are promising, we might give the creative more time to convert.
When it comes to hard metrics, one of the most critical decisions revolves around cost per view content or cost per add-to-cart, depending on the campaign’s goal. Comparing these metrics with your top-performing ads gives you an idea of whether your new creative is on track. But don’t just rely on raw volume; focus on the cost per action to ensure that you’re making apples-to-apples comparisons.
For example, if an ad has only two add-to-carts, don’t dismiss it outright. If the cost per add-to-cart aligns with your benchmark, it could still be a winner with the right optimization or scaling strategy. Similarly, you might have an ad with hundreds of clicks but a low conversion rate, which signals that you need to tweak the landing page rather than discard the creative altogether.
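To make that keep-or-kill logic concrete, here’s a rough sketch of how it can be expressed: compare cost per action against your benchmark rather than raw volume, and let strong soft metrics buy a creative more time. The threshold values and names are illustrative assumptions, not fixed rules.

```python
# Sketch of the keep/kill logic described above: judge cost per action against a
# benchmark rather than raw volume, and use soft metrics (CTR) to decide whether a
# creative deserves more time. Threshold values and names are illustrative.

def evaluate_creative(spend: float, add_to_carts: int, clicks: int, impressions: int,
                      benchmark_cost_per_atc: float, benchmark_ctr: float) -> str:
    ctr = clicks / impressions if impressions else 0.0
    cost_per_atc = spend / add_to_carts if add_to_carts else float("inf")

    if cost_per_atc <= benchmark_cost_per_atc:
        return "keep"  # even 2 add-to-carts can be a winner if the cost per action is on benchmark
    if ctr >= benchmark_ctr:
        return "give it more time"  # strong soft metrics; check the landing page before killing the ad
    return "kill"

# Example: $80 spent, 2 add-to-carts ($40 each) against a $45 benchmark
print(evaluate_creative(spend=80, add_to_carts=2, clicks=90, impressions=4_000,
                        benchmark_cost_per_atc=45, benchmark_ctr=0.015))  # "keep"
```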
Avoiding the "Creative Testing Dumpster"
A key point to remember is not to let your creative testing campaign turn into a budget sink. It’s easy to justify poor performance by saying, “Oh, that’s just creative testing.” But testing should be done strategically with a set budget, typically between 10% and 20% of the overall campaign spend. This way, you're not risking too much of your client's money while still gathering meaningful data to inform future decisions.
Moreover, avoid wasting budget by ensuring that each ad set has enough spend to gather data. Meta's algorithm needs a minimum of 20 conversions per ad set per week to start optimizing properly. So, if your CPA is $70, and you're only spending $10 a day on creative testing, you're not giving the system a real chance to work. It’s better to allocate enough budget for each ad set to potentially achieve at least 20 conversions a week. Otherwise, the data you gather is practically useless, and the ad set won’t exit the learning phase.
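The back-of-the-envelope math here is simple, and worth running before you launch. Using the article's own figures (a 20-conversion weekly floor and a $70 CPA), a quick check looks like this:

```python
# Back-of-the-envelope check: does the planned daily budget give an ad set a realistic
# shot at the ~20 conversions per week this article treats as the floor for proper
# optimization? The numbers below are the article's example figures.

def min_daily_budget(target_cpa: float, conversions_per_week: int = 20) -> float:
    return target_cpa * conversions_per_week / 7

cpa = 70.0
planned_daily = 10.0
needed_daily = min_daily_budget(cpa)

print(f"Needed: ~${needed_daily:.0f}/day per ad set; planned: ${planned_daily:.0f}/day")
# Needed: ~$200/day per ad set; planned: $10/day  -> the test is underfunded
```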
Putting Ad Metrics in Context
Finally, one of the biggest lessons we’ve learned over the years is that no single metric matters in isolation. While CAC (customer acquisition cost) and CPA are key indicators, they don’t tell the whole story. You need to understand how metrics interplay. A high CTR is meaningless if it’s driving unqualified traffic that doesn’t convert. Similarly, a high CPC isn’t necessarily bad if it’s paired with a high conversion rate and a strong ROAS.
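The standard relationships make this easy to see: CPA is your CPC divided by your conversion rate, and ROAS is average order value divided by CPA. The sketch below compares two hypothetical ads to show how a pricier click can still win on the metrics that actually matter.

```python
# Sketch of how the metrics interact: a higher CPC can still win if the traffic
# converts better. Formulas are the standard relationships (CPA = CPC / conversion
# rate, ROAS = average order value / CPA); the two ads below are hypothetical.

def cpa(cpc: float, conversion_rate: float) -> float:
    return cpc / conversion_rate

def roas(avg_order_value: float, cost_per_acquisition: float) -> float:
    return avg_order_value / cost_per_acquisition

aov = 120.0  # hypothetical average order value

ad_cheap_clicks = cpa(cpc=0.80, conversion_rate=0.01)   # $80 CPA
ad_pricey_clicks = cpa(cpc=2.00, conversion_rate=0.04)  # $50 CPA

print(f"Low-CPC ad:  CPA ${ad_cheap_clicks:.0f}, ROAS {roas(aov, ad_cheap_clicks):.2f}")
print(f"High-CPC ad: CPA ${ad_pricey_clicks:.0f}, ROAS {roas(aov, ad_pricey_clicks):.2f}")
```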
At the end of the day, it’s not about chasing metrics like CTR or CPC just to “improve” them. It’s about creating a holistic strategy that drives down CAC and improves overall performance across the entire funnel.
In short, campaign structure and testing are essential to running successful ad campaigns. Give each creative a fair shot, monitor both soft and hard metrics, and ensure you’re spending enough to gather reliable data. When done correctly, you’ll have a testing strategy that drives real results, not just pretty numbers.
Want More Strategies for Success?
Check out The Marketing Mindset Podcast, where I interview industry leaders, marketers, and CEOs from a wide range of industries every week!
You can also sign up for our Marketing Mindset email newsletter to get all the latest marketing trends, strategies, blogs, and tons of exclusive content sent directly to your inbox!