
Experiment Better and Get More from Your Media Budget: Five Tips for Tests That Lead to Bottom Line Value

Last updated: July 2, 2018


A surefire way to gain an advantage in marketing is to experiment. As long as your experiments unlock new ways to achieve higher ROI, they are accretive to the business. There are, however, bad tests: tests that neither find a path to upside for the business nor add to your understanding of potential pitfalls. I would like to share some thoughts that may help you separate the two, so you can deliver bottom-line value through more successful experimentation.


Consumer behavior and marketing options are changing rapidly, and those who fail to test, learn, and evolve are destined to see their business suffer. We all seem to know we need to experiment and find new ways to increase the return on our marketing investments; we just don't necessarily know how to prioritize the tests. Many marketers have been burned by a bold idea that didn't work out: they went all in on something that didn't deliver the returns they wanted or the results they needed.

Most marketers don't have a screening system to weed out tests that waste time and resources. So how do you make sure the tests you're planning are worth the effort? Based on our experience, we've outlined a simple process to help you minimize the risks and increase your chances of finding that next big idea.

TIP 1: Define the Value You Are Trying to Create

[Chart: impact vs. cost of potential tests]

Like good scientists, marketers need a clear understanding of the problem they're trying to solve and their best guesses at potential solutions. The more clearly you define the challenge, the easier it will be to design your tests and interpret your results. Your hypothesis should be actionable. In the broadest sense, we all want our marketing to be more effective, but that isn't specific enough to ensure good measurement. Are you trying to solve a particular problem in the consumer journey, such as in-store conversion or website cart abandonment? Are you trying to get consumers to put your brand into their consideration set for purchase? Are you trying to increase customer sharing and referrals? Define specifically what you are trying to accomplish, and the testing you'll need to do becomes clearer.

TIP 2: Identify Options with the Highest Potential

Contrary to popular belief, there are bad ideas. In marketing, generally speaking, bad ideas are those that won't generate an above-average ROI compared with your current marketing activities. Nobody is looking for a less efficient way to drive website visits, online sales, or store traffic. That's why it's important to have a simple way to estimate the likely ROI of a marketing concept before you invest resources in executing the program and measuring the results.

One approach we have found effective for identifying the best options is a quick back-of-the-envelope Spend to Impact Response Function (SIRF) analysis. SIRFs are the raw materials of return on investment. Here are the numbers you should include:

  1. Spend: Cost of the program (separate production from distribution)
  2. Impact: Estimated lift from the program (account for diminishing returns from frequency)

Remember, our goal in testing is to improve our marketing ROI over that of our current program. With these two inputs, you can estimate the ROI; this shouldn't take much effort and can be done very quickly. Just compare the result to your average ROI and see whether you have a shot at increasing marketing productivity.

 

 

                          Formula               TEST 1        TEST 2
Test Cost/Thousand                              $15           $25
Guesstimated Lift                               +5 pts        +2 pts
People Impacted/1,000     Lift x 1,000          +50 people    +20 people
Implied ROI               Lift / Investment     $3.33         $0.80
Test ROI > Current ROI?   Current ROI = $1.50   TEST          DO NOT TEST

If you run the numbers and the ROI doesn't pencil out, table the idea and move on to the next one on the list. You're trying to move fast and to identify the ideas with the highest potential to improve your business results.
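
To make this concrete, here is a minimal Python sketch of the same back-of-the-envelope arithmetic shown in the table above. The cost, lift, and current-ROI figures are the same illustrative guesses from the table, and the function is ours, not a standard formula.

    def implied_roi(cost_per_thousand, lift_points):
        """Back-of-the-envelope SIRF check for 1,000 people reached."""
        people_impacted = lift_points / 100 * 1000   # e.g. +5 pts -> +50 people
        return people_impacted / cost_per_thousand   # people influenced per dollar spent

    CURRENT_ROI = 1.50  # benchmark ROI of your existing program

    for name, cost, lift in [("TEST 1", 15, 5), ("TEST 2", 25, 2)]:
        roi = implied_roi(cost, lift)
        decision = "TEST" if roi > CURRENT_ROI else "DO NOT TEST"
        print(f"{name}: implied ROI {roi:.2f} vs. current {CURRENT_ROI:.2f} -> {decision}")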

TIP 3: Determine if the Option Will Scale

If your idea passes the back-of-the-envelope check, move on to the next question: "Will this idea scale?" Can the idea you're testing be repeated, or is it a one-time opportunity? Does it only work with a small segment of your market, or will a significant number of potential customers respond?

In the example above, we used a simple calculation based on reaching 1,000 people, and we only included the media cost; we didn't account for any diminishing returns. Diminishing returns are like Newton's laws: universal and seemingly inescapable. Every marketing activity we have ever measured eventually reaches a point after which its ROI declines.

Now we'll bring this to bear in our assessment of an idea's potential. Remember, we're just trying to sort out which ideas have the best chance of success. All you need is an assumption about how reach and frequency build and how impact may decline.

Here's how this works: in any test of a marketing effort, your first impression generates the most impact, because it captures the low-hanging fruit. Every subsequent impression generates a smaller impact, so the ROI of a marketing program decreases as the program scales and you reach more people, or reach the same people more often.

In order to validate that scaling the idea is possible, we want to run the numbers. Don’t worry, this is easy, too. We just need to approximate the diminishing return of higher reach and frequency. Here’s how to do it.

[Chart: lift by impression frequency, illustrating diminishing returns]

In this example, the first impression generates +6 points of lift, the second adds another +3, and the third adds +1. Now combine this impact-by-frequency profile with a reach and frequency curve.

If you have a reach and frequency curve, use it. If not, pick a time period, such as a month, and ask, "How many impressions can I get? How many unique people will I reach?" It is OK to guess; you are just trying to build a framework for exploring the ROI possibilities of the idea.

The question to ask is, "Does this idea scale to influence enough people at an acceptable ROI?" If you are a marketer with millions of customers, a program that can only reach 1,000 people, even with a big impact, likely won't pencil out. The exception is dynamic targeting, where you get both scale and efficient targeting, but that is another story.
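
Putting these pieces together, here is a rough Python sketch of what the scale check might look like. The lift-by-frequency figures come from the example above; the reach-by-frequency counts and the cost per thousand impressions are hypothetical placeholders you would replace with your own curve or guesses.

    # Lift in points by impression number (the diminishing returns from the example above)
    lift_by_frequency = {1: 6.0, 2: 3.0, 3: 1.0}

    # Hypothetical monthly reach: unique people who see exactly N impressions
    people_by_frequency = {1: 600, 2: 300, 3: 100}

    cost_per_thousand_impressions = 15.0  # hypothetical media cost

    impressions = sum(freq * people for freq, people in people_by_frequency.items())
    cost = impressions / 1000 * cost_per_thousand_impressions

    # Each person contributes the cumulative lift of every impression they received
    impact = sum(
        people * sum(lift_by_frequency.get(i, 0.0) for i in range(1, freq + 1)) / 100
        for freq, people in people_by_frequency.items()
    )

    print(f"Impressions: {impressions:,}  Cost: ${cost:,.2f}")
    print(f"People influenced: {impact:,.0f}  Implied ROI: {impact / cost:.2f}")

If the implied ROI still clears your current benchmark at realistic levels of reach and frequency, the idea is worth taking further.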

TIP 4: Do a Quick Sensitivity Test

Imagine two different reach and frequency scenarios:

In the first, a large number of people are reached by your message, but the majority see your message only once. This example would give you more reach, but less frequency.

In the second, fewer people may see your message, but they see it more often. This example would give you less reach, but more frequency.

As mentioned in our third tip, most of your immediate ROI from a marketing effort will come from customers already predisposed to respond to you: the low-hanging fruit. Subsequent efforts to reach this audience (more frequency) will logically be targeting those who are harder to sell and therefore less likely to respond. The implication is that ROI will be lower where more frequency is delivered.

[Chart: ROI under the reach-heavy and frequency-heavy scenarios]

The question to answer with sensitivity testing is, "Can you control reach and frequency?" Can you apply frequency capping? If the impressions come from a media provider, can they guarantee a certain level of reach? Sensitivity testing will help you focus your attention on the efforts most likely to deliver the increased ROI you seek.
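
Here is a small Python sketch of one way to run the sensitivity check, reusing the illustrative lift-by-frequency assumptions from Tip 3. The two audience splits below are hypothetical and exist only to show how the comparison works.

    lift_by_frequency = {1: 6.0, 2: 3.0, 3: 1.0}   # points of lift per impression number
    cost_per_thousand_impressions = 15.0            # hypothetical media cost

    scenarios = {
        "More reach, less frequency": {1: 900, 2: 100},          # most people see it once
        "Less reach, more frequency": {1: 200, 2: 300, 3: 300},  # fewer people, seen more often
    }

    for name, people_by_frequency in scenarios.items():
        impressions = sum(f * p for f, p in people_by_frequency.items())
        cost = impressions / 1000 * cost_per_thousand_impressions
        impact = sum(
            p * sum(lift_by_frequency.get(i, 0.0) for i in range(1, f + 1)) / 100
            for f, p in people_by_frequency.items()
        )
        print(f"{name}: reach {sum(people_by_frequency.values()):,}, implied ROI {impact / cost:.2f}")

Under these assumptions the reach-heavy scenario delivers the higher ROI, which is exactly the pattern the low-hanging-fruit logic predicts.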

TIP 5: Get the Right Data from Your Test

If everything penciled out, you are ready to go live. Circle back to the hypothesis you set in Tip 1: you want to make sure you're looking at the right data to assess the real-world impact your idea is having on your business outcome. For example, one customer was testing new messaging in their SEM ads. At first blush the data was impressive: visits to their website were way up. But when they looked at what really mattered, conversion, they saw that most of this traffic was bouncing, leaving the site without taking any meaningful action. The visits were "empty calories."

A randomized test with a proper control group and exposed group is the gold standard. Marketers use many tactics to get good test results: focus groups, test markets, heavy-ups, A/B tests, split tests, email open-rate comparisons, and so on. Whatever tactic you use, remember that you need to understand how the idea will scale, so make sure you're gathering data that gives you the potential lift at each frequency level. Returning to our example, we also wanted to be able to analyze whether certain types of people responded more or less strongly than others. That kind of empirical data lets you scale the experiment intelligently.
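
As a rough illustration of the kind of output worth capturing, here is a Python sketch that tabulates conversion lift by frequency level from a randomized control/exposed test. The record layout and the toy data are hypothetical; the point is simply to show the structure of the analysis.

    from collections import defaultdict

    # Hypothetical records: (group, impressions seen, converted?) -- one row per person
    records = [
        ("control", 0, False), ("control", 0, True),
        ("exposed", 1, True),  ("exposed", 2, False),
        ("exposed", 2, True),  ("exposed", 3, True),
    ]

    control = [r for r in records if r[0] == "control"]
    baseline = sum(r[2] for r in control) / len(control)  # control-group conversion rate

    exposed_by_frequency = defaultdict(list)
    for group, freq, converted in records:
        if group == "exposed":
            exposed_by_frequency[freq].append(converted)

    for freq in sorted(exposed_by_frequency):
        outcomes = exposed_by_frequency[freq]
        rate = sum(outcomes) / len(outcomes)
        print(f"Frequency {freq}: conversion {rate:.1%}, lift vs. control {rate - baseline:+.1%}")

Breaking the same lift out by audience segment gives you the empirical basis for scaling the winner intelligently.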

BONUS TIP: Testing Your Way Up the Response Curve

If your test is successful, and you learn there is an opportunity to solve a problem and improve performance, you’ve uncovered a competitive advantage. The next step is to ratchet your way up to improved results. In the chart below, each dot is a test & learn as you scale the budget up.

The two dots at $15 million represent an A/B test within the media to find best practices. As budgets get larger, we suggest running experiments within the program to uncover better ways of generating value. In this case, the marketer wanted to integrate different messages based on what was known about the consumer receiving the advertisement. While one of the strategies hurt results, the other significantly improved them. This unlocked a new insight about how to boost impact and effectively put the ROI on a new trajectory. The insight carried over to the $22 million spend level, which becomes the new benchmark against which future tests and learns will be judged.

[Chart: test-and-learn results moving up the response curve]

Final Thoughts

I am a big believer in Test and Learns. I've baked them into my company's culture, and they're a part of every one of our customer engagements. In fact, our ROI Brain is a compilation of millions of tests and learns conducted across thousands of campaigns for hundreds of marketers. We have collected data on lift by frequency and on impact across different target audiences. We've kept every creative message in our database so we can go back and look for commonalities between the best-performing and worst-performing experiments. We keep track of who saw which ads, in which place, and at what time. This has been an incredibly valuable asset for our customers, who can unlock the potential of this meta-analysis to benefit their business.

The process we’ve outlined above can help you unlock more potential for your business through Tests and Learns. It will also lead to a database of institutional knowledge that becomes a strategic asset for your company. We recommend every marketer dedicate at least a portion of their time and budget to challenging their existing plans and uncovering the next big idea. Fortune favors the bold.

Test. Learn. Evolve!

Written by Marketing Evolution