Never before has there been a greater need for a reliable, holistic marketing measurement tool. In a world of fragmented media, divided consumer attention, intense competitive pressure, and lightning-speed product innovation, the sheer volume of data to analyze and decisions to make demands a more evolved approach to attribution and decision making. This need for speed has brought into sharp focus a mandate for reliable, consistent and valid data, and the risks that arise when that data contains errors.
The attribution category has been evolving quickly over the past decade, and there are myriad options from which marketers can choose. Recent research conducted by Forrester suggests that leading marketers are adopting the newest and most advanced approach: Unified Measurement or Total Marketing Measurement models. This analysis combines the attributes of person-level measurement with the ability to measure traditional channels such as TV. Marketers who upgrade to and invest in novel solutions – financially and organizationally – can find a competitive advantage from smarter attribution.
The best of these tools answer questions such as the optimal frequency and reach within and between channels, and determine which messages and creative work best for which audiences. New advances in these products are providing even more granular insights into message sequencing and next-best-message decisioning based on specific audiences and the multiple stages of their buying processes. Leading solutions also incorporate external and environmental factors such as weather, travel patterns and more. Furthermore, today’s solutions produce insights quickly enough that agile marketers can feed them into active campaigns to drive massive performance gains, rather than waiting weeks or months to see returns.
However, while attribution models have come a long way in recent years, there is one challenge they all must tackle: the need for reliable, consistent and valid data. Even the most advanced and powerful of these systems depend on the quality of the information they ingest; incorrect or sub-par inputs will always produce the wrong outputs. Data quality and reliability have therefore become a primary focus of marketing teams and the forward-thinking CMOs who lead them.
If the data are not accurate, it doesn’t matter what statistical methods or algorithms we apply, nor how much experience we have in interpreting data. If we start with flawed data, we’ll end up with erroneous results. Basing decisions on conclusions derived from flawed data can have costly consequences for marketers and their companies. Inaccurate data may inflate or give undue credit to a specific tactic. For example, based on a media buy, a model may indicate that a television advertisement – usually one of the most expensive of our marketing efforts – was responsible for driving an increase in visitors to our website. But if that ad in fact failed to air and the media log contains inaccurate data, the team may wrongly reallocate budget to its television buy. That would be a costly mistake.
In fact, inaccurate data may be one of the leading causes of waste in advertising. These inaccuracies have become an epidemic that negatively impacts both advertisers and the consumers they are trying to reach. Google recently found that, due in large part to bad data, more than 56 percent of ad impressions never actually reach consumers, and Proxima estimates that $37 billion of worldwide marketing budgets is wasted on poor digital performance. And that’s just digital. The losses for major players who market both online and offline can be far greater, and they call for a new approach to data quality and reliability.
So, how accurate is your data? Do you know if there are gaps? Are there inconsistencies that may skew your results? Many of us place so much trust in our data systems that we forget to ask these critical questions. You can’t simply assume you have accurate data – now more than ever, you must know you do. That may require some work up front, but the time you invest in ensuring accurate data will pay off in better decisions and other significant improvements. Putting steps and checks in place early in the process to ensure the timely and accurate reporting of data is key to avoiding costly mistakes down the road. Solving these problems early in your attribution efforts builds confidence in the optimization decisions you’re making to drive higher return on investment and, perhaps more importantly, helps teams avoid costly missteps.
When it comes to attribution, it is especially critical to make sure the system you are relying on has a process for analyzing and ensuring that the data coming in is accurate.
Below are four key considerations to work through with your internal analytics staff, agencies, marketing team and attribution vendor to validate data inputs and ensure accurate conclusions.
The entire team should have a clear understanding of when data will be available and, more importantly, by what date and time each data set will arrive. Missing or unreported data may be the single most significant threat to drawing accurate conclusions. Like an assembly line, if data fails to show up on time, production stops for the entire factory. Fortunately, this may also be one of the easiest challenges to overcome. Step one is to conduct an audit of all the information you are currently using to make decisions, and to map the agreed-upon or expected delivery date for every source. If you receive a weekly feed of website visitors, on what day does it typically arrive? If your media agency sends a monthly reconciliation of ad spend and impressions, what is the deadline for its delivery?
Share these sources of information and the schedule of delivery with your attribution vendor. The vendor, in turn, should develop a dashboard and tiered system of response for data flow and reporting. For example, if data is flowing as expected, the dashboard may include a green light to indicate all is well. If the information is a little late, even just past the scheduled date but within a predefined window of time, the system should generate a reminder to the data provider or member of the team who is responsible for the data letting them know that there may be a problem. However, if data is still missing past a certain point, threatening the system’s ability to generate optimizations, for example, an alert should be sent to let the team know that action is needed.
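As an illustration, the tiered green/yellow/red response described above could be sketched in a few lines of Python. The feed names, due dates and grace windows below are hypothetical; a real implementation would load them from the delivery-schedule audit your team produced:

```python
from datetime import datetime, timedelta

# Hypothetical delivery schedule: each feed maps to its expected arrival
# time and a grace window before the alert escalates.
SCHEDULE = {
    "website_visitors_weekly": {"due": datetime(2019, 5, 6, 9, 0),
                                "grace": timedelta(hours=6)},
    "agency_spend_monthly":    {"due": datetime(2019, 5, 1, 17, 0),
                                "grace": timedelta(days=1)},
}

def feed_status(feed, arrived_at, now):
    """Return 'green', 'yellow', or 'red' for a feed's delivery status."""
    due = SCHEDULE[feed]["due"]
    grace = SCHEDULE[feed]["grace"]
    if arrived_at is not None and arrived_at <= due:
        return "green"   # data arrived on time: all is well
    if now <= due + grace:
        return "yellow"  # late but inside the grace window: remind the data owner
    return "red"         # past the window: alert the team that action is needed
```

A dashboard would simply call `feed_status` for every feed on each refresh and render the resulting colors, paging the team only on red.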
You, the members of your team, and your attribution partner need a clear understanding of what specific data is included in each report and in what format. It would be a shame to go through the hard work of making sure your information arrives on time only to find that the data is incomplete or reported inconsistently. To extend the assembly line analogy: what good is it to make sure a part arrives on time if the wrong part is delivered?
Like quality control or a modern-day retinal scan, the system should check to see if the report matches expected parameters. Do the record counts match the number of records you expected to receive? If data from May was expected, do the dates make sense? And, is all the information that should be in the report included? Are there missing data?
With this system in place, a well-configured attribution solution or analytics tool should be able to test incoming data for both completeness and compliance with expected norms. If there are significant gaps in the data, or if the data deviates too far from an acceptable standard, the system can again automatically alert the team that there may be a problem.
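A minimal sketch of such a completeness check, assuming the feed arrives as a list of records with "date" and "spend" fields (the field names and expected counts here are illustrative, not a vendor's actual schema):

```python
from datetime import date

def validate_report(rows, expected_count, month):
    """Collect problems with a report instead of silently accepting it."""
    problems = []
    if len(rows) != expected_count:
        problems.append(f"record count {len(rows)} != expected {expected_count}")
    for i, row in enumerate(rows):
        d = row.get("date")
        if d is None or d.month != month:
            problems.append(f"row {i}: date {d} outside expected month {month}")
        if row.get("spend") is None:
            problems.append(f"row {i}: missing spend value")
    return problems  # an empty list means the report passed

# A May report containing a stray June row and a missing spend value:
may_rows = [{"date": date(2019, 5, 1), "spend": 1200.0},
            {"date": date(2019, 6, 2), "spend": None}]
issues = validate_report(may_rows, expected_count=2, month=5)
```

Any non-empty `issues` list would feed the same alerting tier described earlier rather than flowing silently into the model.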
Your attribution provider should also be able to use data previously reported from a source to help identify errors or gaps. For example, you can include in your data feed multiple weeks or months of previously reported data: a weekly feed that carries the prior three weeks delivers one new set of data and three overlapping sets of previously reported data. If the overlapping data does not match what was reported before, that mismatch should trigger an alert.
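One way to sketch the overlap comparison, assuming each delivery re-sends several prior weeks keyed by week and channel (the keys, values and tolerance here are hypothetical):

```python
def overlap_mismatches(previous, current, tolerance=0.0):
    """Return keys where re-reported values disagree with what we stored."""
    mismatches = []
    for key in previous.keys() & current.keys():  # only the overlapping weeks
        if abs(previous[key] - current[key]) > tolerance:
            mismatches.append(key)
    return sorted(mismatches)

# What we stored last week vs. what this week's feed re-reports:
stored = {("2019-W18", "tv"): 500.0, ("2019-W19", "tv"): 520.0}
latest = {("2019-W18", "tv"): 500.0, ("2019-W19", "tv"): 480.0,
          ("2019-W20", "tv"): 510.0}  # W20 is the new, non-overlapping week
flags = overlap_mismatches(stored, latest)  # W19 disagrees, so raise an alert
```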
Next, determine whether the data makes sense: is the new data rational and consistent with what was previously reported? This check is a crucial step in using previously reported data to confirm the logic of the most recent data.
Here, too, you can check trends over time to see whether the data is consistent or contains outliers. Depending on the specific types of media or performance being measured, a set of logic tests should be developed. For example, is the price of media purchased within the range of what is typically paid? Are the reach and frequency per dollar of the media what was expected?
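A media-price logic test of this kind might be sketched as follows; the CPM (cost per thousand impressions) band used here is purely illustrative, and real thresholds would come from your own buying history:

```python
def cpm(spend, impressions):
    """Cost per thousand impressions for a media buy."""
    return spend / impressions * 1000

def flag_outliers(buys, low=2.0, high=40.0):
    """Flag buys whose CPM falls outside the expected band (assumed range)."""
    return [b["id"] for b in buys
            if not low <= cpm(b["spend"], b["impressions"]) <= high]

buys = [{"id": "tv-001", "spend": 30000.0, "impressions": 1_500_000},  # CPM 20
        {"id": "tv-002", "spend": 30000.0, "impressions": 100_000}]    # CPM 300
suspects = flag_outliers(buys)  # tv-002 is held for investigation
```

Note that, per the outlier caveat below, a flagged buy is queued for review rather than discarded.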
Leading providers of marketing attribution solutions continually perform these checks to ensure data accuracy and consistent decision making. With these checks in place, the marketing attribution partner can diagnose any problems, and the team can act together to fix them. This technique has the added benefit of continuously updating information so that errors or suspicious data don’t linger to confound the ultimate conclusions.
One note: outliers are not necessarily bad data. Consider outliers pieces of information that have not yet been confirmed or refuted. It is a best practice to investigate outliers to understand their source, or to hold them in your system to see whether they are the beginning of a new trend.
Finally, there are tangible benefits to confirming data across multiple data sets. For example, does the information about a customer in your CRM match the information you may be getting from a source like Experian? Does the data you’re receiving about media buys and air dates match the information you may be receiving from Sigma encoded monitoring?
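Such a cross-source check can be as simple as set arithmetic. This sketch compares hypothetical air dates claimed in an agency media log against dates detected by an independent monitoring feed; the dates and source names are illustrative:

```python
def unconfirmed_airings(media_log, monitoring):
    """Spots in the log that the independent monitoring feed never detected."""
    return sorted(set(media_log) - set(monitoring))

# Air dates the agency log claims vs. dates an independent monitor detected:
log_dates = {"2019-05-01", "2019-05-03", "2019-05-07"}
detected  = {"2019-05-01", "2019-05-07"}
missing = unconfirmed_airings(log_dates, detected)  # the 05-03 spot may never have aired
```

This is exactly the failed-to-air scenario described earlier: a spot present in the log but absent from monitoring is investigated before the model credits it with any lift.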
Even companies that are analytics early adopters find themselves challenged to ensure the data upon which they rely is consistent, reliable and accurate. Marketers understand that they have to be gurus of data-driven decision making, but they can’t just blindly accept the data they are given.
Remember, despite the potential benefits of a modern attribution solution, erroneous data will be its undoing. To be certain your process is working precisely, create a clear understanding of the data and work with a partner who can build an early warning system for any issues that arise. Ultimately, this upfront work ensures more accurate analysis and will help achieve the goal of improving your company’s marketing ROI.
As a first step, since data may come from multiple departments inside the company and from the various agencies that support the team, develop a cross-functional steering committee with representatives from analytics, marketing, finance, and your digital and traditional media agencies. One member of the committee should be responsible for overall data quality and flow. As a team, work together to set benchmarks for quality and meet regularly to discuss areas for improvement.
In this atmosphere of fragmented media and consumer (in)attentiveness, those who rely on data-driven decision making will gain a real competitive advantage in the marketplace. The capabilities of today’s solutions produce insights quickly enough that the nimblest marketers can incorporate them into active campaigns to drive massive performance improvements, rather than waiting weeks or months to see results. But the Achilles heel of any measurement system is the data on which it relies to generate insight. All other things being equal, the better the data going in, the better the optimization recommendations coming out.
© 2019 Marketing Evolution, Inc.