Data is moving to the center of marketing strategies across organizations. It’s no wonder why – it provides the foundation for insights that ultimately power your marketing department. Data can be used to better understand target audiences and what made certain marketing campaigns successful, resulting in better decision-making. By leveraging data in an accurate and agile manner, marketing teams can maximize profit and growth by reacting to opportunities in the right way at the right time.
However, marketing teams must verify the quality of data before making it the bedrock of their strategy. They must understand that low-quality data creates low-quality insights. Without reliable information, it’s difficult to attain the knowledge needed to make the right business decisions.
This is why the collection and analysis of low-quality data comes at a sizeable cost to organizations. According to Ovum Research, poor data quality costs American organizations about 30 percent of their revenue. To combat this, organizations must enact a comprehensive data quality initiative composed of people, processes, and technology - ensuring they avoid the consequences of bad data.
This will involve breaking down exactly what high-quality data means to the marketing team. Once the goals for data have been defined, teams can begin to distribute responsibility - weaving people, processes, and technology into the data-quality strategy.
Defining Data Quality
Data quality isn’t a single activity – it’s an ongoing effort with many moving parts. As marketers are often working with granular bits of data at a massive scale, they must get better data visibility while prioritizing the pieces of data quality that are most important to the organization. Consequently, marketers often prioritize a few specific aspects of data quality to resolve their individual organizational challenges.
Generally speaking, when grading data quality, analysts typically refer to the following criteria:
- Accuracy – Is the information reflective of the truth? For example, is someone’s recorded mailing address their true mailing address?
- Completeness – Are all of the necessary data fields filled out in a usable way? For example, if your data records need to indicate a first and last name, that information must be included.
- Consistency – Is the information the same across all of your business units? For example, if someone is actively being sold to, but is listed as a “lost” lead in your CRM, it’s necessary to find alignment.
- Validity / Integrity – Can this data be connected to data elsewhere in the organization? For example, if customer data is assembled in one place, but it’s not attached to any existing customer’s name, then that data has low integrity.
- Timeliness – Is the data recent? The definition of timely data is very subjective, but organizations must pinpoint when data is too old to be useful.
- Transparency – Is the source of this data reliable and easy to identify? This helps ensure that data is reliable, while helping marketers understand the nuances between different sources.
- Representativeness – Does the data accurately reflect the state of the marketplace? For example, if the conclusions of an independently published study run contrary to your data, it should be examined to ensure it is reflective of reality.
When assessing the quality of data, marketing teams should consider these seven categories. Be aware that high-quality data may still be partially deficient in some of these key traits – it’s difficult to find data that completely fulfills every aspect of the criteria. Ultimately, marketing departments must aim to strike the right balance between data that is useful for their campaigns and data that aligns with reality.
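As a rough illustration, a couple of these criteria – completeness and timeliness – lend themselves to simple automated checks. The sketch below assumes a hypothetical lead record; the field names and the one-year timeliness window are illustrative, not a real schema or standard:

```python
from datetime import date, timedelta

# Hypothetical lead record; field names are illustrative only.
record = {
    "first_name": "Ada",
    "last_name": "",                   # completeness problem: empty field
    "mailing_address": "12 Example St",
    "source": "webinar_signup",        # transparency: the source is identified
    "last_updated": date(2020, 1, 5),  # timeliness: when was this refreshed?
}

def completeness(rec, required):
    """Share of required fields that are present and non-empty."""
    filled = sum(1 for f in required if rec.get(f))
    return filled / len(required)

def is_timely(rec, max_age_days=365):
    """Timeliness: was the record updated within the allowed window?"""
    return (date.today() - rec["last_updated"]) <= timedelta(days=max_age_days)

required_fields = ["first_name", "last_name", "mailing_address"]
print(completeness(record, required_fields))  # 2 of 3 required fields filled
print(is_timely(record))
```

Accuracy, consistency, and representativeness generally cannot be verified this mechanically – they require comparison against an external source of truth.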
Involving the Right People
Data quality management is a resource intensive task – so organizations must delegate the right employees to certain, well-defined roles. This will require cross-departmental involvement and alignment from C-level managers.
To convey the merits of data quality to upper-level management, try to explain the serious consequences that can come as a result of poor data quality. For example, could it mean massive fines and penalties due to noncompliance with a regulating body? Or, could it mean funding strategies that are ultimately ineffective? Then, back that anecdote up with a data-backed risk analysis to justify the creation of several job roles, and the acquisition of new products and technology.
By creating these new roles, your organization will ensure that data is properly handled from the moment it’s collected.
- Data Owner – Data owners take on the responsibility of controlling the access to data, thus controlling risk. These individuals are heavily concerned with the processes, rules, and requirements to use data within the organization.
- Data Steward – This role ensures that data is given proper meaning, and that data is used correctly. Data stewards are typically able to understand who uses each dataset, how it is used, and which processes should be implemented to ensure governance for that data.
- Data Custodian – Data custodians are often involved with the IT department – they manage servers and networks, conduct backups, and provision access to users with the data owner’s permission. They also aim to resolve concerns involving data integrity and quality.
It’s important to delegate these roles in an effective way. The best way to assign responsibility will depend on the size of your organization and the corresponding size of your data stores. For instance, small businesses may be able to assign one person to all three roles – while an enterprise may want one person to fulfill each role.
Implementing the Right Processes
After determining data management positions, it’s necessary to implement processes that will support employees involved in the data quality initiative. These processes must be identified and outlined before launching a data quality initiative:
- Data Governance Framework – This framework sets the guidelines and rules for activities that involve creating or manipulating data. This includes outlining the people, processes, and technologies that will be used in your data quality initiative.
- Business Glossary – This defines data-related terms across your entire organization, and allows everyone in the organization to agree to standard definitions. With a business glossary, users across your organization can understand exactly what specific terminology, rules, and policies mean.
- Data Quality Issue Log – By maintaining a data quality issue log, your organization will enjoy a structured method of tracking data quality incidents. With this document, your organization can rank the severity of these incidents, and begin to find patterns that may be causing systemic data quality problems.
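A data quality issue log can start as something very lightweight. The sketch below shows one possible shape for a log entry; the severity scale and field names are assumptions, not a standard – adapt them to whatever your governance framework defines:

```python
from dataclasses import dataclass, field
from datetime import date

# Minimal issue-log entry; severity scale (1-5) and fields are illustrative.
@dataclass
class DataQualityIssue:
    dataset: str          # where the problem was found
    description: str      # what is wrong
    severity: int         # 1 (cosmetic) to 5 (blocks decision-making)
    reported_on: date = field(default_factory=date.today)
    resolved: bool = False

log = [
    DataQualityIssue("crm_leads", "Duplicate contacts after import", severity=4),
    DataQualityIssue("crm_leads", "Missing last names", severity=3),
]

# Ranking by severity helps surface systemic problems first.
worst_first = sorted(log, key=lambda issue: issue.severity, reverse=True)
print(worst_first[0].description)  # -> Duplicate contacts after import
```

Because every entry names its dataset, grouping the log by that field over time is one simple way to spot the recurring patterns mentioned above.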
When creating these processes, know that they simply help your organization ensure the reliability of its data. Don’t expect all data to be entirely compliant, accurate, and consistent after enacting your data-quality processes. Even the best methods are not flawless – and well-oiled data quality processes take time, effort, money, and institutional knowledge that can only be built up gradually.
Instead of seeking perfection, set certain limits to ensure your standards of data quality are upheld to a specific level of compliance. This means first determining what each field of data would look like if it were filled out correctly. For example, a data field for “Full Name” should include at least one space, or else it should be flagged as low-quality data. Or, the data field for “Date of Birth” should be flagged if the person’s age would be greater than 123, as this would make them the oldest human ever to exist.
Next, set a level of tolerance for inaccuracy in these records. For example, lead-based data may require that you know every customer’s full name for outreach purposes, so a record may not be considered usable if that field is flagged for inaccuracy. In that same dataset, it may be less important to know the date of birth – therefore, it’s acceptable for only 70 percent of the data in the “Date of Birth” field to be accurate.
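The two rules and the tolerance check described above can be sketched in a few lines. The field names and the 70 percent threshold mirror the examples in the text; everything else is illustrative:

```python
from datetime import date

def valid_full_name(name):
    """A usable full name needs at least one space (first + last name)."""
    return isinstance(name, str) and " " in name.strip()

def valid_birth_date(dob, max_age=123):
    """Flag dates that would make someone older than any verified human."""
    return (date.today().year - dob.year) <= max_age

records = [
    {"full_name": "Ada Lovelace", "dob": date(1990, 12, 10)},
    {"full_name": "Prince", "dob": date(1890, 6, 7)},  # fails both rules
]

# Tolerance levels: full names must be 100% valid, birth dates only 70%.
name_rate = sum(valid_full_name(r["full_name"]) for r in records) / len(records)
dob_rate = sum(valid_birth_date(r["dob"]) for r in records) / len(records)
print(name_rate >= 1.0)  # -> False: at least one record needs outreach review
print(dob_rate >= 0.7)   # -> False: the dataset misses its tolerance target
```

In practice these checks would run as part of an automated pipeline, with failing records routed to the data quality issue log rather than silently dropped.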
Using the Right Technology
By leveraging the right technology, your data quality team will receive the necessary support to ensure that data quality processes are running to the best of their ability. When looking for technologies to assist with data quality, Gartner advised looking out for these traits:
- Parsing and Standardization – To attain compliance with industry standards, local standards, and your business’s own standards, it’s important to format data into consistent layouts. This will help your organization access and analyze values and patterns.
- Cleansing – Data values may need to be tweaked to fix minor inaccuracies, ensure standardization, or otherwise adhere to your business’s rules. While this can be done manually, organizations with large quantities of data may wish to invest in automated tools instead.
- Matching – If related data exists across different datasets, it’s advisable for organizations to have the technology to identify, link, and/or merge the data from these sources together. For example, it’s useful for marketers to have a tool that can correlate both online and offline data sources.
- Profiling – This technology will help your organization capture data about data – also known as metadata. By setting standards for metadata, your organization can proactively evaluate data for quality concerns.
- Monitoring – Monitoring tools ensure that data is adhering to the data quality standards set forth by your business.
- Enrichment – Enrichment tools connect related consumer data, like demographic or psychographic information, from external sources. This enhances the value of the data and enables organizations to draw well-informed insights.
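To make the first and third traits concrete, here is a toy standardization-and-matching pass that links offline records to online activity through a shared phone-number key. It assumes US-style ten-digit numbers; real tools (and international formats) are considerably more involved:

```python
import re

def standardize_phone(raw):
    """Strip punctuation so every number shares one consistent layout."""
    digits = re.sub(r"\D", "", raw)   # drop everything but digits
    return digits[-10:]               # keep the last 10, shedding a country code

# Hypothetical sources: an online signup list and an offline contact sheet.
online = {"(555) 123-4567": "newsletter_signup"}
offline = [{"name": "Ada Lovelace", "phone": "555.123.4567"}]

# Matching: link offline records to online activity via the standardized key.
online_keys = {standardize_phone(p) for p in online}
matches = [r for r in offline if standardize_phone(r["phone"]) in online_keys]
print(len(matches))  # -> 1: the differently formatted numbers now correlate
```

Without the standardization step, the two phone formats would never match – which is exactly why Gartner lists parsing and standardization ahead of matching.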
Marketing teams may also find it beneficial to use a marketing analytics platform that can provide better context for the numbers and figures generated by data - especially if it leverages third party data. This gives fundamental context and statistical backing to otherwise inconsequential bytes of data.
Organizations should also seek out technology that allows them to engage in advanced forms of attribution and measurement, such as multi-touch attribution and media mix modeling. This allows marketing teams to see a complete and accurate view of data, resulting in quality first-party metrics. Overall, better attribution and analytics will help your marketing team make the best data-driven decisions possible.
While data is powerful, marketing teams must be sure that data is being properly vetted to prevent driving campaigns in the wrong direction. However, don’t forget that data quality shouldn’t only be the concern of marketing: a concerted attempt to improve data quality must involve alignment from all corners of the organization. After getting all team members on board with a data quality initiative, marketing teams will be able to implement the right processes and technology to assist in their data-driven campaigns.