The State of Data Quality – and How Developers Can Improve Data Efficacy
April 21, 2020

Rachel Roumeliotis
O'Reilly Media

Organizations are paying increasing attention to data quality. This is no surprise: data is a valuable commodity, and a growing number of cloud-native applications rely on it to make decisions about product development, how to market to customers, and more. It's not uncommon to see developers leverage insights from customer data or use sensor data from a variety of connected IoT devices.

However, for data to be beneficial, it needs to be of high quality. Without high-quality data, organizations risk making costly decisions or missing opportunities that leave them behind their competitors.

O'Reilly recently surveyed more than 1,900 practitioners who work with data and/or code and the people who directly manage them to take a look at the quality of the data organizations are using to power their analytics and decision-making. The research found that organizations are concerned about data quality — and at the same time are uncertain about how best to address those concerns.

Organizations' Top Data Quality Issue: Too Many Data Sources

By a wide margin, respondents cited the sheer number of data sources as the single most common data quality issue: more than 60% said they had to deal with too many data sources and inconsistent data, followed by 50% who reported disorganized data stores and a lack of metadata, and 47% who reported poor data quality controls at data entry.

There's good and bad in this. Unfortunately, reducing the number of data sources is hard. The problem has been prevalent since the 1990s, and with the rise of self-service data analysis tools, it's hard to say when, if ever, we'll be rid of multiple, redundant, and sometimes inconsistent copies of data sets.

On the flip side, technological progress has been made on front-end tools that generate metadata and capture data provenance and lineage, which are essential for diagnosing and resolving data quality issues. This, combined with further education on data quality, data governance, and general data literacy, can help alleviate organizations' top concern, especially since only 20% of survey respondents reported that their organizations publish information about data provenance or lineage.
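Capturing provenance need not be elaborate to be useful. As a minimal sketch (the schema, field names, and file paths below are illustrative, not any standard), a pipeline step can emit a small record describing which sources produced a derived data set and a fingerprint of those inputs:

```python
import hashlib
import json
from datetime import datetime, timezone

def lineage_record(source_paths, transform_name, output_path):
    """Build a simple provenance record for a derived data set.

    All field names here are illustrative, not a standard schema.
    """
    return {
        "output": output_path,
        "transform": transform_name,
        "sources": sorted(source_paths),
        "created_at": datetime.now(timezone.utc).isoformat(),
        # A stable fingerprint of the inputs lets downstream consumers
        # detect when upstream data has changed out from under them.
        "input_fingerprint": hashlib.sha256(
            json.dumps(sorted(source_paths)).encode()
        ).hexdigest(),
    }

record = lineage_record(
    ["raw/orders.csv", "raw/customers.csv"],
    transform_name="join_orders_customers",
    output_path="curated/orders_enriched.parquet",
)
```

Publishing records like this alongside each derived data set is one lightweight way for an organization to start exposing lineage information without adopting a full governance platform.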

Organizations aren't dealing with only one data quality issue, however. In addition to a deluge of data sources, a majority of respondents reported that they deal with either three or four data quality issues at the same time. Other common data quality issues include poor data quality from third-party sources, missing data, and too few resources to address data quality issues.

Improving Data Quality to Build Better Applications

While organizations have little control over third-party data — and missing data will always be something we'll need to grapple with — there are practical steps organizations and their developers can take as they embark on new projects to improve the quality of their data.

Obtain C-suite buy-in and support: Data quality tends to be regarded as more of a people-and-process problem than a technological one. Executives have a clear perspective on the impact data quality can have on business operations and strategy, and they have the authority to spearhead data quality initiatives or help kick-start a data quality center of excellence.

Pick projects with clear business value: As with any project, it's important to pursue those that add significant business value to a new or existing business process. The same is true for data quality. Because data conditioning is not cheap, its costs should compel developers to take an ROI-based approach to how and where to deploy their data conditioning resources. This includes deciding what is not worth addressing.

Invest in AI: Artificial intelligence helps simplify and automate tasks, providing organizations with additional resources to address issues involved in discovering, profiling, and indexing data. In fact, almost half (48%) of respondents reported that they use data analysis, machine learning, or artificial intelligence tools to address data quality issues.
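Even before reaching for machine learning, much of this profiling work can be automated with simple statistics. The sketch below is a deliberately crude illustration (the column name, sample values, and the z-score threshold of 2 are all assumptions for the example; real profiling tools are far more sophisticated) of checking a column's null rate and flagging outliers:

```python
from statistics import mean, stdev

def profile_column(values):
    """Minimal data profile: null rate plus a crude z-score outlier
    rule. Illustrative only; real profilers use robust statistics."""
    present = [v for v in values if v is not None]
    null_rate = 1 - len(present) / len(values)
    mu, sigma = mean(present), stdev(present)
    # Flag values more than 2 standard deviations from the mean.
    # A single extreme value inflates sigma, so this rule is crude.
    outliers = [v for v in present if sigma and abs(v - mu) / sigma > 2]
    return {
        "null_rate": round(null_rate, 3),
        "outliers": len(outliers),
    }

# Hypothetical "age" column: two missing values and one entry (230)
# that is almost certainly a data-entry error.
ages = [34, 29, None, 41, 37, 33, None, 230, 38, 35]
report = profile_column(ages)
```

Running checks like this at data entry or ingestion time addresses two of the survey's reported issues at once: poor quality controls at entry and missing data going unnoticed.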

While we might not be able to solve all of our data quality issues, it's important to implement practices that incorporate data quality as part of the development process. By placing importance on data quality, developers can make better choices about the applications they build — and ultimately set up their organizations for success.

Rachel Roumeliotis is VP of Content Strategy at O'Reilly Media.