Artificial intelligence is rapidly moving from experimentation to embedded decision-making. Yet despite unprecedented advances in AI models, platforms, and automation, one fundamental truth remains unchanged: “AI is only as good as the data it relies on.” Poor data quality no longer just leads to incorrect reports; it results in biased models, unreliable predictions, and automated decisions that cannot be trusted or explained. As our reliance on AI grows, the damage poor data can cause is amplified.
This reality is not new. It has been well understood for decades by pioneers in data quality, most notably Larry English (1947-2022), whose work continues to shape how organizations should think about data as a managed business asset rather than a technical byproduct of some primary operational process.
A personal journey shaped by Larry English
My own perspective on data quality has been profoundly shaped by my personal journey with Larry English. Early in my career, I had the opportunity to spend four intensive weeks in Nashville, participating in his Total Information Quality Management (TIQM) training program. That experience fundamentally changed how I viewed data — shifting my focus from systems and tools to the social dynamics of accountability, ownership, and business impact.
Over the years, I also had the privilege of inviting Larry to Belgium for several speaking and training sessions, where he inspired organizations and professionals alike with his clarity, rigor, and conviction. To this day, I continue to rely on his principles and methods to address data quality challenges. More importantly, Larry’s work inspired me to put data and data quality at the center of everything I do — a focus that has only become more critical in today’s AI-driven world.
Larry English passed away in 2022, but his legacy lives on in the way modern organizations approach data governance, data quality, and AI readiness. He wrote several seminal books that are as relevant now as they ever were.
AI raises the stakes for data quality
Artificial intelligence does not eliminate data quality problems — it amplifies them. In traditional BI environments, poor data quality might lead to mistrusted dashboards or the need for onerous manual corrections. In AI systems, the consequences are far more serious: flawed training data leads to flawed models, and those flaws scale instantly, and often invisibly, across automated processes.

This is precisely why Total Information Quality Management (TIQM) remains essential. TIQM frames data quality as a management-led, continuous discipline, not a one-off technical cleanup. Its emphasis on business ownership, stewardship, root-cause analysis, and continuous improvement provides the structural foundation required to build AI systems that are reliable, explainable, and aligned with business goals.
Without these principles, AI initiatives run the risk of becoming sophisticated amplifiers of existing data problems.
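To make the measurement side of this concrete, here is a minimal sketch of a rules-based conformance check in Python. The field names, rules, and the 98% threshold are illustrative assumptions, not part of TIQM itself; the point is the loop — measure against explicit rules, then route failures to root-cause analysis rather than silent cleansing:

```python
# Minimal sketch of rules-based data quality measurement,
# in the spirit of TIQM's measure-then-improve loop.
# Field names, rules, and the 98% threshold are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    check: Callable[[dict], bool]  # returns True if the record conforms

RULES = [
    Rule("email_present", lambda r: bool(r.get("email"))),
    Rule("age_in_range", lambda r: isinstance(r.get("age"), int) and 0 < r["age"] < 120),
]

def conformance(records: list[dict]) -> dict[str, float]:
    """Share of records that pass each rule (1.0 = fully conformant)."""
    return {
        rule.name: sum(rule.check(r) for r in records) / len(records)
        for rule in RULES
    }

if __name__ == "__main__":
    sample = [
        {"email": "a@example.com", "age": 34},
        {"email": "", "age": 41},               # missing email
        {"email": "b@example.com", "age": 250},  # implausible age
    ]
    for name, score in conformance(sample).items():
        status = "OK" if score >= 0.98 else "INVESTIGATE ROOT CAUSE"
        print(f"{name}: {score:.0%} -> {status}")
```

In TIQM terms, a failing score is not an invitation to patch the data downstream; it is a signal to find and fix the defective process upstream.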
BARC research confirms the message
What Larry English articulated decades ago has been consistently confirmed by BARC’s annual surveys and research. Across analytics, BI, and AI studies, BARC repeatedly identifies data quality and trust in data as some of the most critical success factors — and at the same time, among the most persistent barriers to success.
Even organizations that invest heavily in modern cloud platforms, data lakes, and AI tooling continue to struggle when data ownership is unclear, responsibilities are fragmented, or improvement efforts lack continuity. These findings reinforce a core TIQM insight: technology alone cannot solve data quality problems. Or as Jerry (Gerald M.) Weinberg (1933-2018) put it: “No matter what it looks like, it’s always a people problem.”
Many contemporary initiatives — from AI readiness frameworks to data governance programs and data observability platforms — implicitly build on TIQM concepts such as stewardship, cost-of-poor-quality measurement, and continuous improvement. In that sense, TIQM is not a legacy methodology; it is the foundational backbone of modern data and AI practices.
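As a back-of-the-envelope illustration of the cost-of-poor-quality measurement mentioned above, the sketch below multiplies volume, defect rate, and per-defect cost. All figures are hypothetical assumptions for illustration, not BARC findings or TIQM benchmarks:

```python
# Back-of-the-envelope cost-of-poor-quality (COPQ) estimate.
# All inputs below are hypothetical assumptions, chosen for illustration.

records_per_year = 1_200_000  # records flowing through the process annually
defect_rate = 0.04            # share of records failing quality rules
cost_per_defect = 7.50        # average rework cost per defective record, in EUR

copq = records_per_year * defect_rate * cost_per_defect
print(f"Estimated annual cost of poor quality: EUR {copq:,.0f}")
# -> Estimated annual cost of poor quality: EUR 360,000
```

Even a rough estimate like this turns an abstract quality complaint into a business case for the stewardship and improvement work TIQM prescribes.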
TIQM as a foundation for sustainable AI
As organizations push AI deeper into core processes, decision-making, and customer interactions, the need for disciplined data quality management becomes a non-negotiable requirement. AI success depends not just on better algorithms, but on better-managed data.
TIQM provides exactly that foundation: a proven framework that aligns people, processes, and technology around the shared goal of business value from trustworthy information. It ensures that AI initiatives are not built on fragile assumptions, but on data that has clear ownership and is governed, measured, and continuously improved.
A legacy that still guides the future
For me, this is not an academic discussion. The principles of TIQM continue to guide how we help organizations build data foundations that are fit for AI. What Larry English taught — and what BARC’s research continues to validate — is that trustworthy data does not happen by accident. It requires leadership, discipline, and long-term commitment.
As AI becomes increasingly embedded in how organizations operate and compete, revisiting and applying the fundamentals of TIQM is not a step backward. It is a necessary step forward — one that ensures AI delivers value that is reliable, explainable, and sustainable.