
Everyone's racing to deploy artificial intelligence (AI), but few are stopping to ask whether the data powering these systems is trustworthy.
According to Andy Bell, SVP of Global Data Product Management at Precisely, the reality is sobering. "Only around 12% of businesses actually trust the data they're feeding into AI," he said. That statistic alone should stop any C-suite in its tracks.
In this conversation recorded on Tech Talks Daily, he explained why this trust gap is the Achilles' heel of enterprise AI.
Key Takeaways
Only 12% of organizations trust the data used in their AI systems.
Agentic AI without traceable data is a governance risk, especially in regulated sectors where explainability and accountability are non-negotiable.
Internal data alone is often too narrow for robust AI outcomes, but external data enrichment adds essential context that drives better decisions.
Real-world examples show measurable ROI: one delivery company saved $65 million by improving address accuracy with Precisely's enriched datasets.
Natural language access to third-party data is emerging fast, enabling business users to query complex data sources without needing engineering support.
AI Doesn't Fix Bad Data, It Amplifies It
Data integrity isn't a technical nice-to-have. It's the baseline for any AI initiative that's meant to scale. Yet, what we're seeing across the board is a rush to experiment and build proofs of concept without addressing the real structural problem of poor AI data quality.
Enterprises are layering large language models (LLMs) and AI agents on top of fragmented datasets, often filled with inconsistencies and blind spots. That disconnect is costing businesses more than just performance. It's eroding trust internally and externally.
Bell told me:
"If you don't have really good data, or you can't trust that data, we can see the hallucinations, we can see the bias, and the unreliable outcomes."
The bigger problems arise when these outcomes impact customer interactions and mission-critical systems. It's no longer just a case of "garbage in, garbage out." It's risk in, trust out.
When Agentic AI Goes Rogue
Agentic AI is adding another layer of complexity. If the data it's working from is flawed, the decisions will be too, only this time, they might be happening at scale and without human oversight.
📍 “Why did your AI make that decision?”
🤖 “No idea. The system did it.”
If you can’t trace a decision, you can’t trust it, especially in regulated industries. Who’s accountable when AI gets it wrong? @PreciselyData’s Andy Bell raises the question. https://t.co/liCTsBbny0 pic.twitter.com/bncRVhFcik
— Neil C. Hughes (@NeilCHughes) September 2, 2025
Data consistency and database integrity must be built into the AI lifecycle from the outset. You can't expect explainability, accountability, or fairness without a trusted data foundation. Traceability, Bell argues, is no longer a luxury. It's a requirement.
It's not just about stopping bad outcomes. It's about enabling good ones, with confidence. For agentic AI to be safe, scalable, and trustworthy, it needs to rely on data context: the relationships between values, locations, histories, and metadata that tell the whole story.
The Case for Contextual Intelligence
Most internal data only tells part of the story. That's where data enrichment comes into play.
Bell explained:
"Lots of organizations have their own customer data, but it's peculiar to their lens. What we bring is the outside world – the location, the neighborhood, the demographics, even the risks tied to a given address."
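Conceptually, this kind of enrichment is a join: internal records and external reference data linked on a shared key, such as a standardized address. The sketch below illustrates the pattern with hypothetical field names and toy data; it is not Precisely's schema or API.

```python
# Hypothetical sketch: enrich internal customer records with external
# location context by joining on a standardized address key.
# All field names and sample values are illustrative.

def standardize(address: str) -> str:
    """Naive address normalization so internal and external keys match."""
    return " ".join(address.upper().replace(",", " ").split())

# Internal view: what the business already knows about its customers.
internal = [
    {"customer_id": 1, "address": "12 Main St, Springfield"},
]

# External view: third-party context keyed by the same standardized address.
external = {
    standardize("12 Main St Springfield"): {
        "flood_risk": "low",
        "median_income": 61000,
        "neighborhood": "Downtown",
    },
}

def enrich(records, reference):
    """Attach external context to each internal record where a key matches."""
    out = []
    for rec in records:
        key = standardize(rec["address"])
        out.append({**rec, **reference.get(key, {})})
    return out

enriched = enrich(internal, external)
print(enriched[0]["flood_risk"])  # -> low
```

The normalization step is the hard part in practice; real address matching handles abbreviations, misspellings, and geocoding, which this toy `standardize` does not.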
Bell shared how a major food delivery company was struggling with inaccurate drop-offs. Their drivers were regularly delayed or rerouted due to poor address data. But after implementing Precisely's data enrichment and geolocation datasets, the company was able to improve on-time deliveries and save $65 million in reduced customer complaints and lost deliveries. Their drivers even began receiving bigger tips.
Elsewhere, San Bernardino County in California used Precisely's wildfire risk models to plan more effective evacuation strategies. Predictive data tied to location context allowed local officials to prioritize vulnerable sites, such as care homes and hospitals.
These stories demonstrate the value of data integrity. Measurable gains in operational efficiency, lower costs, and even the protection of human lives make for a refreshingly positive AI story at a time when so many businesses are struggling to find ROI from their projects. That's the value of data context and why it should be at the heart of every AI discussion.
You know your customers, but not the ones you’re missing. Internal data gives a narrow lens. That’s why more orgs are turning to external data enrichment for a fuller view of the market, risks, and opportunities. Context is everything. https://t.co/liCTsBbny0 pic.twitter.com/QWYGLoL6v7
— Neil C. Hughes (@NeilCHughes) September 2, 2025
Even when businesses recognize the value of third-party data, they often struggle to implement it effectively. Bell didn't shy away from the challenge.
"Post-sale onboarding is frequently more expensive and time-consuming than the initial purchase," he noted. That complexity often scares teams away from external datasets, even when they offer real value.
The core issue isn't just technical. It's operational. Teams are often faced with outdated data formats, multiple vendors, and disconnected delivery mechanisms. As AI systems grow more complex, that inefficiency multiplies. You can't afford to spend weeks cleaning and mapping data every time a new source is introduced.
The Hidden Architecture Behind Smart AI
Precisely's Data Link initiative is designed to simplify how organizations connect and manage multiple data sources. Ultimately, it serves as a translation layer that standardizes and synchronizes external information into a single, consistent stream of data.
Bell said:
"Customers shouldn't need to care where the data comes from. They should be able to plug in a business ID or a location and get everything they need – risk profiles, demographic overlays, infrastructure data – without doing 10 separate joins."
This is where database integrity becomes a tangible advantage in the real world. By managing IDs and context centrally, organizations avoid duplication, misalignment, and blind spots. Data Link turns fragmented information into unified intelligence – a foundational need for scalable AI.
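The "no 10 separate joins" idea can be pictured as a single lookup that fans out to several sources behind the scenes and merges them on one shared ID. The sketch below is purely illustrative; the source names, fields, and data are hypothetical and do not represent Precisely's Data Link implementation.

```python
# Hypothetical sketch of a "translation layer" over multiple data vendors:
# one lookup by a shared location ID queries every source and returns a
# single merged record. Source names and fields are illustrative.

RISK = {"LOC-001": {"wildfire_risk": "moderate"}}
DEMOGRAPHICS = {"LOC-001": {"population": 12500}}
INFRASTRUCTURE = {"LOC-001": {"nearest_hospital_km": 3.2}}

SOURCES = [RISK, DEMOGRAPHICS, INFRASTRUCTURE]

def lookup(location_id: str) -> dict:
    """Merge every source keyed on the same ID into one consistent record,
    sparing the caller from doing the joins themselves."""
    record = {"location_id": location_id}
    for source in SOURCES:
        record.update(source.get(location_id, {}))
    return record

print(lookup("LOC-001"))
```

The design point is that the caller holds one ID and one contract; adding or swapping a vendor changes the `SOURCES` list, not every downstream consumer.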
Natural Language Meets Enterprise Data
As organizations grow more comfortable with AI, they want their data systems to work more like the AI interfaces they already use. The goal is to be able to ask a question and receive a reliable, data-backed answer. Precisely is working on exactly that.
By layering natural language interfaces over its datasets and connecting them via tools like Claude or Model Context Protocol, it enables a new type of interaction. One that's intuitive, fast, and powerful. That's the promise of combining AI data quality with user-friendly interfaces.
It's hoped that this approach will continue to reduce dependence on engineers or data scientists to mediate between business users and raw data. It opens up access and speeds up time-to-value. In a market where speed and accuracy increasingly determine competitiveness, that's no small win.
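The pattern described here reduces to exposing data lookups as tools an AI assistant can call on a user's behalf. The toy router below stands in for the LLM's tool selection with simple keyword matching; in a real setup, a model connected via something like the Model Context Protocol would choose the tool. All names and data are hypothetical.

```python
# Illustrative only: "natural language over data" as tool-style lookups.
# A toy keyword router stands in for an LLM's tool selection; the data
# and function names are hypothetical.

DATA = {"94105": {"flood_risk": "high", "population": 30000}}

def get_flood_risk(zip_code: str) -> str:
    return DATA[zip_code]["flood_risk"]

def get_population(zip_code: str) -> int:
    return DATA[zip_code]["population"]

TOOLS = {"flood": get_flood_risk, "population": get_population}

def answer(question: str, zip_code: str):
    """Toy stand-in for an LLM's tool choice: route by keyword."""
    for keyword, tool in TOOLS.items():
        if keyword in question.lower():
            return tool(zip_code)
    return "No matching data source."

print(answer("What is the flood risk here?", "94105"))  # -> high
```

The business user asks a question; the heavy lifting of knowing which dataset to query, and how, stays behind the interface.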
The Bottom Line
What has become clear in conversations like this is that AI maturity isn't about having the latest model or the biggest compute budget. It's about trust. And trust, as Bell reminded us, starts with data integrity.
"People are starting to realize they've run their AI pilots, they've hit the limits, and now they're asking the right questions," Bell said. "They want AI that works. And that means AI built on data you can trust."
Without data integrity, AI will fail, confusing rather than clarifying. But done right, data enrichment, data consistency, and data context aren't just supporting actors. They're the stage on which AI performs.
FAQs
Why is data integrity critical to the success of AI strategies?
AI depends on accurate, complete, and consistent data. Without it, AI makes decisions based on flawed inputs, resulting in unreliable or biased outcomes. Good outcomes start with trusted data.
Why is data integrity important?
Poor data fuels AI hallucinations, which lead to poor decisions and a loss of trust. If you don’t address how to ensure data integrity, your business risks regulatory issues, customer dissatisfaction, and models that can’t be explained or trusted.
How can third-party data enrichment improve AI outcomes and business value?
Data enrichment adds missing context, such as location, risk, or demographics. This improves AI data quality and helps businesses make faster, more accurate decisions that deliver measurable ROI.
