The trust problem is not what you think it is
The question most teams ask about AI is: "Is it accurate?"
That's the wrong question.
The right question is: "Can I verify it?"
Accuracy matters. But a system you can audit – one where you can trace why it said what it said, check it against your own records, and catch when it's wrong – is worth more than a system that's right 95% of the time and opaque about the other 5%.
This is the second transition: Trusting your own data.
A media analytics firm I worked with had fragmented customer IDs across three systems. Their data was technically present – it just didn't agree with itself. Before any AI could work reliably, they had to reconcile those identifiers and establish a single source of truth.
That's not an AI problem. That's a data integrity problem that AI exposed.
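To make that concrete, here's a minimal sketch of what reconciling identifiers can look like. The system names, fields, and matching rule (an exact email match) are all hypothetical; real entity resolution usually needs fuzzier matching and human review.

```python
import pandas as pd

# Hypothetical exports from three systems that each hold customer records.
crm = pd.DataFrame({"crm_id": ["C-101", "C-102"],
                    "email": ["ana@example.com", "bo@example.com"]})
billing = pd.DataFrame({"billing_id": ["B-77", "B-78"],
                        "email": ["ana@example.com", "cy@example.com"]})
support = pd.DataFrame({"ticket_owner_id": ["S-9"],
                        "email": ["ana@example.com"]})

# Build a crosswalk keyed on a shared attribute (here: email).
# Downstream consumers read the canonical_id, not the system-specific ones.
crosswalk = (crm.merge(billing, on="email", how="outer")
                .merge(support, on="email", how="outer"))
crosswalk["canonical_id"] = "CUST-" + crosswalk.index.astype(str)

print(crosswalk[["canonical_id", "crm_id", "billing_id", "ticket_owner_id", "email"]])
```

The merge isn't the point. The point is that there's exactly one place where "who is this customer" gets decided.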
A financial services firm saw something different: their AI invoice parser came back at 94.5% accuracy – and in the process, surfaced manual data entry errors that had been sitting undetected for months. The AI didn't just perform. It improved the underlying data it was working from.
This is what good looks like at this level: AI that connects to your proprietary data, produces outputs you can check, and makes the data itself more trustworthy over time.
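Here's a sketch of the checking loop that makes a result like that possible: compare what the parser extracted against what was keyed in manually, and route any disagreement to a person. The field names and records are hypothetical.

```python
from decimal import Decimal

# Hypothetical records: one from the AI parser, one from manual entry.
parsed = {"invoice_no": "INV-2041", "total": Decimal("1250.00"), "vendor": "Acme Ltd"}
manual = {"invoice_no": "INV-2041", "total": Decimal("1520.00"), "vendor": "Acme Ltd"}

def find_discrepancies(parsed: dict, manual: dict) -> list[str]:
    """Return the fields where the two records disagree."""
    return [f for f in parsed if parsed[f] != manual.get(f)]

mismatches = find_discrepancies(parsed, manual)
if mismatches:
    # Neither side automatically wins: the disagreement goes to a reviewer,
    # and whichever value was wrong gets corrected at the source.
    print(f"Review needed for {parsed['invoice_no']}: {mismatches}")
```

The value isn't the comparison itself; it's that every disagreement either confirms the parser or fixes the underlying record.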
The practical questions for this transition:
- Where are the authoritative sources of truth in your data?
- What happens when two systems disagree – who wins, and how?
- Could you audit an AI recommendation if you had to? (There's a small sketch of what that might look like below.)
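On that last question, here's a minimal sketch of the kind of record that makes an audit possible: keep the output, the model version, and the internal records it was based on together, so any recommendation can be traced back to its sources. The field names are illustrative, not a standard.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    """Everything needed to reconstruct why the system said what it said."""
    recommendation: str
    model_version: str         # which model produced the output
    source_records: list[str]  # IDs of the internal records it was given
    created_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

# Hypothetical example: store this alongside the recommendation itself.
record = AuditRecord(
    recommendation="Flag invoice INV-2041 for manual review",
    model_version="parser-v3.2",
    source_records=["INV-2041", "vendor/acme-ltd"],
)
print(json.dumps(asdict(record), indent=2))
```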
Teams that skip this step build on sand. The AI looks impressive until a high-stakes moment reveals that nobody fully trusted the outputs, and nobody had a way to verify them.
Where's the data trust problem in your organization? Sometimes it's a single system, sometimes it's a reconciliation nightmare across departments. Reply and tell me what you're working with.
— Will