The meeting was supposed to be a celebration. I had spent two weeks building a Power BI dashboard: clean visualisations, drill-down capabilities, everything colour-coded just right. Today was the big reveal to the team.

I clicked through the first three pages of the report with visible pride. Then came the questions.

"What's the source of this data?"

"I'm seeing different numbers in my reports. Is this the full picture?"

"Wait, my department name is wrong here."

Within fifteen minutes, the conversation had shifted from "how do we use this" to "can we trust this." Two weeks of careful work landed in the purgatory of nice-looking but unreliable tools. Another dashboard destined to be opened once, then never again.

The Pattern Nobody Talks About

When Business Intelligence tools arrived, we were going to have a "single source of truth." What we got instead was a myriad of Excel files—each department maintaining its own version of reality. The BI dashboards existed, technically. Nobody opened them.

Then came Robotic Process Automation. RPA was going to eliminate repetitive work and free people for higher-value tasks. In practice, we automated processes nobody had questioned in years. The bots dutifully executed seventeen unnecessary steps, faster than any human could. We didn't streamline work; we institutionalised waste at machine speed.

It is easy to blame the technology: "BI didn't deliver ROI", "RPA was overhyped". But the real problem was never the technology. It was what we pointed it at.

I've been guilty of this too. Early in my career, I was excited about every new platform and capability. It took watching the same pattern repeat three times before I understood: the technology was never the constraint. Our willingness to do the hard thinking was.

Why AI Is More Dangerous

Now we have AI. And the pattern is repeating, except this time the stakes are categorically higher.

Previous technologies had a kind of built-in friction that forced at least some confrontation with underlying problems. To automate a process with RPA, you had to map it explicitly, step by step. The mapping exercise sometimes revealed absurdities—approval chains that looped back on themselves, data that passed through six hands before anyone actually used it. Not always, but sometimes.

AI doesn't require that confrontation. Large language models can figure out your messy process without you ever having to articulate it clearly. They're remarkably good at pattern-matching their way through chaos. They don't complain. They don't ask why. They execute.

And they do it with confidence.

That confidence is the trap. When an AI system produces outputs that look polished and professional, built on data nobody trusts and processes nobody examined, the dysfunction doesn't disappear. It scales. It gets wrapped in fluent prose and sent to executives who assume the logic has been verified.

The research backs this up. According to MIT's GenAI Divide report, only about 5% of business AI initiatives generate meaningful value. Gartner's 2025 Hype Cycle found that companies spend an average of $1.9 million on generative AI initiatives, yet fewer than 30% of AI leaders report that their CEOs are satisfied with the results.

Meanwhile, individual workers quietly report saving 2 hours a week by using AI tools for small tasks—drafting emails, summarising documents, automating minor friction points. Those modest, unglamorous gains are where the real value lives. But they don't make for exciting board presentations.

The Question You're Probably Not Asking

You're a leader navigating AI adoption. You feel two pressures pulling in opposite directions. On one side, the push to move fast: competitors are deploying, the board wants updates, and consultants are circling. On the other, the sense that something is off. The foundation is shaky. Data quality conversations keep getting deferred. The processes you're asked to "AI-enable" haven't been examined critically in years.

That uneasy feeling isn't resistance. It's pattern recognition.

The leaders who succeed with AI won't be the fastest movers. They'll be the ones who understand what they're amplifying before they hit accelerate.

So if you are about to kick off a new "AI" initiative, ask yourself these four questions:

  1. If the AI failed tomorrow, would anyone trust the underlying data enough to make decisions manually? If the answer is no, you're not ready. You're building on sand. The AI will mask the distrust for a while, producing outputs that look authoritative. But the first time those outputs surprise anyone, the old doubts will resurface with renewed vigour.

  2. Has anyone mapped the process you're about to accelerate? Has anyone challenged whether it should exist at all? Not documented it. Questioned it. There's a difference. Documentation preserves; questioning transforms. If the process exists because "that's how we've always done it," you're about to scale up institutional inertia.

  3. Are you measuring success by deployment speed or by value delivered? This one sounds obvious, but look at your actual metrics. Look at what gets celebrated in leadership meetings. If the conversation is about how fast you launched rather than what changed afterwards, you're optimising for optics.

  4. What happened to your last three transformation initiatives? Can you name them without embarrassment? If they're sitting abandoned somewhere, technically live but practically ignored, what makes you confident this one will be different? New technology doesn't change organisational patterns. Honest confrontation with those patterns does.

These are pressure tests. Your honest answers will tell you what work needs to happen before the AI work.

The Reframe

Here’s what I want to offer, especially if you’re feeling the pressure to move fast while sensing that the foundation isn’t there: slowing down to build readiness isn’t blocking progress. It’s protecting the investment.

There’s language that helps here. Instead of saying “our data isn’t ready”, which sounds like an excuse, try “we’re investing in data integrity so our AI investments actually pay off.” Instead of “we should fix the process first”, which sounds like stalling, try “we’re ensuring we automate value, not waste.”

The shift is subtle but meaningful. You’re not the person saying no to AI. You’re the person saying yes, and here’s how we make it actually work.

Because the alternative is far more expensive than a few extra months of foundation work: rushing to deploy, building on dysfunction, discovering the problems only after they’ve scaled. It’s costly in dollars, yes. But it’s even costlier in credibility. Failed AI initiatives don’t just waste budgets; they poison the well for future attempts. People remember. They become cynical. “Remember when we tried AI for that reporting project? What a mess.”

The leaders who get this right tend to share a few characteristics. They’re honest about what they don’t know. They’re willing to do the unglamorous work of process examination before the glamorous work of technology deployment. They measure quietly and adjust constantly rather than declaring victory prematurely.

They understand that AI is an amplifier. And amplifiers don’t care whether they’re amplifying a signal or noise.

The Uncomfortable Truth

There's a harder version of all this that I've been circling. It's not just that organisations apply AI to broken processes. It's that new technology often becomes the escape hatch from the hard work that the previous technology was supposed to enable.

BI was going to force us to take data quality seriously. Then it got hard, so we moved on. RPA was going to force us to map and optimise processes. Then it got hard, so we moved on. Each shiny new thing provides cover for abandoning the last shiny thing before the difficult middle work was complete.

AI fits this cycle perfectly, maybe too perfectly. It lets you skip even more of the hard thinking. It doesn't force confrontation. It accommodates chaos.

Which means the choice facing leaders right now isn't really about AI at all. It's about whether you're willing to do the foundational work that every technology wave eventually demands, or whether you'll ride the hype curve to its inevitable trough and wait for the next wave to save you.

The research suggests most will choose the latter. Only about 5% of initiatives generate meaningful value, remember.

But you don't have to be in the 95%.

The leaders who win with AI won't be the fastest. They'll be the ones who know what they're amplifying.

What are you amplifying right now?

Wojciech Pozarzycki, January 2026
