There is a number that should keep every C-suite executive in the region awake at night. According to MIT research published last year, ninety-five per cent of enterprise AI initiatives fail to move beyond the pilot stage. Not fifty. Not seventy. Ninety-five.
I have spent the better part of two decades watching organisations attempt digital transformations of various kinds. The pattern is remarkably consistent. A leadership team gets excited about a new technology. Budgets are allocated. Vendors are selected. Pilots are launched with great fanfare. And then, somewhere between the proof of concept and the production deployment, the whole thing quietly dies. The vendor gets blamed. The technology gets blamed. Occasionally, the CIO gets blamed.
But the technology is rarely the problem. The problem is that organisations try to learn something new without first unlearning something old.
The First Law: Unlearn the Myth of the Silver Bullet
Every few years, a technology comes along that the market declares will solve everything. Cloud was going to eliminate IT costs. Blockchain was going to eliminate intermediaries. Now AI is going to eliminate inefficiency itself. The pattern is always the same: a genuine technological breakthrough gets inflated into a universal solution, and organisations rush to adopt it without asking the most basic question—what specific problem are we solving?
Last quarter I sat across from a CEO in Dubai who told me, with complete sincerity, that he wanted to "implement AI across the organisation." When I asked him what that meant in practice, there was a long silence. He had been told by his board that competitors were investing in AI, and he needed to do the same. The strategy, in its entirety, was "do AI."
This is not a strategy. It is a reaction. And reactions, by definition, are not governed. They are not measured. They do not produce sustainable outcomes.
The first law of digital unlearning requires organisations to abandon the belief that any technology—including AI—is inherently transformative. Technology is an accelerant. It accelerates whatever is already there. If your processes are sound, your data is clean, and your people are aligned, AI will accelerate your advantage. If your processes are broken, your data is fragmented, and your people are resistant, AI will accelerate your dysfunction. At greater speed and at greater cost.
The Second Law: Unlearn the Separation of Strategy and Governance
In most organisations I encounter, strategy and governance live in separate rooms. The strategy team dreams up ambitious AI roadmaps. The governance team writes policies that constrain them. The two groups meet occasionally, usually when something has already gone wrong, and the conversation is adversarial by design.
This separation is a relic of an older model of technology adoption, where the risk profile of a new system could be assessed once, at the point of deployment, and then monitored passively. AI does not work this way. A machine learning model that performs perfectly in testing can develop bias in production. A generative AI system that produces accurate outputs today can hallucinate tomorrow. The risk is not static. It evolves. And governance that is separated from strategy cannot evolve with it.
The second law demands that governance be embedded in the strategy itself—not bolted on afterwards. I use the metaphor of brakes on a car deliberately. Brakes do not exist to stop the car. They exist to allow the driver to go faster with confidence. A governance framework that is designed alongside the AI strategy does not constrain innovation. It enables velocity. It gives the board confidence to approve larger investments. It gives regulators confidence that the organisation is acting responsibly. It gives customers confidence that their data is being handled with care.
In the UAE, this is not a theoretical concern. The Personal Data Protection Law is maturing. The EU AI Act, which applies to any organisation that places AI systems on the EU market or whose systems affect people in the EU, is now in force. The DFSA is actively surveying AI adoption within the DIFC. Organisations that treat governance as a separate workstream from their AI strategy are building on foundations that will not hold.
The Third Law: Unlearn the Outsourcing of Capability
This is the one that makes consulting firms uncomfortable, and I say that as someone who runs one.
The traditional consulting model is built on dependency. The longer the engagement, the more revenue it generates. The more complex the deliverable, the harder it is for the client to maintain it independently. This model has worked for decades because the knowledge gap between the consultant and the client was wide enough to justify it.
With AI, this model is not just outdated. It is actively harmful.
AI is not a system you install and walk away from. It is a capability that must be continuously managed, refined, and governed by the people who understand the business context in which it operates. An external consultant can design your governance framework. They can build your initial models. They can train your teams. But if, twelve months later, your organisation cannot operate and evolve those systems independently, the engagement has failed—regardless of how polished the final presentation was.
The third law of digital unlearning requires organisations to stop outsourcing capability and start insourcing it. This means selecting partners who measure their success by your independence, not by the length of their contract. It means investing in your people—not just in technology licences. It means accepting that the most valuable deliverable from any AI engagement is not a model or a dashboard. It is the institutional knowledge that allows your team to build the next one without picking up the phone.
The Path Forward
The ninety-five per cent failure rate is not inevitable. It is the consequence of organisations applying old mental models to a fundamentally new challenge. The enterprises that will succeed with AI in this region—and there will be many, because the ambition and the investment are both extraordinary—are the ones willing to do the difficult work of unlearning before they attempt to learn.
Unlearn the silver bullet. Define the specific problem before selecting the technology.
Unlearn the separation. Embed governance into strategy from day one.
Unlearn the dependency. Build internal capability that outlasts any engagement.
The organisations that do this will not just deploy AI successfully. They will deploy it sustainably. And in a market moving as fast as the UAE, sustainability is the only competitive advantage that compounds.
