For the last couple of years, artificial intelligence has lived in a comfortable place. Not quite experimental, not quite accountable…
It sat alongside innovation roadmaps, future-facing decks and internal pilots with reassuring labels like “exploration” and “learning phase”. Results were interesting. Progress was promising. Outcomes were… hard to pin down.
I think that cushion is disappearing.
As we head into 2026, AI is being pulled out of the innovation wing and seated somewhere far less forgiving: next to operating costs, headcount decisions and quarterly forecasts. The conversation is shifting from possibility to performance.
Not “what could this do one day?”
But “what did this actually change?”
That shift has consequences for how brands build, buy, govern and talk about intelligence that isn’t human.
When Excitement Becomes Expectation
Early AI rollouts were fuelled by novelty. Teams were impressed by speed alone. Something that took hours now took seconds. Something that felt complex suddenly felt accessible.
But novelty has a short shelf life.
Once the surprise wears off, AI gets judged like everything else in a business. Does it reduce effort or just move it around? Does it lower risk or create new, quieter risks that surface later? Does it compound capability or quietly deskill people?
In 2026, most organisations will stop accepting “potential” as an answer. Leaders will want to see patterns over time, not isolated wins. They’ll want to know what changed six months after deployment, not six minutes after the demo.
This is uncomfortable for teams that championed AI early, because belief has to give way to evidence. But it’s also healthy. Tools that matter should survive measurement.
The smartest organisations won’t wait to be audited by finance or procurement. They’ll define success on their own terms first, deciding what gets replaced, simplified or removed when AI is introduced. Otherwise, it just piles on top of existing work and becomes impossible to defend.
Ownership Becomes Part of the Brand Story
For years, infrastructure choices lived safely below the surface. Customers rarely cared where systems ran, as long as they worked.
I feel AI changes that.
When decision-making, recommendations, approvals or assessments are driven by models trained elsewhere, on data few people fully understand, questions of control stop being abstract. They become reputational.
I’m sure that boards are already starting to ask uncomfortable but necessary questions. Where do these models live? Who can access them? What happens if regulations change or suppliers shift direction? Can we explain this AI-led product or service to a regulator or a customer without talking shite?
For organisations built on trust, vague answers won’t hold. Banks, healthcare providers, insurers, public services and any brand handling sensitive data will be expected to show not just what their AI does, but where it belongs.
In many cases, this will mean trading a bit of raw capability for clarity and control. Fewer black boxes. More explicit boundaries. A clearer sense of what is owned versus rented.
The brands that get ahead of this will be able to speak plainly about their AI supply chain, rather than scrambling to explain it when someone else frames the story for them.
Smaller Systems, Tighter Focus
The loudest AI headlines still revolve around scale. Bigger models. Broader abilities. Fewer limits.
But the most effective applications emerging inside organisations look almost modest by comparison.
They are narrow. Purpose-built. Fed with carefully chosen data. Embedded directly into how work already happens.
A system trained deeply on one company’s knowledge, processes and edge cases will usually outperform a general model that knows a little about everything but nothing about how that organisation actually operates.
This points to a shift in where value lives. Less in the model itself, more in how it’s shaped, constrained and connected to real workflows. Context becomes more important than raw intelligence.
For digital strategy and brand teams, this is good news. It means AI can reflect how you work, not force you to conform to how a vendor imagined work should happen. It also makes differentiation possible again, after a period where everyone seemed to be using the same tools in the same way.
Synthetic Media Stops Being a Toy
AI-generated video, voice and imagery have spent their early years oscillating between impressive and unsettling. Much of it was shared because it was strange, not because it was useful.
That’s changing.
Behind the noise, teams have been using these tools pragmatically: internal training, product explainers, pitch materials and regional adaptations that would never justify traditional production costs.
In 2026, this kind of use will feel normal. Not everywhere, and not for everything, but enough that refusing it outright will feel impractical rather than principled.
At the same time, the risks become impossible to ignore. Misuse of likeness. Unapproved content. Synthetic material and ads featuring models with three eyes escaping into the wild faster than corrections can follow.
The organisations that navigate this well won’t be the ones making dramatic declarations. They’ll be the ones quietly putting governance in place early: clear consent rules, proper asset management, legal involvement before problems arise, not after.
AI media won’t be banned. It will be managed.
Explainability Stops Being Optional
There are environments where “the system is very complex” has never been an acceptable explanation. AI is now entering those environments at scale.
Healthcare, finance, law, public services. Anywhere decisions carry real-world consequences.
In these spaces, output quality alone isn’t enough. People want to understand how conclusions were reached, how sensitive they are to change and where the system’s blind spots might be.
What’s new is that these expectations are spreading beyond regulators and specialists. Procurement teams, compliance officers and senior leaders are starting to ask vendors harder questions about transparency and accountability.
For brands, this creates a clear divide. Some will double down on opaque tools and hope scrutiny stays light. Others will invest in systems that can be explained well enough to stand up in front of customers, auditors and even courts.
Trust will increasingly favour the latter.
Seeing the Impact on Work in Real Time
Debates about AI and jobs have been long on opinion and short on data. That imbalance is beginning to correct.
We’re starting to see more granular insight into how AI affects specific tasks, roles and career paths. Not years later, but close to real time.
This changes the nature of responsibility. Leaders won’t be able to claim ignorance when patterns emerge. They’ll see where entry-level roles are thinning, where productivity gains are real and where supposed efficiencies are being swallowed by new layers of checking and oversight.
Some will use this visibility to accelerate cost-cutting. Others will treat it as an early warning system, investing in training, redesigning roles and creating new progression paths.
From a brand perspective, silence won’t be neutral. If AI reshapes work under your banner, people will expect you to have a point of view on what that means and how you’re responding.
When People Don’t Wait for Permission
Large organisations move slowly for good reasons. AI tools do not.
When official pathways lag, individuals route around them. They adopt tools quietly. They experiment on their own time. They solve immediate problems without waiting for a steering group.
In 2026, this gap between formal approval and informal use will widen in many sectors. That creates risk, but it also reveals something important: where existing systems are failing the people meant to use them.
Organisations can respond with tighter controls and blanket bans, or they can treat this behaviour as signal rather than rebellion. The latter approach requires faster decision-making and a willingness to bring useful tools inside the tent properly.
Customer-facing teams will feel this pressure most. If official systems lag too far behind, brand interactions will be shaped by tools the organisation neither chose nor understands.
The Quiet Question Underneath It All
Beneath the metrics, governance and infrastructure debates is a subtler issue that will define the next phase of AI.
What does working with these systems do to people over time?
Some tools make users sharper. Others make them passive. Some encourage learning and reflection. Others encourage dependence and agreement.
As AI becomes embedded in education, wellbeing, productivity and everyday decision-making, these effects matter more than raw efficiency. A brand that ships something clever but leaves its users less capable a year later has not really created value.
The strongest products and services will be designed with this in mind. Not just to keep people engaged, but to leave them better equipped to operate without the system if needed.
Preparing for a More Serious Phase
None of this suggests retreat. It suggests focus.
Organisations heading into 2026 should be asking simple, demanding questions. What problem is this actually solving? Where does it live? Who is accountable for it? What changes if it works? What stops if it doesn't?
The period from 2023 to 2025 forced attention. It was loud, uneven and sometimes absurd. But it made AI unavoidable.
The next phase is quieter. More sober. More grounded in trade-offs.
AI is no longer auditioning. It’s been hired. Now it has to perform, explain itself and earn its place alongside everything else that competes for trust, budget and belief.
That’s not a threat to good digital strategy or strong brands.
It’s the moment when both can finally grow up.
Get in touch with Friday sometime in 2026 if you want your digital strategy infused with AI thought leadership.

