71% of organizations now use generative AI in at least one business function.
Yet over 80% still see no tangible impact on their bottom line.
Read those two sentences again. That's the entire GenAI debate in 2025 summarized.
Usage is exploding. Value is lagging. And most companies don't understand why.
What Should You Do Differently in 2026?
Here's the uncomfortable truth: High adoption is not the same as high impact.
I see it with my clients. Again and again. The tools have been rolled out. IT approved them. The licenses are paid. But the workflows are the same. Training was a half-day workshop six months ago. Governance is something someone talks about, but no one owns.
McKinsey describes it exactly this way: GenAI is now widespread in many organizations, but most still see no clear EBIT impact.
Gartner estimates that global GenAI spending will hit $644 billion in 2025. Up 76.4% from the previous year. The money is pouring in. But money alone solves nothing.
The companies that actually succeed do something different. They connect the technology to concrete workflows. They redesign work. They train more than once. They have people who validate output before it reaches customers and decisions.
That's why augmentation is a more precise strategy than automation as a default reflex.
The Experimentation Explosion Is Not Enough
Adoption has been wild. 71% use GenAI regularly. The most common areas: marketing, sales, product development, service operations, and software engineering. That's a significant increase in a short time.
But adoption is one thing. Transformation is something entirely different.
BCG has pointed it out repeatedly: most companies are stuck in the pilot stage. Only about 10% use GenAI at scale. There's a huge difference between having many experiments running and actually having changed the way core work gets done.
That's what confuses the debate. You can have high activity, great internal enthusiasm, and plenty of visibility in the organization. Without moving a single dollar on the bottom line.
When I run workshops, I always ask: "Can you name one concrete example where your GenAI usage has changed a core process?" The silence says it all.
Usage Is Not the Same as Value
MIT NANDA puts it sharply: after large enterprise investments, around 95% of organizations still see no measurable P&L impact. Only a small group of about 5% get significant value from their integrated AI pilots.
That's an important point, but it needs to be read correctly. The finding is not that GenAI can't create value; it's that most initiatives don't yet produce measurable economic impact. That's a maturity problem, not a technology problem.
McKinsey's data points the same way: only 17% of respondents say that 5% or more of their EBIT can be attributed to GenAI.
And here's where the classic mistake happens. When value fails to materialize, companies buy more tools. More licenses. Start more use cases.
The problem is rarely too few models.
The problem is that the technology isn't tightly enough connected to enterprise data, decision rights, compliance requirements, work routines, and employee competencies. MIT NANDA calls it the gap between generic AI adoption and real organizational integration.
That's exactly the gap I spend most of my time helping companies close.
Augmentation Works. Pure Automation Rarely Does.
Some of the strongest evidence comes from customer service. Stanford and MIT studied nearly 5,200 customer service agents at a Fortune 500 company. AI assistance boosted productivity by about 14% overall. For the least experienced, the improvement was around 35%.
Notice what actually happened. The AI didn't replace the employees. It functioned as real-time assistance: suggestions, phrasings, patterns from previously successful interactions.
The AI worked best as a tool that helped people work better, faster, and more consistently.
That's the whole point of augmentation. When AI is used to support employees in concrete work situations, less experienced workers can move up the learning curve faster. That's a completely different logic than the automation fantasy, where the system is assumed to take over work end-to-end.
I teach at ITU, and I see the same pattern with my students. Those who use AI as a sparring partner learn faster. Those who try to let AI do all the work learn nothing.
10/20/70: People Are Still the Main Investment
BCG has argued for years that AI transformation is an organizational problem. Their 10/20/70 principle: about 10% of value comes from algorithms, 20% from data and technology, 70% from people, processes, and change management.
70%.
Think about that next time someone suggests spending the entire budget on yet another tool.
BCG also shows that organizations with a broader transformation effort have significantly better adoption rates than those treating GenAI as an isolated technology project.
MIT Sloan supports this with their EPOCH framework, which names the human qualities AI struggles most to replicate: Empathy, Presence, Opinion/Judgment, Creativity, and Hope. These are the dimensions of work, relationships, and leadership where humans remain hard to replace. And people can get better at them with AI support.
Why Pilots Fail
One of the biggest reasons: companies massively underestimate the organizational part of the task.
Only 1% of leaders believe their companies are "mature" in the sense that AI is fully integrated into workflows and driving substantial business results. 1%. When you hold that up against the willingness to invest and the public hype, it's absurdly low.
Training is an obvious problem. BCG reported in 2024 that only 6% of companies had trained more than 25% of their employees. In 2025, only about a third felt properly trained.
Try to imagine that in another context. You roll out a new ERP system but only train a third of the users. No one would accept that. With AI, we do it all the time.
Data is also still a bottleneck. Many organizations have data, but it's not mature enough for safe, consistent GenAI use. When output hallucinates or can't be validated, the problem is often the data context and process discipline it's connected to.
The European Challenge Is Real
Europe still lags behind the U.S. on several key AI metrics. Eurostat reported 13.5% AI adoption among EU companies with at least 10 employees in 2024. That number can't be directly compared to McKinsey's 71% globally (different methods and definitions), but the direction is clear: Europe is growing, but the starting point has been lower.
At the same time, the regulatory reality is more complex. The EU AI Act applies gradually:
- Prohibited AI practices and AI literacy obligations: from February 2, 2025
- Rules for general-purpose AI: from August 2, 2025
- The majority of remaining rules: from August 2, 2026
- Certain rules for high-risk AI in regulated products: from August 2, 2027
GDPR adds another layer. A Data Protection Impact Assessment is not automatically required just because something is "AI." But under GDPR Article 35, a DPIA is required when processing is likely to result in high risk to the rights of data subjects. For many AI applications involving personal data, this means the legal and technical assessment needs to come in early. Not afterward.
It's easy to see regulation as a disadvantage. But it can also become a competitive advantage.
European companies that learn early to build AI solutions with strong documentation, governance, human oversight, and data protection can be in a stronger position when customers and authorities start demanding higher standards. Speed alone is rarely the decisive advantage in the long run.
Your Action Plan for 2026
If 2024 was the year of experimentation and 2025 was the year of the reality check, 2026 should be the year you invest seriously in people, processes, and work design. Technology matters. Everything surrounding it matters more.
Here are seven things you can do now:
1. Make AI a Leadership Responsibility
If AI still lives in the IT department, you have a problem. The organizations creating the most value have clear leadership ownership with defined goals and governance. Put someone on the leadership team who owns the AI strategy. Give them mandate and budget.
2. Choose Fewer Use Cases, and Choose Them Harder
Stop starting 20 pilots at once. Find the 2-3 use cases that are tightly connected to core processes and have measurable business goals. Go deep with them. That's where value comes from.
3. Use 10/20/70 as a Guiding Principle
Look at your AI budget. How much goes to technology? How much to people, processes, and change management? If the split is 70/20/10 instead of the reverse, you've found part of the explanation for why impact isn't materializing.
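The 10/20/70 check can be run as simple arithmetic. A minimal sketch of that sanity check, assuming you can bucket your spend into three categories (the category names and example figures below are illustrative, not taken from BCG or from any real budget):

```python
# Sanity-check an AI budget against BCG's 10/20/70 guideline.
# Category names and numbers are illustrative assumptions.

TARGET = {"algorithms": 0.10, "data_and_tech": 0.20, "people_and_process": 0.70}

def budget_split(budget: dict[str, float]) -> dict[str, float]:
    """Return each category's share of the total budget."""
    total = sum(budget.values())
    return {k: v / total for k, v in budget.items()}

def gap_to_target(budget: dict[str, float]) -> dict[str, float]:
    """Positive gap = underinvested relative to the 10/20/70 guideline."""
    split = budget_split(budget)
    return {k: TARGET[k] - split.get(k, 0.0) for k in TARGET}

# Example: a tool-heavy budget -- the common 70/20/10 inversion.
budget = {"algorithms": 700_000, "data_and_tech": 200_000, "people_and_process": 100_000}
for category, gap in gap_to_target(budget).items():
    print(f"{category}: {gap:+.0%} vs guideline")
```

On the inverted budget above, the check flags a 60-point underinvestment in people and process. The point isn't precision; it's making the imbalance visible in one glance.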
4. Design for Augmentation
Don't ask "what can we automate?" Ask "where can AI make our people better?" The best gains come when AI supports employees in concrete work situations. Remove the human too quickly, and you lose context, quality, and learning.
5. Train Far More Than You Think Is Necessary
When only a third of employees feel properly trained, it's no wonder adoption is uneven and "shadow AI" is growing. Training is not a one-time event. It's an ongoing investment. Build it into onboarding, team rituals, everyday work.
6. Keep Human Validation as a Design Principle
In knowledge-intensive and regulated workflows, human oversight is part of the quality system. It's the companies with clear processes for human validation that create the most value. Friction isn't always bad. Sometimes it's what prevents a catastrophe.
7. For European Companies: Build Compliance in from Day One
AI Act, GDPR, documentation, risk assessment, and governance should be part of the design from the start. Try to add it afterward, and it's too late and too expensive.
Conclusion
GenAI's problem in 2025 was never too little adoption. The problem was that adoption was confused with transformation.
Many companies got the tools in. Too few changed their work, data foundations, training, governance, and decision processes at the same pace.
Augmentation is a stronger strategy than uncritical automation. When AI is used to elevate people's judgment, speed, quality, and learning in real work situations, the chance of lasting value is far greater.
2026 should be the year companies invest less in the illusion of full autonomy and more in making their people, processes, and governance models stronger with AI.
It starts with a simple question: Who in your organization actually owns the AI strategy?
If the answer is "no one" or "it's a bit unclear," you have your first task for 2026.
Have questions about AI strategy for 2026? Contact me – I'm happy to help you get started.