We Get to Vote on It
Just re-read Daron Acemoglu and Simon Johnson's "Power and Progress" alongside David Shapiro's work on post-labour futures and Yuval Harari's warnings about "the useless class."
One thing became clearer: We don't know for certain whether GenAI will replace jobs at scale. But we do know this: right now, we're making choices about who captures the value.
Here's What Matters
The debate often focuses on one question: How many jobs will disappear? But that's probably the wrong question. History shows that technological shifts rarely play out as predicted. Some jobs disappear. Others transform. New ones emerge.
The more pressing question: Who decides how value is distributed when AI makes work more productive?
Right now? Mostly shareholders and platform owners. Not workers. Not society at large.
Augmentation vs. Automation – The Choice We Actually Have
My working hypothesis: Most organizations will get more value from AI as augmentation than pure automation. AI that makes humans more productive – not AI that replaces them.
The data supports this: 78% of organizations are experimenting with AI, yet over 80% report no bottom-line impact so far, and roughly 95% of GenAI projects fail to deliver – and it's rarely about the technology. It's about people, skills, and organization.
My 10/20/70 rule: 10% algorithms, 20% technology, 70% people. That's where the complexity lives.
Harari's Warning – With Nuance
Harari's framing is sharp: What happens when AI makes humans economically irrelevant? Not unemployed – irrelevant.
That's a real risk if we design systems purely for efficiency without considering human agency and purpose.
But "irrelevant" is not a law of nature. It's a design choice. And it requires us to actively choose models where humans remain central.
Europe Has a Window
While US tech runs on "move fast, break things, extract value," we can build different rules. The GDPR and the AI Act are already in place. But regulation alone won't close the gap between AI acceleration and responsible implementation.
Closing that gap also requires:
- Market mechanisms that distribute value more broadly
- Profit-sharing models that actually work
- Employee ownership as norm, not exception
- Portable benefits that follow people, not jobs
- Antitrust with teeth and real data portability
Acemoglu's Core Point
Technology is not destiny. Democratic institutions and competitive markets determine whether innovation benefits everyone – or concentrates wealth among the few.
Harari adds urgency: If we create a world where most people have no economic value, they'll have no political power either. Democracy collapses when citizens become economically superfluous.
The Trade-Off?
This requires regulatory courage and a strong moral compass. And accepting that "efficiency" measured only by shareholder returns isn't efficiency – it's accounting that ignores externalities.
But it also requires humility: We don't know exactly what the AI landscape will look like in 10 years. Maybe fewer jobs disappear than pessimists fear. Maybe more than optimists hope. The uncertainty is real.
My Take
Focus less on "will my job disappear?" and more on "how do we ensure humans capture value, regardless of which way it goes?"
That's a conversation for the boardroom and political institutions. Not as sci-fi scenario planning, but as concrete strategic risk management.
Because if we don't design post-labour economics through democratic choice and fair competition, concentrated power will design it for us.
What's your view – are we having this conversation early enough?
Have questions about AI strategy and the future of work? Contact me – I'm happy to help navigate these complex questions.