Beyond Incremental Gains: Why AI Demands We Reimagine Work, Not Just Improve It


Recently, I had the opportunity to exchange views with Perth business leaders at a CEO Institute meeting. Listening to my fellow members, I realised the room was divided. Not openly, not dramatically, but in that particular way business leaders telegraph disagreement: through careful word choices, through pauses that speak volumes, through the gap between what's said and what's left unsaid.

We were discussing AI's impact on our organisations, and the divide was stark. On one side sat those who spoke of AI as the next great revolution, a force that would reshape everything we thought we knew about work, productivity, and value creation. On the other side were the sceptics, executives who'd seen too many 'transformative' technologies come and go, who remembered when big data was going to change everything, when the cloud was revolutionary, when mobile was the future. To them, AI was simply the latest in a long line of expensive gadgets dressed up in revolutionary rhetoric.

I listened, observed, and recognised something familiar. This wasn't really a debate about technology at all; it was a debate about imagination, about whether we were witnessing incremental improvement or fundamental rupture.

This morning, I looked at my daily French press review from my friend Alberic. There, in David Barroux's column in Les Échos, was the articulation of precisely what that Perth boardroom couldn't quite grasp. Barroux wrote: 'AI is a major revolution insofar as the progress it is supposed to enable is likely to profoundly change the situation on many fronts. This is not an innovation that will allow incremental improvements but rather genuine disruptions.'

His key insight followed: 'If AI lives up to its promises, its potential is not to allow us to do a little better but to do differently and therefore, potentially, much better.'

I put down my coffee. That was it. That was the divide in that boardroom, and it's the divide I'm seeing everywhere I look.


The Incremental Trap


The problem reveals itself in the questions being asked. Most organisations are asking the wrong question. They ask: 'How can AI make us 15 percent more productive?' or 'Can AI reduce our processing time by 20 percent?' or 'Will AI help us respond to customers faster?'

These aren't bad questions. They're simply small questions. They ask about optimisation rather than transformation. When you ask optimisation questions, you inevitably get optimisation answers. You'll deploy AI to do essentially what you're already doing, just marginally better.

The organisations treating AI as a revolution are asking fundamentally different questions: 'What becomes possible if we reimagine this process entirely?' or 'If we weren't constrained by how we've always done this, what would we do instead?' or even more radically, 'Should we be doing this at all?'

This distinction matters more than any technical specification, any model capability, any vendor promise. AI's real power lies not in making bad processes slightly less bad but in revealing that we might not need those processes at all.

Consider what's happening across the world. In February 2025, the AI Action Summit in Paris brought together representatives from over a hundred countries. One of its five core working groups focused specifically on the 'Future of Work'. Not 'productivity enhancement in the workplace'. Not 'AI tools for efficiency'. The Future of Work. That language matters because it signals an understanding that something fundamental is shifting.

Meanwhile, Australia's Jobs and Skills Australia released a landmark study in August 2025 examining generative AI's impact on the workforce. Their conclusion? AI will augment rather than replace human work. What I find most telling about this study goes beyond the headline. The research emphasises that this augmentation will fundamentally reshape how we work, requiring new skills, new organisational structures, new ways of thinking about value creation. The same job with better tools? No. The job itself evolving into something different.

In Canada, ADP's 2026 Workplace Trends report reveals organisations struggling to balance 'innovation with human-centred practices'. That struggle is instructive, suggesting that companies know something fundamental is changing, but they're not quite sure how to navigate it. The pull of transformation meets the comfort of familiarity.

And perhaps most striking: research from the World Economic Forum indicates that while nearly all companies are investing in AI, only one percent believe they've reached maturity in their AI implementation. One percent. Despite the investment, despite the pilots, despite the proof-of-concepts, ninety-nine percent of organisations know they're still at the beginning of understanding what this technology means for them.

Why? Because the barrier isn't technological. The barrier is conceptual: the inability to move from optimisation thinking to transformation thinking.


The Values Question Nobody's Asking


Here's where this gets personal for me. I've spent the last several months thinking deeply about AI ethics and governance, not as abstract principles but as practical frameworks for organisations implementing these systems. That work has led me to develop an approach centred on five core values: transparency, accountability, human dignity, participation, and equity. You can read more about this framework on my site.

But what I'm discovering, both in my consulting work and now in an exciting new project I'm involved with here in Perth through TRICORE TECH, is that most organisations aren't starting with values. They begin with capabilities. They ask 'What can AI do?' before asking 'What should AI do?' or 'How should AI change what we do?'

This matters because transformation without values becomes disruption for its own sake. And disruption without values tends to concentrate power, diminish agency, and widen existing inequalities. We've seen this pattern before with previous technological revolutions. We don't have to repeat it.

I can't say too much about the TRICORE TECH project yet (we're still in the early days), but I can tell you this: being involved in a genuine AI implementation from the ground up, helping shape not just what we build but why and how we build it ethically, is exactly the kind of work that excites me. It's volunteer work, driven by conviction rather than compensation, and it reflects my genuine belief that we're at a pivotal moment. We get to help define what responsible AI transformation looks like. That's not just intellectually interesting; it carries moral urgency.


What Real Transformation Requires


So what does it actually look like to move from incremental thinking to transformational thinking? Based on my work with organisations across different countries, I've come to believe it requires answering five uncomfortable questions honestly. 

The first is the "Honesty Question": are we truly open to doing things differently, or do we just want to do the same things faster? Most organisations will say they want transformation, but their actions reveal a preference for optimisation. They want to digitise existing workflows, not question whether those workflows should exist. Real transformation begins with intellectual honesty about what we're actually willing to change.

The second is the "Agency Question": how do we ensure that those whose work will be transformed have voice and choice in that transformation? This extends beyond ethics (though ethics absolutely matter here) to effectiveness itself. The people doing the work often understand its complexity, its pain points, its possibilities far better than those designing the AI systems meant to transform it. Transformation imposed from above tends to miss crucial context. Transformation designed participatively tends to work better. 

The third is the 'Values Question": what principles are non-negotiable as we implement AI? For me, these are transparency, accountability, human dignity, participation, and equity. For your organisation, they might be different. But here's what matters: identifying them before you start, not after things go wrong. Values don't constrain innovation. They provide guardrails that allow you to innovate boldly without causing harm.

The fourth is the "Capability Question": are we investing in human development as much as technological deployment? I see organisations spending millions on AI platforms while offering employees a two-hour training session on "how to use ChatGPT". That approach lacks seriousness. If AI will genuinely transform work, then workers need deep, sustained investment in developing the skills to work effectively alongside these systems,and equally important, the skills to do the distinctly human work that AI can't replicate. Critical thinking. Emotional intelligence. Ethical reasoning. Creative synthesis. These matter more in an AI-augmented workplace, not less. 

The fifth is the "Courage Question": are our leaders willing to make decisions that might disrupt short-term metrics for long-term transformation? This is perhaps the hardest question because it asks leaders to act against their incentive structures. Transformation often means short-term productivity dips as people learn new ways of working. It means questioning processes that currently 'work' even if they're not optimal. It means admitting that what got you here won't get you there. That takes courage, and courage is always in shorter supply than capital.


The Global Reality Check


Let me ground this in what's actually happening right now, across different contexts and cultures, because I think the patterns are instructive.

In France, the commitment to understanding AI's workplace impact is serious and sustained. The February 2025 AI Action Summit wasn't a talking shop; it was a structured effort involving government leaders, civil society, private sector representatives, and academic researchers from over a hundred countries, all focused on concrete actions. The 'Future of Work' track specifically emphasised the need to 'enhance shared knowledge on AI's impacts in the job market' and to 'better anticipate AI implications for workplaces, training and education'. This represents systems-level thinking about transformation, not tactical thinking about tools.

Australia offers a fascinating counterpoint. The Jobs and Skills Australia study found that while AI will likely augment rather than replace jobs, the transformation will be neither automatic nor equitable. Older workers, First Nations Australians, and people with disabilities face disproportionate risks due to 'occupational concentration and digital access gaps'. Women-dominated occupations show higher automation exposure. The technology itself hasn't failed here. Rather, this reminds us that transformation always has distributional consequences, and without deliberate intervention, those consequences tend to reinforce existing inequalities rather than remedy them.

In Canada, organisations are grappling with what it means to keep work 'human-centric' in an AI-driven world. The ADP Canada research revealed that less than half of employers rate their onboarding and hiring processes as highly efficient, and more than half lack confidence in capturing employee feedback or understanding employee sentiment. Now add AI to that context. You can see the challenge: organisations that haven't mastered the human dimensions of work are now trying to figure out how to integrate technology that will fundamentally reshape those dimensions.

And in Europe more broadly, research conducted for the European Parliament found that a quarter of workplaces are already using algorithms or AI to automate decisions traditionally made by managers: work scheduling, task allocation, performance evaluation, even recruitment. That percentage is expected to grow rapidly over the next decade. This is transformation happening in real time, often without the values frameworks, the worker participation, or the ethical oversight that should accompany it.

What strikes me across all these contexts is the gap between awareness and action. Everyone knows something significant is happening. Yet the path from awareness to genuine transformation remains treacherous: transformation that respects human dignity, involves workers in design decisions, and builds on rather than undermines social equity is happening slowly, unevenly, and often poorly.


Why Most Transformations Fail


Here's what I'm learning from watching organisations attempt AI transformation: the failures aren't technical. I've yet to see an AI transformation fail because the algorithms weren't sophisticated enough or the computing power wasn't sufficient. The failures are human, organisational, cultural.

They fail because leadership can't articulate a compelling vision beyond 'we need to use AI or we'll fall behind'. Vision requires more than buzzwords. It demands painting a picture of what becomes possible, what problems we can solve, what value we can create that we can't create now. Without that, AI implementation becomes a compliance exercise, something we do because everyone else is doing it, not because we understand its transformative potential. 

They fail because organisations treat AI as a technology project rather than a change management challenge. They staff it with data scientists and engineers but not with organisational psychologists, not with change management specialists, not with ethicists. They optimise for technical sophistication while underinvesting in the human dimensions that determine whether the technology actually gets used, and used well.

They fail because there's no psychological safety for experimentation. Transformation requires trying things, having some of them fail, learning, and trying again. But most organisational cultures punish failure, or at best, tolerate it grudgingly. You can't transform under those conditions. You can only optimise, because optimisation is predictable. Transformation, by definition, ventures into uncertainty.

They fail because they mistake deployment for adoption. Just because AI tools are available doesn't mean people use them effectively, or use them at all. I've seen organisations celebrate successful 'AI rollouts' while usage data reveals that only a fraction of employees engage with the tools, and those who do often use them in superficial ways that capture little of their potential.

And perhaps most fundamentally, they fail because they're not actually trying to transform. They're trying to optimise while using the language of transformation. They want the prestige of being 'AI-forward' without the disruption of genuine change. They want transformation's benefits without transformation's costs: the costs in time, in resources, in uncertainty, in the discomfort of questioning established practices.


The Revolution Is Here


Let me return to that CEO Institute discussion, because I think it captures something important about where we are right now.

The divide in that room wasn't really between optimists and sceptics about AI. It was between those asking 'How much faster?' and those asking 'How fundamentally different?' The first question leads to marginal gains. The second might lead to revolution, but only if we have the courage to mean it, and the values to guide it.

Because here's what I couldn't say in that boardroom but can say here: some of the jobs and processes you're trying to make 'AI-enhanced' probably shouldn't exist at all. AI's real gift lies not in making bad work more efficient but in revealing that the work was unnecessary in the first place. The technology shows us where we've built elaborate systems to manage problems that emerged from other elaborate systems, where we're doing things because 'that's how we've always done them', not because they create genuine value.

But you can only see that if you're willing to ask uncomfortable questions. Most aren't. That's why the revolution will happen to them, not with them.

The organisations that thrive won't be those with the most sophisticated AI. They'll be those who understand that AI is giving us permission to reimagine everything. They'll be those who start with values, who bring workers into design decisions, who invest in human capability as seriously as they invest in technology, who have the courage to pursue transformation even when optimisation would be easier.

And they'll be those who understand that in an AI-transformed world, the values that matter most (transparency, accountability, human dignity, participation, equity) matter more, not less. These values don't constrain what we can do with AI. They provide the foundation for doing it well.


Your Choice, Right Now


So here's my challenge to you: Go back to your organisation and examine your next AI initiative through this lens. Ask yourself, and ask your team: If this succeeds beyond our expectations, will it have made us incrementally better at what we do, or will it have freed us to do something we couldn't do before?

If the answer is just 'incrementally better', you're spending transformation money on optimisation problems. That might be fine: optimisation has value. But call it what it is. Don't confuse it with transformation.

And if you genuinely want transformation? Then start by asking those five uncomfortable questions: Are we being honest about what we're willing to change? Are we giving agency to those whose work will be transformed? What values are non-negotiable? Are we investing in human capability? Do we have the courage to disrupt ourselves?

The revolution isn't coming. The revolution is here. The question is whether you'll help shape it or simply endure it. Whether it will happen with you or to you. Whether it will be guided by values or just by velocity.

I'm choosing to be in the middle of it, through my work with TRICORE TECH and through the broader thinking I'm doing about AI ethics and governance. The work is hard. It raises more questions than it answers. It requires confronting uncomfortable truths about power, about change, about what we're willing to sacrifice and what we must preserve.

But I can't imagine more important work right now. We're not just implementing technology. We're reimagining work itself. And if we do that with wisdom, with values, with genuine respect for human dignity and agency, we might not just do things differently.

We might do them much, much better.


Arnaud Couvreur 17 November 2025