On April 29, the Albanese government sat down with employer bodies and unions around the same table to ask a question no one yet has a clean answer to: what actually happens to workers when artificial intelligence enters the room?
Minister Amanda Rishworth has framed the forum around five themes: trust, capability, transparency, safety, and productivity. Reasonable enough. And yet one word on that list is carrying considerably more weight than the others. Transparency. It is the word that looks unambiguous until you try to define it in practice, and then it becomes, suddenly, the whole argument.
Because naming transparency as a theme inside a tripartite forum is not the same as establishing it as a right, and the distance between those two things is where most affected workers currently live.
In practice, transparency only functions as a protection when the person receiving the information has somewhere to take it. A worker told that an algorithm shaped a decision about their future, with no mechanism to question it, has been informed rather than protected. Closing that gap is what the forum has not yet been asked to do.
The Familiar Tension
The forum opened, predictably, amid sharp disagreement. Unions are pushing for enforceable protections; employer groups are warning against what they call premature intervention, citing the EU and Canada as cautionary examples of regulation that deterred investment. Minister Rishworth has positioned herself between the two camps: regulation is coming, she says, but deliberative rather than reactive. She has commissioned a gap analysis to test whether existing workplace laws are actually equipped for AI, or whether something new is required.
The tension itself is not new. France has been living it since the early 2000s, when digital tools entered workplaces at scale and the question of what employers could legitimately monitor, measure, and record became urgent in ways no one had quite anticipated.
MEDEF, CFDT, CGT and the other partenaires sociaux produced real things from those years of argument. The right to disconnect ("droit à la déconnexion"), written into law in 2017, recognised that technology-enabled overwork is a legal problem, not a management preference. Framework agreements on telework, professional training, and data governance followed, and they were worth having.
From my collective bargaining experience in France, the dialogue was only ever as useful as the obligation it eventually produced. Left to itself, tripartism is very good at generating well-drafted reports that no one is required to implement. In most of those negotiations, time could be managed: deadlines shifted, reviews were scheduled, implementation was phased. With AI, it cannot be. The technology is already inside organisations, reshaping work faster than any forum can meet.
What Europe Built
The EU AI Act classifies AI deployed in employment, performance management, and recruitment as high-risk (not as a theme, but as a legal category with specific obligations attached). Employers using these systems must meet transparency requirements that go well beyond a ministerial statement of intent. Workers have an enforceable right to know when consequential decisions about their working conditions involve automated systems.
France's Comité Social et Économique (CSE) has long held statutory consultation rights over technological changes affecting working conditions. Those rights now apply directly to AI. The CFDT, France's largest union, has gone further still: calling for a mandatory framework agreement on AI in the public sector, and for a formal registry of AI tools within social dialogue bodies, so that worker representatives can actually see what systems are in use, rather than being told that AI is "part of the process."
This consistency in the CFDT's position (from the first digital rights disputes to the current AI debate) is not accidental. When I was appointed as the employers' representative to negotiate collective agreements for the paper industry in France, the CFDT representative I faced across the table was Marylise Léon, now the union's national Secretary General. The issue then was the first chartes informatiques: the early attempts to draw a legal line between what employers could know about workers through digital systems, and what remained private. She argued, with characteristic persistence, for data rights that most employers on our side found premature at the time. The questions were narrower then, the arguments no less fierce for it.
The questions Léon and I argued over in a Paris meeting room are the same ones Australia's forum will eventually need to resolve, at a scale that neither side of that table could have anticipated.
The Blind Spot Australia Cannot Afford
Of everything Minister Rishworth said at the AFR Workforce Summit, the most important line received the least attention. It was not about job displacement. It was about something subtler: work intensification. AI, she observed, may not be eliminating jobs at scale yet, but it is compressing human effort into denser, faster, less forgiving cycles of output. The question she keeps returning to is not whether AI will replace jobs but whether it will make existing ones impossible to sustain at a human pace.
French biologist Olivier Hamant has spent years arguing that systems optimised purely for performance (maximum output, minimum resources) become structurally fragile over time. The biological term is robustness: the capacity to absorb variation and keep functioning, which requires precisely the kind of slack that performance optimisation eliminates. His work was not written with AI in mind, but the diagnosis fits. A workforce compressed by automated systems into ever-tighter output cycles is not becoming more productive in any durable sense; it is becoming more brittle. Rishworth has named the symptom and Hamant's framework explains why it is also a structural warning.
France recognised this pattern before there was vocabulary for it. The "droit à la déconnexion", mentioned above, was not, at bottom, about emails after hours. It was about the legal recognition that technology creates invisible overwork: the boundary between working and not working dissolves when the tools are always with you. Australia is approaching an AI-specific version of the same problem: systems that compress judgment, accelerate every decision cycle, and raise the cognitive demands of each task, without adjusting the hours or pace expected of the people running them.
Safe Work Australia is currently reviewing the occupational health implications of AI-linked work intensification. That review may prove more immediately significant than any legislative framework on the horizon. But a review without a binding obligation (again) is not the same as a right.
A Label Is an Obligation, Not an Aspiration
In an earlier piece on this blog, I argued for mandatory AI labelling by drawing on the appellation d'origine contrôlée, France's regulatory system under which a bottle carrying the name Champagne is guaranteed to have been produced to a specific and verifiable standard. No producer volunteers that information out of goodwill. The obligation is imposed because the person holding the bottle has no other way of knowing what they are holding.
A worker subjected to an AI-assisted performance review has no current legal entitlement to know the model was involved. A keyboard operator whose role is being quietly wound down by automation has no disclosure protection. An employee assessed for redundancy by a system she has never seen, operating on data she cannot access, has no avenue for challenge. Listing "transparency" as one of five forum themes does not change any of this, and naming an intention is not the same as creating an obligation.
The Business Council representative at the forum cited Europe as a cautionary tale of regulation that deterred investment. That argument conflates two very different things. What burdened European businesses, particularly smaller ones, was poorly designed product liability legislation applied to AI systems across entire supply chains. Workplace transparency is a different instrument entirely: it governs what a worker is told about a decision that affects them, not how a product is certified before it reaches market. A worker who knows an algorithm influenced a decision about their role cannot reverse it on that basis alone, but they can at least contest it on informed grounds.
What I have learned in every role where change sat at the centre of the strategy, chosen or forced, is that organisations which kept people informed about decisions affecting them navigated the transition with considerably less friction than those that did not. Workers who understood what was happening and why had something to engage with; those kept in the dark found other channels for their uncertainty, and those channels rarely served anyone well. The case for transparency in the AI workplace is not purely about rights; it goes to whether organisations can build the kind of resilience the transition actually requires. Which is precisely why the question of what the forum eventually commits to matters more than how it chooses to get there.
Australia is right to start through dialogue rather than legislation. France did the same, and some of what came from those years of negotiation was worth keeping. But France is also still working through regulatory gaps it created in the early 2000s by letting dialogue run well ahead of any binding standard. Australia can see those gaps forming. Whether the forum has the political will to do something about them (not in a report, but in what it eventually decides to make enforceable) is the question the next few meetings will begin to answer.
This article builds on earlier pieces published on this blog: AI Transparency: Why We Need Mandatory Labelling Before It's Too Late and The Liability Gap: How Australian Businesses Navigate AI Risks in an Unregulated Frontier.