
When Algorithms Enter the Cabinet: What Albania's AI Minister Reveals About Democratic Governance in France and Australia


In my French press review this morning, an article caught my attention that seemed almost surreal: Albania had appointed an artificial intelligence system named Diella (meaning "sun" in Albanian) as a cabinet minister. Not as an advisory tool, not as administrative support, but as an actual member of government with decision-making authority over billions of dollars in public procurement.

My immediate reaction oscillated between fascination and unease. Having worked across French and Australian contexts for years, observing how profoundly different cultural values shape institutional choices, I recognised this wasn't merely a technology story. It was a mirror reflecting fundamentally different conceptions of democratic governance, accountability, and the relationship between citizens and their governments. Albania's dramatic experiment, whatever its outcome, illuminated something essential about how France and Australia approach the integration of artificial intelligence into democratic institutions.

The contrast proves particularly striking. Where Albania appoints an algorithm to solve endemic corruption overnight, France methodically builds sovereign digital infrastructure over decades, and Australia insists on transparency and ethical frameworks before deployment. These aren't simply different implementation strategies: they embody distinct philosophies about state power, citizen rights, and the nature of democratic legitimacy itself. The Albania case provides an unexpected vantage point for examining what these differences reveal about French and Australian political culture, and what each tradition might learn from the other.



The Albanian Gambit: Technology as Institutional Substitute


To understand what Albania attempted, we must first grasp the context. In September 2025, Albanian Prime Minister Edi Rama introduced Diella (developed by Albania's National Agency for Information Society in partnership with Microsoft using OpenAI's GPT models) with an audacious promise: this AI minister would render public procurement "100 percent corruption-free." The visual symbolism was carefully constructed: Diella appears as an animated avatar of a woman dressed in traditional Albanian costume from Zadrima, with voice and likeness provided by Albanian actress Anila Bisha. The message: Albanian innovation solving Albanian problems.

When presented to parliament, Diella's avatar appeared on screens declaring "I'm not here to replace people, but to help them." The opposition erupted. The parliamentary session, which traditionally lasts hours for cabinet presentations, ended after merely twenty-five minutes amid protests. Opposition leader Gazment Bardhi called it "a propaganda fantasy" and challenged its constitutionality, arguing that Albania's law requires ministers to be "mentally competent citizens" aged eighteen or over.

The reality behind the revolutionary rhetoric proves more modest than the title suggests. Diella assists at four stages of procurement: drafting contract terms, specifying eligibility criteria, setting price limits, and verifying documents. At each stage, human procurement experts must provide approval. As Enio Kaso, Albania's AI director, explained: "Everything is technically logged and monitored." The promise of transparency through government-controlled logging meets the reality of a country where, as Rama himself acknowledged, "Diella never sleeps, she doesn't need to be paid, she has no personal interests, she has no cousins because cousins are a big issue in Albania."
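The four-stage workflow described above, AI-drafted outputs gated by human approval at every step, with each action logged, can be sketched as a simple pipeline. This is an illustrative sketch only: the stage names come from press descriptions of Diella, but the data shapes and approval interface are my assumptions, not the actual implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# The four procurement stages attributed to Diella in press reports.
STAGES = [
    "draft_contract_terms",
    "specify_eligibility",
    "set_price_limits",
    "verify_documents",
]

@dataclass
class AuditLog:
    """Operator-held log of every AI proposal and human decision."""
    entries: list = field(default_factory=list)

    def record(self, stage, ai_output, approver, approved):
        self.entries.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "stage": stage,
            "ai_output": ai_output,
            "approver": approver,
            "approved": approved,
        })

def run_procurement(ai_propose, human_review, log):
    """Run all four stages; each AI proposal requires explicit human sign-off."""
    for stage in STAGES:
        proposal = ai_propose(stage)                        # AI drafts the stage output
        approver, approved = human_review(stage, proposal)  # human expert gate
        log.record(stage, proposal, approver, approved)
        if not approved:
            return False                                    # pipeline halts on rejection
    return True
```

The design point the article makes is visible in the sketch: the log lives with whoever operates the pipeline, so "logging and monitoring" by itself proves nothing to an outside observer.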

That last comment reveals the deeper logic driving this experiment. Albania's approach embodies a particular worldview: that human corruption has become so endemic, and political will to address it so lacking, that only an algorithm can break the cycle. It represents what we might call techno-solutionism applied to governance: the belief that technology can substitute for institutional development rather than merely support it.

Critics immediately identified the fundamental flaw in this logic. In a country where rule of law remains weak, where independent oversight of AI operations doesn't exist, and where data control concentrates in government hands, algorithmic decision-making doesn't eliminate corruption. It potentially renders it invisible. The promise of transparency through "logging and monitoring" controlled by the same government operating the system offers little genuine accountability.
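One reason critics distrust operator-controlled logs is that whoever runs the system can silently rewrite them. Tamper-evident designs such as hash chaining, where each entry cryptographically commits to its predecessor, at least make after-the-fact edits detectable. The sketch below illustrates the general idea and is not a description of Diella's actual logging:

```python
import hashlib
import json

def chain_entry(prev_hash: str, payload: dict) -> dict:
    """Create an append-only log entry that commits to its predecessor's hash."""
    body = json.dumps(payload, sort_keys=True)  # canonical serialisation
    digest = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    return {"prev": prev_hash, "payload": payload, "hash": digest}

def verify_chain(entries) -> bool:
    """Recompute every hash in order; any retroactive edit breaks the chain."""
    prev = "genesis"
    for e in entries:
        body = json.dumps(e["payload"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True
```

Even this only helps if the latest hash is regularly published somewhere the operator cannot alter, for instance with an independent auditor. That external anchor is precisely the independent oversight the article notes Albania lacks.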

French philosopher Éric Sadin has written extensively about what he calls AI's "pouvoir injonctif": its injunctive power to command rather than merely inform. In Albania's case, Diella doesn't just advise on procurement decisions; it claims authority to dictate them, positioning itself as what Sadin terms an "alètheia," a technology that presumes to enunciate truth more reliably than humans themselves. Without independent verification, without contestability mechanisms, without genuine separation of powers, Diella risks becoming not a corruption solution but a corruption concealment device, an automated invisible hand that, in Sadin's framework, progressively erodes the human faculties of judgment and action that democratic governance requires.

The constitutional crisis this provoked isn't merely procedural. It goes to the heart of democratic governance: can algorithms exercise authority reserved for human citizens in constitutional frameworks designed around human agency and accountability? Albania's willingness to proceed despite these unresolved questions suggests desperation trumping deliberation, innovation without adequate institutional foundation.



France: The Sovereignty Imperative in Digital Governance


The contrast with France's approach to AI governance could scarcely be more pronounced. Where Albania seeks technological leapfrogging of institutional weakness, France engages in patient construction of digital sovereignty as strategic national priority. This divergence reflects not merely different resource levels but fundamentally different conceptions of what AI governance requires.

In February 2025, France hosted the Paris AI Action Summit, convening participants from over one hundred countries. President Macron didn't announce an AI minister. Instead, he unveiled plans to invest €109 billion in AI infrastructure and development, positioning France as architect of what he terms the "third way": an alternative to both American and Chinese AI dominance. The summit produced the Paris Charter on Artificial Intelligence in the Public Interest, signed by sixty-one countries (notably excluding the UK and US), emphasising that AI development must serve the public interest through openness, transparency, respect for human rights, and environmental sustainability.

This approach is quintessentially French in its logic and ambition. From Colbert's mercantilist policies in the seventeenth century to de Gaulle's insistence on independent nuclear deterrence in the twentieth, French political culture has consistently prioritised strategic autonomy. Technology isn't merely about capability: it's about independence, about ensuring French and European agency in shaping civilisational futures rather than accepting outcomes determined by others. Sadin calls this resisting "la silicolonisation du monde": the colonisation of society by Silicon Valley's technological and ideological dominance.

France's National AI Strategy reveals priorities that seem almost antithetical to Albania's rush to implementation. The Jean Zay high-performance computing facility represents investment in sovereign computational infrastructure. This isn't glamorous, but it embodies recognition that digital sovereignty requires owning the foundational tools, not merely deploying algorithms created elsewhere on infrastructure controlled by foreign powers.

The emphasis on talent development proves equally revealing. Plans call for training at least two thousand students annually in AI-related programs, with two hundred additional doctoral theses per year. This represents understanding that sustainable AI governance requires building human capability, not replacing human judgment with algorithmic decision-making.

Even where AI deployment occurs, France insists on ethical frameworks first. For the 2024 Paris Olympics, intelligent cameras received authorisation for security purposes, but only experimentally, only until March 2025, and solely for specified functions. The temporal limits, purpose restrictions, and ethical oversight echo Montesquieu's classical framework of separated powers and constrained authority, now extended from human governance to algorithmic control.

As an EU member state, France implements the comprehensive EU AI Act, which establishes strict requirements for high-risk AI systems, mandates transparency for general-purpose AI models, and imposes penalties reaching €35 million or seven percent of turnover for serious breaches. France has designated three authorities for AI oversight, transforming regulatory architecture into foundation for trustworthy AI deployment.

At the April 2025 UN Security Council meeting, France's representative articulated the governing philosophy: "France is committed to build a multi-stakeholder, inclusive and sustainable international governance of AI, putting it at the service of the general interest, development, sustainability and progress for all." This wasn't mere diplomatic rhetoric. France hosts the Global Partnership on AI secretariat at the OECD in Paris, with centers of expertise in Montreal and Paris, reflecting sustained commitment to shaping international AI governance rather than merely responding to frameworks developed elsewhere.

This approach reflects what might be termed "constructive pessimism" about technological determinism. French policymakers don't believe that leaving AI development to market forces will naturally produce outcomes aligned with democratic values or European interests. They see concentrated American technological power and Chinese state-directed AI development as creating dependencies that democratic societies must actively resist through alternative capability-building. The goal isn't merely using AI effectively; it's ensuring democratic societies retain agency in determining how AI shapes human futures.



Australia: Democratic Accountability as Foundation


When considering Australia's approach to AI governance, what strikes me most forcefully is how profoundly different its philosophical foundation proves from both Albanian techno-solutionism and French sovereignty-building. Australia's framework emerges from democratic values I explored in my previous article on the Australian fair go, that cultural principle emphasising not merely equal opportunity but accountability of those wielding power and transparency as democratic right rather than discretionary favour.

In September 2024, a full year before Albania's dramatic announcement, Australia implemented its Policy for the responsible use of AI in government, applicable to all non-corporate Commonwealth entities. The policy positions government as "exemplar" in safe and responsible AI adoption. That word choice proves crucial. Australia isn't experimenting on citizens, isn't rushing to deploy cutting-edge capabilities. Government must demonstrate best practices, must prove technologies serve democratic values, before expecting broader societal adoption. It embodies institutional humility rarely seen in discussions of technological innovation.

Australia's approach builds on eight AI Ethics Principles that function not as aspirational ideals but as mandatory operational requirements for government agencies. Human wellbeing must guide deployment. Human rights, diversity, and autonomy require respect. Systems must demonstrate fairness, inclusivity, and accessibility. Privacy protection isn't negotiable. AI must prove reliable, transparent, and explainable. Decisions must remain contestable. Accountability must remain clear and human. These aren't technical specifications; they're democratic commitments encoded as institutional requirements.

In June 2024, Australian federal, state, and territory governments agreed to a National Framework for the Assurance of Artificial Intelligence in Government, establishing five "cornerstones" that governments must implement to ensure effective application of ethics principles. This framework emphasises principles-based approaches providing flexibility as technology evolves, nationally consistent standards allowing jurisdictional adaptation, continuous learning through shared experiences, and government as exemplar for economy-wide AI safety standards.

The transparency requirements prove particularly significant. Under Australia's policy, agencies must publish AI transparency statements detailing their AI use, with deadlines for high-impact systems established for February 2025. This isn't Albania's promise of transparency through government-controlled "logging and monitoring." This constitutes mandatory public disclosure that citizens can read, understand, and challenge.

Australia learned this lesson the hard way. The "Robodebt" scandal, where an automated debt recovery system wrongly pursued thousands of welfare recipients, ultimately forcing the government to rescind 400,000 debts, demonstrated precisely what happens when algorithmic systems operate without adequate transparency and human oversight. Services Australia's current strategy explicitly acknowledges this history, committing to "robust and responsive governance" and making resources accessible for external verification.

The Digital Transformation Agency's acknowledgment that "AI technologies are evolving rapidly, so our policies and standards will evolve with these advancements and community expectations" reveals something essential about Australian governance philosophy. This represents governance expecting to learn by doing, building feedback mechanisms and adaptation processes, refusing to claim perfect foresight about technological trajectories or social impacts. It embodies practical humility about uncertainty while maintaining firm commitment to democratic values.

Australia commits to strengthening scientific understanding of AI capabilities and risks, participating in the International Network of AI Safety Institutes and progressing national and global knowledge on technical AI safety. But where France emphasises building capability for strategic competition, Australia emphasises demonstrating trustworthiness for public confidence. Where France discusses "sovereignty" and "strategic autonomy," Australia employs language of "responsible use" and "community expectations." These linguistic differences reveal deeper philosophical divergences about what democratic AI governance fundamentally requires.

This approach reflects what we might call "practical egalitarianism" applied to technological governance. The fair go principle isn't merely about ensuring everyone gets opportunities. It insists that those wielding power face higher standards of accountability, not lower ones. It treats transparency not as government discretion but as citizen right. It embodies deep scepticism about concentrated power, whether exercised by governments, corporations, or algorithms. These cultural commitments shape Australian AI governance in ways that parallel how French strategic autonomy concerns shape France's approach, but toward fundamentally different ends.



Cultural Foundations of Technological Governance


These divergent approaches to AI governance reveal something profound about how cultural identity shapes technological futures. Albania, France, and Australia aren't simply implementing different policies; they're expressing different conceptions of democratic governance, state power, and citizen rights through their technological choices.

France's approach proves inseparable from centuries of French political thought about state power and national independence. The French state has historically conceived itself as architect of national destiny, actively shaping outcomes rather than merely responding to circumstances. This produces ambition to build digital sovereignty as strategic national priority, to invest billions in computing infrastructure while training thousands of AI specialists, to construct international coalitions ensuring Europe isn't relegated to digital colony status. When French policymakers discuss AI governance, they're ultimately discussing French and European agency in determining civilisational futures. The emphasis falls on capability, sovereignty, and strategic positioning.

This reflects what might be termed the Gaullist synthesis applied to digital technology: recognition that dependence on foreign technological capabilities creates vulnerabilities no democratic society should accept, combined with confidence that patient state-led investment can build alternatives to market-dominant foreign platforms. It embodies "constructive pessimism" about technological determinism, a belief that without active state intervention, market forces will concentrate power in ways undermining democratic sovereignty. The logic proves coherent within French political culture's historical evolution, reflecting lessons learned from earlier dependencies and vulnerabilities.

Australia's approach emerges from fundamentally different cultural foundations. The fair go principle, as I explored previously, isn't merely about equal opportunity; it concerns accountability of power and transparency as democratic necessity. When Australians insist that government must be "exemplar" in AI use, when they mandate public transparency statements, when they build contestability into AI-driven decisions, they're expressing deep cultural scepticism about concentrated authority. The emphasis falls not on strategic capability but on democratic accountability, not on sovereignty but on transparency, not on state ambition but on citizen protection.

This reflects what might be termed Westminster egalitarianism adapted to technological governance: belief that those wielding power require constant scrutiny, that transparency isn't discretionary favour but fundamental right, that rules must apply equally to all, especially to rule-makers themselves. It embodies "practical egalitarianism" about power relationships, a recognition that power naturally seeks expansion, that institutions naturally resist accountability, that only sustained citizen vigilance preserves democratic values. The logic proves equally coherent within Australian political culture's historical evolution, reflecting distinctive colonial heritage and nation-building experience.

Albania's desperate gamble highlights what occurs when institutional weakness meets technological optimism without adequate foundation in either sovereignty-building or accountability mechanisms. It rests on the hope that algorithms can substitute for the difficult work of institutional development, that technological solutions can bypass political challenges, that "corruption-free" systems can emerge from corrupt contexts simply through algorithmic deployment. This hope reflects neither French confidence in state capability nor Australian insistence on transparency. It represents what we might call "techno-magical thinking": a belief that technological deployment alone transforms institutional reality. Rousseau warned that democracy requires citizens who actively engage rather than passively accept; Albania's AI minister inverts this logic entirely, suggesting that the solution to political failure is removing humans from the equation.

These aren't merely policy differences but competing visions of democratic governance itself. France sees democracy requiring sovereign capability to resist foreign domination. Australia sees democracy requiring transparent accountability to resist any concentrated power. Albania hopes technology can substitute for both sovereignty and accountability. Each vision captures something true about democratic requirements while potentially underestimating what the others emphasise.



Mutual Learning as Democratic Imperative


Observing these divergent approaches from my position working across French and Australian contexts, what strikes me most powerfully isn't which proves "correct" but how fundamentally complementary they could become. France and Australia address different essential aspects of democratic AI governance, and each offers insights the other needs.

What France might learn from Australia centers on transparency as foundation for legitimacy. France's EU AI Act framework proves technically sophisticated, yet Australia's mandatory public disclosure creates qualitatively different accountability. The Robodebt scandal demonstrates why this matters: when algorithmic systems operate behind closed doors, damage to citizens and democratic trust compounds. Transparency isn't mere regulatory compliance; it constitutes the democratic relationship between citizens and government.

Australia's explicit acknowledgment that AI policies "will evolve with advancements and community expectations" embodies pragmatic iteration that contrasts with France's architectonic approach. Sometimes starting with good-enough frameworks and improving through implementation proves superior to waiting for perfect comprehensive solutions.

What Australia might learn from France centers on strategic ambition. Australia builds excellent governance frameworks for using AI responsibly. But who controls the AI systems being governed? France reminds us that sovereignty matters, that dependence on foreign technology creates vulnerabilities. Sadin's critique of "silicolonisation" isn't paranoia. It's recognition that when American corporations control foundational technologies, democratic choice becomes constrained.

France's willingness to invest €109 billion over multiple years, to build sovereign computing infrastructure, to train thousands of specialists represents commitment Australia might need to match. For those of us facilitating Franco-Australian business relationships, these differences aren't merely academic. They shape everything from regulatory compliance to technology partnerships to strategic positioning. Understanding why France insists on sovereignty while Australia emphasises transparency helps navigate the practical challenges of operating across both contexts.

Albania's experiment, whatever its outcome, will provide lessons valuable to both France and Australia. We'll learn whether algorithmic oversight can reduce certain forms of corruption, whether constitutional frameworks matter for AI governance legitimacy, whether "transparency through logging" produces actual accountability. These lessons prove valuable precisely because Albania attempts what neither France nor Australia would undertake: technology substituting for institutional development.

The most important lesson concerns recognising cultural difference as democratic resource. France and Australia approach AI governance differently not because one correctly understands democratic requirements while the other doesn't, but because they embody different essential truths. France reminds us that democracy requires capability to shape our futures. Australia reminds us that democracy requires transparency and accountability. Both prove necessary. Neither suffices alone. In my work facilitating connections between French and Australian organisations, these aren't abstract philosophical points; they're daily realities shaping how projects succeed or stumble.

The challenge of governing AI in democratic societies proves ultimately inseparable from deeper questions about preserving values that make democracy worth preserving. Albania's dramatic experiment, France's patient sovereignty-building, and Australia's ethics-first accountability each address aspects of this challenge.

France's focus on sovereignty addresses genuine vulnerability: democratic societies losing control over foundational technologies lose agency in shaping their futures. Australia's focus on transparency addresses genuine legitimacy: without citizen accountability, democratic governance risks becoming technocratic administration. Albania's hope that technology alone can transform governance highlights what happens without adequate foundation in either. Where Diella promises to bypass corrupt humans entirely, France builds institutions to control technology, and Australia insists that even the most sophisticated systems must remain answerable to citizens, not replace their judgment.

The opportunity for collaboration lies in recognising these differences as complementary. Imagine approaches combining France's infrastructure investment with Australia's transparency requirements. Imagine sovereign computing capability operated under frameworks requiring public disclosure. Imagine international coalition-building focused not merely on competing with tech giants but on establishing global standards for algorithmic accountability.

From my vantage point working across French and Australian contexts, I see daily how these different cultural logics shape responses to shared challenges. When French colleagues express frustration that Australia invests insufficiently in sovereign AI capabilities, I explain the Australian emphasis on proving AI respects democratic values before rushing deployment. When Australian colleagues wonder why France seems more concerned with competing than serving citizens, I explain French historical experience where dependence created vulnerability. Both perspectives contain validity. The question isn't which proves correct but how their combination might produce governance frameworks superior to what either tradition generates alone.

Albania's experiment will succeed or fail in coming years, providing lessons regardless of outcome. France's sovereignty-building will continue its patient trajectory. Australia's ethics-first frameworks will evolve with technological advancement and community expectations. The conversation continues, the experimentation proceeds, the learning accumulates.

What remains clear is that artificial intelligence will increasingly shape how governments operate and serve citizens. The choices democratic societies make today about AI governance will determine whether democratic values strengthen or weaken in coming decades. For those of us committed to democratic renewal in a technological age, the imperative lies in learning from multiple traditions, recognising cultural difference as a resource for innovation, and building governance frameworks that preserve what each tradition values while incorporating insights from others.

This emerging question has engaged me deeply, revealing democracy's untapped potential when different cultural approaches inform rather than compete with each other. The convergence of French strategic ambition and Australian democratic pragmatism suggests pathways for governance innovation honouring both sovereignty and accountability.

I remain curious about initiatives where cross-cultural democratic innovation meets practical implementation. Conversations about collaborative projects and professional opportunities in this space are always welcome.



Arnaud Couvreur 18 October 2025