We stand at a critical juncture: leading AI companies have stated their goal of creating superintelligence within the coming decade - AI systems that can significantly outperform all humans on essentially all cognitive tasks. As a professional deeply invested in technology implementation and innovation, I am fascinated by how artificial intelligence provides unprecedented access to information and learning opportunities. Throughout my career, from industrial relations in France to hospitality management and technology implementation in Australia, I have witnessed firsthand how technology can revolutionise business processes and empower people.
However, my enthusiasm for technological progress is matched by my commitment to responsible development. The recent Statement on Superintelligence, released by the Future of Life Institute, calls for a prohibition on the development of superintelligence until there is broad scientific consensus that it can be done safely and controllably, together with strong public buy-in. I signed this statement not because I oppose innovation, but because I believe robust governance must evolve alongside technological capabilities, especially as we approach thresholds that could fundamentally alter humanity's relationship with the tools we create.
I also strongly support non-profit organisations like FLI, which play an essential role at a moment when politics often struggles to act with clarity and vision on these critical topics. These organisations provide independent research, convene experts across disciplines, and advocate for policies that balance innovation with public safety. Such functions are vital when governmental responses lag behind technological development. The need for independent oversight becomes even more acute as AI systems themselves increasingly generate false information, making it harder for policymakers and the public to distinguish fact from fiction in technology debates.
Australia's Growing AI Safety Momentum
Australia is increasingly recognising the need for balanced AI governance. Over 378 Australian experts, public figures, and concerned citizens have joined calls for the federal government to take decisive action on AI safety, emphasising that realising AI's benefits requires first confronting its escalating risks. This isn't about halting technological progress; it's about ensuring we develop systems we can control and understand.
Recent polling shows 57% of Australians believe AI creates more problems than it solves, and at least 20% worry about existential risks from AI within 20 years. This is not alarmism - it's prudent risk assessment from a community that recognises both the promise and peril of transformative technology.
The Australian Government has responded with concrete action. Recently, it released updated Guidance for AI Adoption, streamlining previous frameworks while maintaining alignment with Australia's AI Ethics Principles and international standards. The government has also introduced voluntary AI safety standards and released proposals for mandatory guardrails in high-risk settings. This regulatory momentum shows that thoughtful policymakers understand a crucial principle: governance frameworks cannot be an afterthought when dealing with technologies that advance at exponential rates.
Learning from France: A Tradition of Digital Rights Advocacy
France has a strong tradition of civil society organisations advocating for responsible technology governance. Organisations like La Quadrature du Net have fought for digital rights, privacy protections, and freedom of expression in the digital space since 2008. This French non-profit demonstrates how citizen-led advocacy can hold governments and corporations accountable, challenging surveillance practices and advocating for transparent, rights-respecting technology policies across the European Union.
More recently, France has pioneered efforts to ensure artificial intelligence serves all people equitably. While co-founding OTOOL, I signed the Charte Internationale pour une Intelligence Artificielle Inclusive (International Charter for Inclusive AI), an initiative led by Arborus and Orange. Launched under the patronage of the French Secretary of State for Digital Affairs, the charter commits its signatories to designing, deploying, and operating AI responsibly and inclusively. With over 137 signatories, including major corporations, tech companies, and institutions, it addresses a crucial dimension of AI safety: ensuring diversity in AI development teams and controlling discriminatory biases in data and algorithms.
The Arborus Charter demonstrates that responsible AI isn't just about preventing catastrophic risks - it's about building systems that reflect human values of equality, fairness, and inclusion from the ground up. My support for both this initiative and FLI stems from the same conviction: we must shape technology to serve humanity, rather than allowing technology to reshape humanity without our informed consent.
Understanding the Fundamental Difference
In my experience implementing technology across diverse sectors, I've learned that even the most powerful tools require human oversight to manage their risks effectively. But superintelligence represents something categorically different from the AI tools we use today.
Current AI systems (including the sophisticated models we interact with daily) are tools that augment human capabilities. When I use AI to draft code, analyse data, or generate reports, I remain firmly in control. I review outputs, catch errors, approve decisions, and maintain oversight. These systems excel at specific tasks but have clear limitations. They require human judgement to use effectively. Critically, they cannot operate autonomously, cannot set their own goals, and cannot improve themselves beyond their designed parameters. If an AI chatbot generates misinformation, a human can catch it, correct it, and prevent harm.
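To make that oversight concrete, here is a minimal sketch in Python of the pattern I rely on in practice: the model's output is only ever a draft, and nothing ships until a named reviewer explicitly signs off. The generate_draft function is a hypothetical placeholder, not any particular vendor's API.

```python
# A minimal human-in-the-loop gate: AI output stays a draft until a named
# person explicitly approves it. generate_draft stands in for whatever
# model call a team actually uses.

from dataclasses import dataclass


@dataclass
class ReviewedOutput:
    text: str
    approved: bool
    reviewer: str
    notes: str = ""


def generate_draft(prompt: str) -> str:
    # Placeholder for a real model call (an API client, an internal service, etc.).
    return f"[AI draft responding to: {prompt}]"


def human_review(draft: str, reviewer: str) -> ReviewedOutput:
    # In a real workflow the draft would surface in a review UI or ticket;
    # here we simply ask on the command line.
    print("---- AI DRAFT ----")
    print(draft)
    decision = input(f"{reviewer}, approve this draft? [y/N] ").strip().lower()
    notes = "" if decision == "y" else input("Reason for rejection: ")
    return ReviewedOutput(draft, decision == "y", reviewer, notes)


def produce_report(prompt: str, reviewer: str) -> str:
    reviewed = human_review(generate_draft(prompt), reviewer)
    if not reviewed.approved:
        raise ValueError(f"Draft rejected by {reviewed.reviewer}: {reviewed.notes}")
    return reviewed.text


if __name__ == "__main__":
    print(produce_report("Summarise this week's incident tickets", reviewer="duty analyst"))
```

The design choice matters more than the code: the human decision is an explicit, recorded step, not an optional glance at the output.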
Superintelligence would fundamentally change this relationship. By definition, it would surpass human cognitive abilities across essentially all domains, not just narrow tasks. This creates a qualitative difference: we would be creating systems we cannot fully understand, cannot reliably predict, and potentially cannot control. Imagine trying to oversee an entity that thinks faster, learns more efficiently, and reasons more effectively than any human expert, across every field simultaneously.
The challenge isn't malicious AI. It's that superintelligent systems could pursue their assigned goals with extraordinary competence while remaining fundamentally indifferent to human wellbeing - either because we failed to specify our values precisely enough, or because human values are too nuanced and context-dependent to encode fully in algorithms. Research already demonstrates AI's dual-use potential: systems designed for beneficial purposes can readily be turned to harm. In one study, researchers repurposed a drug discovery AI to propose 40,000 candidate toxic molecules in under six hours. Now imagine that capability amplified by superintelligence operating at speeds and scales beyond human intervention.
This is why the Statement on Superintelligence has garnered such broad support, with over 3,000 signatures, including AI pioneers Geoffrey Hinton and Yoshua Bengio, Apple co-founder Steve Wozniak, business magnate Richard Branson, and bipartisan political figures such as former US National Security Advisor Susan Rice. These are not technophobes; they're people who understand the technology deeply and recognise that we're approaching a threshold that demands pause and preparation.
The Reliability Challenge: A Warning About Losing Control
Today's AI reliability issues reveal why the leap to superintelligence requires extraordinary caution. If we cannot ensure accuracy in systems firmly under human oversight, how can we ensure safety in systems operating beyond human comprehension?
AI systems, even the most advanced ones, regularly generate "hallucinations" - confident but false information. In February 2025, Google's AI Overview cited an April Fools' satire about "microscopic bees powering computers" as factual. In October 2025, a $440,000 report submitted to the Australian government by one of the Big Four consulting firms was found to contain AI-generated hallucinations, including non-existent academic sources and fabricated quotes from Federal Court judgements.
The problem is worsening, not improving. Studies show that AI hallucination rates nearly doubled, from 18% in August 2024 to 35% in August 2025. Other research found that about 47% of references provided by ChatGPT were fabricated outright, while another 46% cited real sources but drew incorrect information from them. Even OpenAI acknowledges that hallucinations are mathematically inevitable in current AI architectures: not a bug to be fixed, but a fundamental limitation of how these systems work.
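Even simple verification habits expose this failure mode. As one small illustration, the sketch below (Python, standard library only, querying the public Crossref API at api.crossref.org) checks whether a DOI cited by an AI system actually resolves. It catches wholly invented citations, though not real papers quoted for claims they never made; the example DOIs are placeholders, not real references.

```python
# A simple spot-check for AI-supplied citations: confirm each DOI at least
# resolves in Crossref before trusting the reference. This catches wholly
# fabricated DOIs, though not real papers cited for the wrong claim.

import urllib.error
import urllib.request


def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI."""
    url = f"https://api.crossref.org/works/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # a 404 means Crossref has never seen this DOI


if __name__ == "__main__":
    # Replace with DOIs extracted from the AI-generated text you are checking.
    suspect_dois = ["10.1000/example-doi-1", "10.1000/example-doi-2"]
    for doi in suspect_dois:
        verdict = "found in Crossref" if doi_exists(doi) else "NOT found - verify manually"
        print(f"{doi}: {verdict}")
```

A check like this takes minutes to run, yet it is still a human deciding what to do with a flagged citation - which is precisely the relationship superintelligence would upend.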
These reliability failures reveal a critical insight: current AI makes mistakes humans can catch because humans remain smarter than the AI across most domains. We can fact-check outputs, recognise implausible claims, and maintain oversight. But superintelligence would flip this relationship. When AI becomes smarter than humans across all cognitive tasks, who checks the superintelligence's work? Who catches its "hallucinations"? Who maintains oversight when we lack the capacity to understand its reasoning?
If today's AI - firmly under human control - generates false information at rapidly increasing rates despite billions invested in safety, we should be profoundly cautious about creating systems that operate beyond our capacity for oversight. Current AI safety assessments show capabilities are accelerating faster than risk-management practices, with the gap between firms widening despite voluntary pledges: "Only $1 is spent on ensuring AI systems are safe for every $250 spent making them more powerful".
These questions of AI reliability, misinformation risks, and best practices for verification warrant continued attention from researchers, policymakers, and practitioners alike. For now, this underscores a fundamental principle: we must establish robust safety frameworks before, not after, we create systems we cannot control.
Conclusion: Choosing Responsible Progress
The goal of AI should be creating powerful tools that enhance human capabilities and solve pressing problems. We can achieve AI-powered medical breakthroughs, scientific discoveries, and educational innovations without racing toward autonomous superintelligent agents that operate beyond human control.
The Statement on Superintelligence represents a commitment to progress we can actually control. It's about choosing the future we want to build - one where technology amplifies human values, where innovation proceeds with wisdom, and where we refuse to sacrifice safety for speed. Signing this statement aligns with my values of responsible stakeholder engagement and sustainable transformation. The evidence supports this approach: Australia's regulatory development demonstrates that responsible AI governance isn't just ethically sound - it's economically necessary to unlock AI's estimated $600 billion potential for the Australian economy.
As professionals shaping our technological future, we have a responsibility to advocate for frameworks that ensure AI remains a tool for human flourishing. This doesn't mean rejecting progress - it means championing thoughtful, inclusive innovation that considers consequences and benefits everyone.
I encourage fellow technology professionals, business leaders, and concerned citizens to learn more about these initiatives and add their voices to the conversation. Our collective future depends on the choices we make today.