The Issue of AI: (Part 4) Urgent Calls for AI Control
URGENT CALLS FOR AI CONTROL
Ironically, the man most responsible for getting AI started is now one of its most vocal critics, warning about a grim future:
The British-Canadian computer scientist often touted as a “godfather” of artificial intelligence has shortened the odds of AI wiping out humanity over the next three decades, warning the pace of change in the technology is “much faster” than expected. Prof Geoffrey Hinton, who this year was awarded the Nobel prize in physics for his work in AI, said there was a “10% to 20%” chance that AI would lead to human extinction within the next three decades.
https://www.theguardian.com/technology/2024/dec/27/godfather-of-ai-raises-odds-of-the-technology-wiping-out-humanity-over-next-30-years
Clearly, existing restrictions are not adequate to prevent potential AI disasters that threaten humanity. In response to that threat, a movement to evaluate AI and to establish controls is developing among experts. In 2023, hundreds of prominent AI researchers signed a statement issued by the Center for AI Safety (CAIS): “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” In January 2025, an International AI Safety Report was issued listing demands for controls on AI development.
The 2025 Paris AI Summit produced the most emphatic demand so far for international control of AI, including a pause on AI development in order to evaluate the issues and prevent disaster. Stuart Russell drew a comparison with the pause imposed when breakthroughs in human cloning became possible: weighing the risks and ethical issues, scientists organized a call for a pause, with a possible prohibition, on cloning research. Dr. Russell's example helped to guide the discussion.
The leading AI companies in the US, including the ChatGPT developer OpenAI and Google, have made the development of artificial general intelligence – the theoretical state where AI matches human levels of intelligence at most cognitive tasks – an explicit goal of their work. Although this is one notch below ASI, some experts also warn it could carry an existential risk by, for instance, being able to improve itself towards reaching superintelligent levels, while also carrying an implicit threat for the modern labour market. The statement calls for the ban to stay in place until there is “broad scientific consensus” on developing ASI “safely and controllably” and once there is “strong public buy-in”.
https://www.theguardian.com/commentisfree/2025/feb/14/ai-godfathers-paris-industry-dangers-future
The resulting statement, the Declaration on Inclusive and Sustainable Artificial Intelligence, was signed by thousands of prominent AI experts, scientists and dignitaries representing some 60 countries (including France, China, India, Japan, Australia and Canada). However, the US and UK refused to sign. US Vice President J.D. Vance delivered a particularly strong objection to international regulation of AI, asserting that such oversight would interfere with innovation.
For many actual experts on AI, the ominous examples of AI “independence” suggest that the laissez-faire argument of Vice President Vance and other libertarians is completely inappropriate. AI is not just another clever innovation; it is a genuine and possibly imminent threat to humanity.
CONCLUSION: IS AI WORTHWHILE?
We need to determine, at an international level, whether AI's contributions outweigh its dangers. That determination would include consideration of AI's impact on the environment (its enormous demands for space, energy and water), jobs, cognitive vitality, human ability to resist AI manipulation, potential AI autocracy, and potential human extinction. We should perhaps also include an evaluation of the economic force of the AI bubble.
At present, as many experts have indicated, a pause on AI development is advisable, and probably necessary to protect humanity and the planet. Meanwhile, until adequate controls are applied, what viable actions could we take? Should we just say NO?
Public distrust and misgivings about AI are leading to a public OPT-OUT movement worth noting. Some students, workers and artists are trying to avoid using artificial intelligence tools, citing concerns about accuracy, privacy or the undermining of their own skills. According to a recent survey by the Pew Research Center, 50 percent of U.S. adults are more concerned than excited about the increased use of AI in everyday life, an increase from 37 percent in 2021.
An Opt-Out movement might be the most effective approach at this time. It still offers a choice to people, but it also encourages action on behalf of the world's population. Perhaps an Opt-Out movement should be considered for the following reasons:
- the necessity of international & permanent governmental controls on AI
- the unsustainable requirements of AI data centers for land, energy, and water
- the irreversible AI environmental impact
- the issue of copyright violations and the harvesting of personal data
- the problem of AI errors, misinformation, and “autonomous” disobedience
- the likelihood of massive job elimination, the encroachment of AI slop/content, and the limitations of AI agents
- the devastating impact on cognitive skills and on education (brain rot)
- the threat of political manipulation
- the particularly ominous military threat
- AND the general public distrust in AI
We need to use – and protect – our brains, our judgement skills, our concern for the world and our courage. We need to make a decision!
REFERENCES
URGENT CALLS FOR AI CONTROL
‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead (New York Times, 05/01/2023)
Big Tech Poised to Win Immunity Shield From State AI Regulation (The American Prospect, 11/20/2025)
AI: The Promise and the Peril (Sen. Bernie Sanders & Geoffrey Hinton, YouTube, 11/19/2025)
An Approach to Technical AGI Safety and Security
We urgently call for international red lines to prevent unacceptable AI risks
OpenAI’s o1 model sure tries to deceive humans a lot (Tech Crunch, 12/05/2024)
International AI Safety Report (October 2025)
US and UK refuse to sign Paris summit declaration on ‘inclusive’ AI (The Guardian, 02/11/2025)
Nobel winners and celebrities challenge Silicon Valley’s vision for the future (Washington Post, 10/22/2025)
Taming Silicon Valley (Gary Marcus, 2024)
CONCLUSION: Is AI Worthwhile?
Meet the people who dare to say no to artificial intelligence (Washington Post, 10/23/2025)
Why Some People Opt Out of Using AI
Concern and excitement about AI (Pew Research Center, 10/15/2025)

