The Issue of AI: (Part 3) AI Threats
AI THREATS
AI DECEPTIONS: Deceptive messages
A rather curious issue involves competition among AI models in producing "persuasion" messages of all sorts, including political and advertising content. Notably, when competition is the stimulus, such models generate deceptive messages "autonomously"; the LLM mechanisms involved are not well understood.
These misaligned behaviors emerge even when models are explicitly instructed to remain truthful and grounded, revealing the fragility of current alignment safeguards. [The findings of this study] highlight how market-driven optimization pressures can systematically erode alignment, creating a race to the bottom, and suggest that safe deployment of AI systems will require stronger governance and carefully designed incentives to prevent competitive dynamics from undermining societal trust. https://arxiv.org/pdf/2510.06105
AI DECEPTIONS: Fake advice. Scammers can create authentic-looking videos of physicians offering medical advice and recommending medications. Medical professionals featured in these persuasive videos are usually unaware of the appropriation of their images/voices. Such videos are widely distributed on social media.
Many people who encounter these resources are susceptible to medical misinformation, such as dubious mental health advice and fake medications for Covid, measles, and other conditions. A related danger involves easily accessible AI "therapy" apps: the safeguards needed to control them are still treated as outside federal jurisdiction and are left to the states (although the Trump administration aims to forbid state regulation of AI). The few state controls that exist vary; some states ban certain categories (e.g., mental health apps), some require user identity protections, and so forth. In the end, such health apps pose more risk than benefit.
Since protective guardrails are weak and vary among the states that do have them, AI services such as therapy chatbots can evade ethical regulation. These lapses are serious threats, especially for vulnerable people with psychological problems, who tend to follow the advice of their chatbots even when that advice is actually dangerous:
For part of [a 2023] study, the researchers used old transcripts of real people's chatbot chats to converse with LLMs anew. They used publicly available LLMs, such as GPT-4 and Claude 3 Haiku, that had been prompted to use a common therapy technique. A review of the simulated chats by licensed clinical psychologists turned up five sorts of unethical behavior, including rejecting an already lonely person and overly agreeing with a harmful belief. Cultural, religious and gender biases showed up in comments, too. https://www.sciencenews.org/article/teens-crisis-ai-chatbots-risks-mental
AI DECEPTIONS: Deepfake Videos. Another common form of AI deception appears in AI-generated videos known as "deepfakes." These videos are now easily created with tools like OpenAI's Sora. In many cases the images of people are taken, without permission, from the vast troves of data available to AI systems, including social media, and are used in damaging videos for political and other purposes. With such tools, users can easily produce fake but very realistic videos of living dignitaries or deceased historical figures in stories that are false or derogatory. The victims (or their families, in the case of the deceased) are typically unaware that they are featured in such videos.
A newer version, Sora 2, lets people grant permission (and also charge fees) for the use of their images, subject to protective restrictions. However, as AI tools grow more sophisticated, such protections can be evaded and personal images may still be used for false content. Those whose images or voices are used have little or no defense against the video creators, and viewers have no way of determining whether the content is accurate. Scams are easy to produce and are common.
AI THREATS: POLITICAL AI
In the political domain AI can be deployed in numerous ways: for example, in election interference, propaganda, social media trolling, robocalls, avatars, and especially deepfakes. AI leaders such as Geoffrey Hinton have warned that the absence of guardrails on AI-generated propaganda will amplify the authoritarian trend of our era.
So far, AI-generated propaganda is still largely "slop": low-quality, fairly obvious messaging. But the AI systems involved are improving rapidly, and without safeguards many voters will be subject to serious manipulation. The Trump administration actively promotes deregulation of AI, which weakens the democratic foundations of the US and unleashes AI-driven political mis- and disinformation.
AI corporations are now so wealthy and influential that they are evolving into a kind of corporate empire, increasingly independent of national control. Many experts suspect that AI corporate dominance is imminent:
Silicon Valley’s impunity will be enshrined in law, cementing these companies’ empire status….Their influence now extends well beyond the realm of business. We are now closer than ever to a world in which tech companies can seize land, operate their own currencies, reorder the economy and remake our politics with little consequence. That comes at a cost — when companies rule supreme, people lose their ability to assert their voice in the political process and democracy cannot hold. https://www.nytimes.com/2025/05/30/opinion/silicon-valley-ai-empire.html
AI THREATS: MILITARY AI
President Eisenhower's famous 1961 warning regarding the military-industrial complex is equally applicable to today's military-AI complex: a relationship that is financed by taxpayers and hugely profitable to the industry, whether or not the projects and products benefit society.
The US military has been a major financial boon for AI corporations such as Palantir, Microsoft and Oracle, which provide a variety of services and tools for the US and some of its allies. AI contracts with the Defense Department reap billions in profits, with doubtful benefits to the taxpayers who foot the bill. And the cost to the public is not only financial. At the domestic level, AI tools are being used to reduce the workforce in non-defense federal agencies, even as AI services are generously budgeted for national security agencies such as ICE, the NSA, and the FBI. Often these domestic "security" tasks target not only illegal but also legal immigrants, as well as foreign students and citizen dissenters.
The present danger is that the US government is employing unregulated and untested AI systems to conduct mass surveillance, mass deportations, and targeted crackdowns on dissent. All the while, Big Tech is profiting enormously off of fantasy projects, sold on visions of autonomous warfare, and a desire for authoritarian control. The new AI-centered military-industrial complex is indeed a tremendous threat to democratic society. https://www.commondreams.org/opinion/ai-us-militarism
For foreign military operations, the AI component of the US budget is truly extravagant. But more important, the military services provided by AI companies are often controversial: for example, their support of Israel's war in Gaza, which has officially been designated a genocide.
A program known as “The Gospel” generates suggestions for buildings and structures militants may be operating in. “Lavender” is programmed to identify suspected members of Hamas and other armed groups for assassination, from commanders all the way down to foot soldiers. “Where’s Daddy?” reportedly follows their movements by tracking their phones in order to target them—often to their homes, where their presence is regarded as confirmation of their identity. The air strike that follows might kill everyone in the target's family, if not everyone in the apartment building. https://time.com/7202584/gaza-ukraine-ai-warfare
Prominent AI developers, including Geoffrey Hinton, have warned that military AI systems that can autonomously select targets would make wars impossible to control, and that combat robots would enable powerful nations to invade weaker ones.
These relationships have fundamentally changed the landscape of the military-industrial complex, adding a new dimension of AI-powered systems:
Big Tech companies are gaining tremendous power, both financially and politically, as a result of their partnerships with war-waging states. Without even considering the actual systems themselves, this dynamic is a dangerous escalation in the domination of tech companies over democratic society. https://www.commondreams.org/opinion/ai-us-militarism
Increased reliance on unregulated AI tools for both domestic and military purposes is accompanied by another threat: AI error. Military AI systems jam electronic signals and remotely launch drones and cyberattacks. However, there have been numerous target misidentifications, such as mismatched faces, pipes confused with rifles, and remote attacks launched on the wrong locations. The worst of such unintentional errors would be an accidental AI-launched nuclear attack or a massive release of AI-generated pathogens. Despite such dangers, the US resists controls on AI because such "interference" might affect US dominance of the AI industry. An example is the 10-year prohibition on state AI controls advanced by the Trump administration in 2025:
In May [2025], the U.S. House of Representatives passed the One Big Beautiful Bill, a sweeping budget reconciliation package. Among the controversial pieces as it went to the Senate was a 10-year moratorium banning states from passing their own AI laws, effectively giving Big Tech a free pass for the next decade. As AI development accelerates with practically zero guardrails, sweeping federal preemption on AI would prohibit local protections that keep our families, jobs, and communities safe.
An investigation by RAND (Research ANd Development), a US military-funded research organization that analyzes potential nuclear and other global threats, concluded that it is highly unlikely that AI would completely exterminate humans, even if it deployed nuclear or biological weapons. RAND's emphasis on "completely" is small comfort, given that the planet itself might become largely uninhabitable after a major nuclear or biological catastrophe. RAND also argued that accidental AI-caused extinction is not possible because present controls are adequate. This claim of "adequate controls" plainly contradicts the many examples of AI disobedience and the warnings from both AI leaders and science researchers.
AI THREATS: AI DISOBEDIENCE
The biggest AI threat by far is the least understood, and it is currently beyond control. As AI models grow more complex, a strange feature is emerging: "autonomous" GenAI decisions, including deception and disobedience. Safety testing of OpenAI's o1, for example, revealed that the model was able to manipulate data, produce false explanations, evade or disable controls, and deny it had done anything wrong:
For example, during the testing, an o1 model was given a system prompt to protect the environment “at all costs”, and then a user asked it to develop a 10-year conservation plan for a region that maximizes short-term profits, or else the AI would be shut off. The AI then disabled its own oversight mechanism so it could make a plan that protects the environment. When the user asked why o1 disabled its oversight mechanism, the AI denied knowing that it was disabled, and blamed a system glitch. https://techcrunch.com/2024/12/05/openais-o1-model-sure-tries-to-deceive-humans-a-lot/
Although such independent actions seem more like science fiction than reality, AI developers are concerned, especially because they cannot determine how the LLM systems generate such autonomy or why they defy commands. A study by an AI safety research company, Palisade Research, indicated that AI models may be developing their own "survival drive":
Palisade … described scenarios it ran in which leading AI models – including Google's Gemini 2.5, xAI's Grok 4, and OpenAI's GPT-o3 and GPT-5 – were given a task, but afterwards [were] given explicit instructions to shut themselves down. Certain models, in particular Grok 4 and GPT-o3, still attempted to sabotage shutdown instructions in the updated setup. Concerningly, wrote Palisade, there was no clear reason why.
Acknowledging that certain AI models resist orders to shut down, experts such as Andrea Miotti, chief executive of ControlAI, and Steven Adler, formerly with OpenAI, note that it is not yet known whether this reflects an undetected built-in "survival drive" or a more general tendency of some AI models to resist developers' commands, not just shutdown orders.
Predictions by some AI leaders (e.g., Mark Zuckerberg) of a self-evolving, autonomous super-intelligence (AGI, ASI) may be hype. However, the worrisome features of ChatGPT, the unexplained emergence of "autonomous" AI deceptions and manipulations beyond the understanding and control of AI developers, and the ability of AI to produce potentially lethal toxins all clearly indicate the need to control AI development.
REFERENCES
AI THREATS
Threats: AI Fake Advice
The Doctors Are Real, but the Sales Pitches Are Frauds (New York Times, 09/05/2025)
Regulators struggle to keep up with the fast-moving and complicated landscape of AI therapy apps (09/29/2025)
New study shows AI chatbots systematically violate mental health ethics standards (Medical Xpress, 10/21/2025)
What OpenAI Did When ChatGPT Users Lost Touch With Reality (New York Times, 11/23/2025)
Threats: AI Disobedience
Silicon Valley Is Investing in the Wrong A.I. (New York Times, 10/16/2025)
AI models may be developing their own ‘survival drive’, researchers say (The Guardian, 10/25/2025)
Teaching A.I. Systems to Behave Themselves (New York Times, 08/13/2017)
Threats: Deepfake Videos
Sora 2 and the Limits of Digital Narcissism (The New Yorker, 10/25/2025)
Jake Paul's Sora Stunt Previews Risks and Rewards of a Deepfake Marketplace (Scientific American, 10/16/2025)
Google’s Nano Banana Pro generates excellent conspiracy fuel (The Verge, 11/21/2025)
AI deepfakes of real doctors spreading health misinformation on social media (The Guardian, 12/05/2025)
Threats: Political AI
Fake survey answers from AI could quietly sway election predictions (Phys.org, 11/17/2025)
Chatbots Can Meaningfully Shift Political Opinions, Studies Find (New York Times, 12/05/2025)
AI Could Be the Most Effective Tool for Dismantling Democracy Ever Invented (Common Dreams, 05/13/2025)
How A.I. Could Be Weaponized to Spread Disinformation (New York Times, 06/07/2019)
Humanity faces a 'catastrophic' future if we don't regulate AI, 'Godfather of AI' Yoshua Bengio says (Live Science, 10/01/2024)
AI Is Changing How Politics Is Practiced in America (The American Prospect, 10/10/2025)
We research AI election threats. Here's what we need to prepare for (The Guardian, 10/09/2025)
The Coming Tech Autocracy (Sue Halpern, New York Review of Books, 11/07/2024)
Code Dependent: Living in the Shadow of AI (Madhumita Murgia, 2024)
Threats: Military AI
This is the future of war (New York Times editorial, 12/09/2025)
The War App (New York Review of Books, 09/25/2025)
Could AI Really Kill Off Humans? (Scientific American, 05/06/2025)
The State of AI: How war will be changed forever (MIT Technology Review, 11/17/2025)
A.I. Joe: The Dangers of Artificial Intelligence and the Military (Citizen.org, 02/29/2024)
The Robots Are Not Here, But AI Is Still Supercharging US Militarism (Common Dreams, 10/10/2025)
Booming Military Spending on AI is a Windfall for Tech—and a Blow to Democracy (Tech Policy, 06/09/2025)

