The Issue of AI: (Part 1) AI Basics
AI Basics: GenAI, AGI and ASI
Generative AI (GenAI)
Artificial Intelligence (AI) is a synthetic version of human cognition, a computational system designed to imitate mental processes. The foundation of the current version, Generative AI (GenAI), is the “neural network,” a class of complex algorithms loosely modeled on the brain’s cognitive mechanisms. Neural networks were largely the product of two researchers working independently. In 1982 John J. Hopfield introduced a network that could store and retrieve information in a brain-like manner; and in 2012 Geoffrey Hinton (the “godfather” of AI) and his students demonstrated the deep neural networks that are the basis of today’s artificial intelligence. Dr. Hinton went on to lead AI research at Google but left in 2023 to publicize the need for controls on AI. In 2024, the two professors were awarded the Nobel Prize in Physics.
To achieve this brain-like capability, the deep-learning neural network system is trained on massive amounts of data harvested mostly from the Internet, comprising many varieties of text, images, audio, and almost anything else. With this training, described below, AI systems learn to generate all sorts of material and perform many tasks.
The current AI model is Generative AI (GenAI), which is the base for today’s AI systems, including search engines, video generators, chatbots, voice assistants (such as Siri), and autonomous cars. GenAI’s text-producing component is a subset called Large Language Models (LLMs). These models are built from huge collections of language samples, trained to match patterns that are encoded in sophisticated algorithms.
Other GenAI outputs, such as images, videos, and music, are produced by different systems; the data for each category is subjected to the same kind of pattern training, resulting in algorithms associated with tasks such as video production.
While most AI programs and apps require user commands, a group of systems described as “agentic” operate with a high degree of independence. A good example is the Waymo car, an autonomous vehicle that navigates entirely on its own. Other “agents” can provide assistance (arranging meetings, for example) and even act as co-pilots.
GPT. The most advanced innovation of the neural network is the model known as the Generative Pre-trained Transformer (GPT). Like other GenAI models, GPTs are trained on huge datasets to “learn” patterns, but the transformer feature adds much more power. This is evident in GPT-based chatbots such as OpenAI’s ChatGPT and Anthropic’s Claude, which can engage in very realistic conversations, although sometimes with “hallucinations” (incoherent or nonsensical responses). GPTs can also produce other forms of text and images.
The chatbot innovation can be very useful, but there is an issue: the emotional dependency that chatbots encourage in their human users. Vulnerable people may even come to believe their chatbots are human, or better than human. Developers might exploit such tendencies to manipulate large sectors of a population.
An example of this potential for manipulation is Elon Musk’s Grok chatbot, which has been trained to project Musk’s personal views, including misinformation. For example, a climate scientist asked Grok: Is climate change an urgent threat to the planet? Grok’s reply acknowledged scientific findings about climate change, but also presented the views of climate denialists as if they were legitimate.
AI Training. The material for AI begins with raw data scooped mostly from the Internet (text, images, music, etc.). This must be sorted into datasets, organized for patterns, and “trained” (including labeling, prompts, and feedback) so that AI can generate accurate responses to human commands.
After the raw data is uploaded, the training software adjusts mathematical “weights” attached to specific items: to letters or words, for example, to build predictable associations, as in the sequence of letters in a word or the sequence of words in phrases and sentences. (Human trainers contribute by labeling the data and scoring the system’s attempts.) The same process is used for other types of data such as images: parts of objects, such as a cup or a car, are weighted to build an AI-usable concept. This requires adjusting millions of weights and a great deal of “practice” to develop the end product. Finally, the product is tested for accuracy and then released for use in different AI models.
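To make the idea of weighted associations concrete, here is a deliberately tiny Python illustration (a toy sketch, not how any commercial system is actually built): it counts how often one word follows another in a scrap of text, then uses those counts as “weights” to predict the next word. Real LLMs adjust billions of such weights automatically over enormous datasets, but the principle of strengthening associations between sequential items is similar.

from collections import defaultdict

# A toy corpus standing in for the massive text datasets described above.
corpus = "the cat sat on the mat . the cat ate the food ."
words = corpus.split()

# "Training": record how often each word is followed by each other word.
# These counts play the role of weights in this simplified sketch.
weights = defaultdict(lambda: defaultdict(int))
for current_word, next_word in zip(words, words[1:]):
    weights[current_word][next_word] += 1

def predict_next(word):
    """Return the most heavily weighted follower of the given word."""
    followers = weights.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

print(predict_next("the"))   # "cat" -- it followed "the" most often
print(predict_next("mat"))   # "." -- the only thing seen after "mat"

A real system works with far longer contexts than a single preceding word, but the same weighted-prediction logic, scaled up enormously, is what the training process produces.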
The training process requires immense energy resources and thousands of human trainers (also known as “annotators”). The work is very tedious. For example, to establish an AI concept of a cup, trainers sort and label many whole and partial cup images representing different varieties and angles (e.g., side, bottom). Likewise, trainers preparing drone images for AI recognition must apply this process across different angles and altitudes to create the algorithms that will respond to user commands and inquiries.
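As an illustration of what this annotation work produces, the sketch below shows hypothetical labeled records (the file names, field names, and values are invented for this example, not taken from any actual training set): each raw image is paired with a human-applied label and viewpoint, and training software later consumes thousands of such records per concept.

from collections import Counter

# Hypothetical annotation records: file names and fields are invented.
labeled_examples = [
    {"image": "img_0001.jpg", "label": "cup",   "view": "side"},
    {"image": "img_0002.jpg", "label": "cup",   "view": "bottom"},
    {"image": "img_0003.jpg", "label": "cup",   "view": "partial, handle only"},
    {"image": "img_0004.jpg", "label": "drone", "view": "from below, low altitude"},
    {"image": "img_0005.jpg", "label": "drone", "view": "side, high altitude"},
]

# A simple check an annotation team might run: how many examples per concept?
counts = Counter(example["label"] for example in labeled_examples)
print(counts)   # Counter({'cup': 3, 'drone': 2})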
Artificial General Intelligence (AGI) & Artificial Super Intelligence (ASI)
In 1999, long before the current AI boom, Ray Kurzweil, the famous AI researcher and popular author, predicted that AI would equal the human brain by 2029. And now each new version of ChatGPT is accompanied by claims of increased similarity to the brain. A number of leading AI companies (e.g., OpenAI, Anthropic, and Google DeepMind) anticipate that, with significant data expansion in LLM systems, a truly brain-like model will eventually emerge, more or less spontaneously. This would be Artificial General Intelligence (AGI), and, according to its advocates, this “autonomous” form of AI will equal or even surpass human cognitive capabilities. There is an expectation among this group that AGI might also have consciousness. Kurzweil’s prediction is now embraced by an enthusiastic community of AI developers.
Silicon Valley’s anticipation of AGI, however, may reflect a limited understanding of the biological human brain. That is understandable, since the human brain is the most complex system known. While AI’s Large Language Models (LLMs) and other systems are impressive, their memory systems and cognitive processes are not like those of the biological brain. The vast neural connections of the brain are not configured to absorb and retain endless data. Rather, the brain makes associations and creates novel concepts based on inherent biological and experiential capabilities. Even very young children acquire, without instruction, language structure, word associations, and concepts like “cup” or “car,” without being presented with massive numbers of examples (types, colors, shapes, different angles).
AI Errors. The difference between the computational nature of AI and the biological processes of the brain is underscored by the types of errors AI produces. Despite extensive training, the mechanical structure of AI generates unique types of errors:
Over the past decade, AI systems have become more powerful and widely used, particularly in tasks like recognizing images. For example, these systems can identify animals, objects or diagnose medical conditions from images. However, they sometimes make mistakes that humans rarely do. For instance... an AI algorithm might confidently label a photo of a dog wearing sunglasses as a completely different animal or fail to recognize a stop sign if it's partially covered by graffiti. As these models become larger and more complex, these kinds of errors become more frequent, revealing a growing gap between how AI and humans perceive the world. https://www.nsf.gov/news/training-ai-see-more-humans
Another common error is the tendency for responses to include “hallucinations”: incorrect, and sometimes nonsensical, information. Examples include citations of nonexistent data or articles, events that have not occurred, and people who do not exist. The problem persists despite refinements in training. AI hallucinations are inevitable because of the statistical structure of AI: some questions do not have predictable answers. And because an honest “I don’t know” response to an inquiry does not rank well, the competing AI corporations avoid that type of answer. Companies depend on “engagement” over “truthfulness” to increase the popularity of their programs.
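A toy sketch of that point (purely illustrative; no real chatbot is built this way): a generator that always returns its highest-scoring answer will confidently fabricate something even when nothing in its table is well supported, while a version allowed to abstain below a confidence threshold would answer “I don’t know.” The scores and citations below are made up for the example; current systems are generally tuned toward the first, always-answer behavior.

# Scores are made-up stand-ins for the statistical support a model has
# for each candidate answer.
well_supported = {"Paris": 0.92, "Lyon": 0.05, "Marseille": 0.03}
poorly_supported = {              # a question with no predictable answer
    "Smith et al., 2019": 0.34,   # plausible-sounding but fabricated citation
    "Jones, 2021": 0.33,
    "Lee & Park, 2020": 0.33,
}

def answer(scores, abstain_below=None):
    """Return the best-scoring answer; optionally abstain when unsure."""
    best, score = max(scores.items(), key=lambda item: item[1])
    if abstain_below is not None and score < abstain_below:
        return "I don't know"
    return best

print(answer(well_supported))                       # "Paris" -- a solid answer
print(answer(poorly_supported))                     # a confident fabrication
print(answer(poorly_supported, abstain_below=0.5))  # "I don't know"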
Even the sophisticated GPT-5, promoted as the precursor to AGI, has produced types of errors that are very distinct from those produced by humans:
Within hours of its [GPT-5] release, critics found all kinds of baffling errors: It failed some simple math questions, couldn’t count reliably and sometimes provided absurd answers to old riddles. Like its predecessors, the AI model still hallucinates (though at a lower rate) and is plagued by questions around its reliability. Although some people have been impressed, few saw it as a quantum leap, and nobody believed it was A.G.I. https://www.nytimes.com/2025/09/03/opinion/ai-gpt5-rethinking.html
Many AI errors are forms of misinformation and therefore damaging. For example, science journals take seriously the letters-to-the-editor responses to individual articles. Journals are now reporting a disturbing number of chatbot-produced letters containing inexcusable errors. If such errors escape notice, they can impede research and even damage reputations. But there are no restrictions on the use of chatbots to produce such letters, or texts of any type for that matter.
It is also doubtful that AI deep learning can produce original concepts and theories like those of Einstein and Darwin, or the kinds of discoveries in mathematics, science, and engineering that require abstract analysis rather than extraction from massive data. Human cognitive features, combined with consciousness and social experience, appear to be necessary for the unique mental processes associated with highly original thinking.
AI Consciousness, now or in the future, is improbable simply because consciousness is inherently biological. However, some AI developers cite certain AI tendencies, such as disobeying commands or making independent decisions, as an indication that genuine AI consciousness is emerging.
Indeed, the sophisticated neural network structure underlying LLMs generates enormous numbers of algorithmic connections at huge speeds. This has endowed the algorithms with an “agency” to self-improve their performance without human control, resulting in AI outputs or decisions that the software programmers cannot account for. This AI independence differs from the deliberately programmed autonomy of true agentic systems such as autonomous vehicles. The emerging, unaccountable independence may seem to indicate a human-like cognitive leap, but it actually reflects the continuously developing complexity of AI. Economist Yanis Varoufakis has compared it to the algorithms behind the financial derivatives that led to the 2008 market crash. In that situation the derivatives became so complex that the developers of the algorithms could not understand the independent moves of the software as the market spun out of control.
So, while some AI developers like Sam Altman actively push the effort to develop AI consciousness, the AI community is not united in that commitment. For example, Mustafa Suleyman, a leading AI developer and currently Microsoft’s AI CEO, considers the notion of AI consciousness absurd and has adamantly objected to current efforts to create a conscious form of AI.
Artificial Super Intelligence (ASI). The possibility of a sentient Artificial General Intelligence (AGI) is promoted by a number of AI developers who essentially equate AI computation with human cognition. The idea is that a significant expansion of LLMs and other AI systems will ultimately approximate the human capacity for analysis, problem-solving, and creativity. But for some developers, even AGI is not enough. Mark Zuckerberg (CEO of Meta), Sam Altman (CEO of OpenAI), Ilya Sutskever (founder of Safe Superintelligence and formerly chief scientist at OpenAI), and others are aiming for a super intelligence: Artificial Super Intelligence (ASI). Where AGI is meant to equal human intelligence, ASI is meant to exceed it. These projects aim for an ASI that can learn autonomously, skipping the training with datasets. Except for that stated goal, ASI is as vaguely defined as AGI.
While the supposed similarities between the AI “brain” and the biological brain are unscientific at best, some experiments are worth watching. One project is an attempt to create a type of “sensory” machine learning, in which the training emulates the way human infants learn. A more questionable effort involves the attempted development of human-AI hybrids. Whether these hybrids can replicate or surpass the human brain’s abilities is doubtful, but we might yet be surprised. Ray Kurzweil would be delighted.
AI Hype. Although complex AI systems can mimic some human-like cognitive functions, AGI is unlikely to emerge. A survey by the Association for the Advancement of Artificial Intelligence indicated that the majority of its distinguished respondents rejected the hype about a prospective human-like AGI or ASI. Most experts who are not involved in the enterprise regard the AGI/ASI hype as a marketing ploy: AI companies and their investors need the promise of genuine AI breakthroughs to attract the expected financial returns. And indeed, the hype is attracting huge investments.
Investment has become disproportionately concentrated: just seven tech companies now account for about a third of the value of the S&P 500. That concentration, if the GenAI “prospect” does not materialize, could disrupt the economy. This is why the hype, particularly about AGI/ASI surpassing human cognitive skills, persists. But so far, the hyper-profits have not materialized.
Regardless of the unlikelihood of a synthetic AI brain, the increasing complexity of LLMs underscores the necessity of controls on many AI models and applications. Inexplicable errors and inappropriate responses are, on the one hand, a temptation to believe that a super-intelligence is emerging. More realistically, they are products of AI’s complex algorithms, and it is that complexity that makes real dangers possible.
References
AI Basics: GenAI, AGI and ASI
AI: The Promise and the Peril (Sen. Bernie Sanders & Geoffrey Hinton, YouTube, 11/19/2025)
Are We Seeing the First Steps Toward AI Superintelligence? (Scientific American, 12/05/2025)
‘It’s going much too fast’: inside story of the race to create the ultimate AI (The Guardian, 12/01/2025)
‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead (New York Times, 05/01/2023)
Will A.I. Trap You in the “Permanent Underclass”? (New Yorker, 10/08/25)
A New Way for Machines to See, Taking Shape in Toronto (New York Times, 11/28/2017)
The A.I. Prompt That Could End the World (New York Times, 10/10/25)
Elon Musk’s Grok Chatbot Has Started Reciting Climate Denial Talking Points (Scientific American, 05/28/25)
Chatbots Play With Your Emotions to Avoid Saying Goodbye (Wired, 10/01/25)
AI 2027 (Prediction)
AI Training
Code Dependent: Living in the Shadow of AI (Madhumita Murgia, 2024)
The Coming Tech Autocracy (Sue Halpern, New York Review of Books, 11/07/24)
Finally, Neural Networks That Actually Work (Wired, 04/21/2015)
Top A.I. Researchers Leave OpenAI, Google and Meta for New Start-Up (New York Times, 09/30/25)
A New Way for Machines to See, Taking Shape in Toronto (New York Times, 11/28/2017)
How A.I. Is Changing the Way the World Builds Computers
Teaching large language models how to absorb new knowledge (MIT News, 11/12/2025)
Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI)
Why We’re Unlikely to Get Artificial General Intelligence Anytime Soon (New York Times, 05/16/2025)
Microsoft AI chief says only biological beings can be conscious (CNBC, 11/03/2025)
The Man Who Invented AGI (Wired, 10/31/2025)
What does the future hold for generative AI? (MIT News, 09/19/25)
Silicon Valley Is Investing in the Wrong A.I. (New York Times, 10/26/2025)
The Singularity Is Nearer: When We Merge with AI (Ray Kurzweil, 2024)
Meta Is Building a Superintelligence Lab. What Is That? (New York Times, 06/06/25)
Artificial Intelligence May Not Be Artificial (TechXplore 09/30/25)
Tech Billionaires Seem to Be Doom Prepping. Should We All Be Worried? (BBC 10/10/2025)
Silicon Valley is cheerleading the prospect of human–AI hybrids — we should be worried (Nature, 08/12/2024)
A.I. Will Fix the World. The Catch? Robots in Your Veins. (New York Times, 06/26/2024)
Could Symbolic AI Unlock Human-Like Intelligence? (Scientific American, 11/29/2025)
‘It’s going much too fast’: the inside story of the race to create the ultimate AI (Guardian, 12/01/2025)
AI 2027 (04/23/2025)
AI Errors
Meet the AI Workers Who Tell Their Friends and Family to Stay Away from AI (The Guardian, 11/22/2025)
AI Hallucinates because It’s Trained to Fake Answers It Doesn’t Know (Science, 10/28/25)
Who Pays When A.I. Is Wrong? (New York Times, 11/12/2025)
The Editor Got a Letter From ‘Dr. B.S.’ So Did a Lot of Other Editors. (New York Times, 11/04/2025)
Federal judges using AI filed court orders with false quotes, fake names (Washington Post, 10/29/2025)
Holes in the Web (Aeon, 10/13/2025)
Don’t blindly trust everything AI tools say, warns Alphabet boss (The Guardian, 11/18/2025)
LLMs use grammar shortcuts that undermine reasoning, creating reliability risks (TechXplore, 11/25/2025)
Taming Silicon Valley (Gary Marcus, 2024)
AI Consciousness
Microsoft AI chief says only biological beings can be conscious (CNBC, 11/03/2025)
LLMs show a “highly unreliable” capacity to describe their own internal processes (Ars Technica, 11/03/2025)
Technofeudalism: What Killed Capitalism (Yanis Varoufakis, 2023)
AI Hype
Elon Musk: AI will be smarter than any human around the end of next year (Ars Technica, 04/09/2024)
Top A.I. Researchers Leave OpenAI, Google and Meta for New Start-Up (New York Times, 09/30/25)
Silicon Valley Is Drifting Out of Touch With the Rest of America (New York Times, 08/19/2025)
Meta Is Building a Superintelligence Lab. What Is That? (New York Times, 06/13/2025)
‘It’s missing something’: AGI, super-intelligence and a race for the future (The Guardian, 08/09/2025)

