The Issue of AI: (Part 2) AI Impact

IMPACT: JOBS & SERVICES

At present, the US job market is floundering, with widespread layoffs and difficulties for those just starting careers. AI is frequently blamed for the downturn, but research on AI’s actual business impact shows that AI capabilities are still limited, so employers still need their human employees. Economists, while generally agreeing that AI will eventually have a significant impact on jobs, currently attribute most job losses to broader market problems.

AI’s current influence on jobs is mostly in the form of adaptations, by which employees learn to use AI services for routine tasks.

According to the College Hiring Outlook Survey, about 86% of employers now offer internal training or online boot camps, yet only 36% say AI-related skills are important for entry-level roles. Most training still focuses on traditional skills rather than those needed for emerging AI jobs. https://phys.org/news/2025-10-ai-hired-skills-employed.html

AI adaptation is evident in jobs such as software programming, particularly since the introduction of chatbots. Programmers can outsource much of the basic or tedious work to an AI assistant and concentrate on more specialized tasks. Some AI systems can even be used by non-programmers (for example, to build websites).

For certain careers, AI is an actual threat. Highly skilled translators, for example, traditionally in demand for professional communications and publishing, are losing work to AI translation tools available in apps and other AI products.

For some fields, such as journalism, AI is both a significant benefit and a threat. AI assistant programs enable journalists to search through and evaluate large amounts of data very quickly, an indispensable help for in-depth reporting, but not a substitute for human reporters. On the downside, AI systems siphon copyrighted material developed by reporters without permission, which has led to expensive lawsuits against AI firms by companies such as the New York Times.

The field of medicine is a major beneficiary of AI assistance, but with many cautions. GenAI tools can rapidly analyze scans (CT, MRI, X-rays), produce analyses, and generate summaries of doctor-patient consultations. Many clinicians today use ChatGPT for quick information, but this efficiency gain comes with the risk of inaccurate, sometimes even fabricated results. Medical professionals can save time using AI, but lose time checking AI output.

Medical AI innovations represent important advances, but in some cases professionals are concerned about the possible ethical issues. For example, a field called neurotechnology combines AI devices with biological processes. Examples include devices such as earbuds that claim to read brain activity and special glasses that track eye movement. While some of these developments are very beneficial, they also raise potential privacy issues.

Another AI medical innovation with great potential, though requiring caution, is AlphaFold, a 3D protein-folding program developed by Google DeepMind that will hasten the development of new medications. This innovation is also notable because the AI program is trained on a select dataset rather than the massive generalized base used for most AI systems. However, this type of innovation, initially beneficial, can also be put to more sinister purposes that worry a number of AI developers.

Yoshua Bengio, the AI pioneer and one of the world’s most cited researchers, has warned repeatedly about serious AI threats. His major anxiety, shared by many experts, is an AI-engineered pathogen that might destroy humanity. Such a pathogen might evade biosecurity controls, a possibility that was recently confirmed:

A 2025 report in the journal Science detailed how researchers generated thousands of AI-engineered versions of 72 toxins that escaped detection. The research team, a group of leading industry scientists and biosecurity experts, designed a patch to fix this problem, found in four different screening methods. But they warn that experts will have to keep searching for future breaches in this safety net. https://www.washingtonpost.com/science/2025/10/02/ai-toxins-biosecurity-risks/

AI and Job Cautions: Workslop. While the merits of AI work assistance are widely recognized, there is an unfortunate common byproduct: content so poor or error-ridden that it is referred to as workslop. According to a 2025 Harvard Business Review survey, 40 percent of business respondents reported receiving workslop, with each instance requiring about two hours to fix, significantly reducing productivity. Such inadequacies in many AI-generated reports and other tasks raise the question of whether AI is really a worthwhile investment for jobs that require analytical skills.

In addition to muddled composition, workslop sometimes includes nonsensical “hallucinations.” Lawyers, for example, discover false evidence in AI-generated reports. Physicians find misinformation in their AI-generated transcriptions. Teachers using plagiarism detectors may get false cheating reports. As safeguards against such problems are weak or nonexistent, time-consuming error checks are crucial. However, studies show that most people using AI in their work fail to check for AI errors and misinformation. As a result, when such problems are detected by others, there may be embarrassment and even legal consequences that cancel the time- and cost-saving advantages of using AI. For example:

Two federal judges in New Jersey and Mississippi admitted this month that their offices used artificial intelligence to draft factually inaccurate court documents that included fake quotes and fictional litigants — drawing a rebuke from the head of the Senate Judiciary Committee. … The use of generative artificial intelligence has become more common in the U.S. judicial system. [The 2 judges in this case] join scores of lawyers and litigants who have been rebuked for using AI to produce legal filings strewn with errors. … The mistakes in both judges’ orders were similar to those caused by AI hallucinations — where generative AI, which produces text by predicting what words follow each other from an analysis of written content, confidently invents facts and false citations — and observers quickly speculated that the errors had come from AI use. https://www.washingtonpost.com/nation/2025/10/29/federal-judges-ai-court-orders/

Caution: Research & Innovations: “resmearch.” In research generally, AI can help organize some types of experiments and analyze and summarize huge amounts of data (e.g., in medicine, astronomy, or physics), but such assistance must be used with care.

Researchers using AI searches for sources relevant to their own projects often surface dubious studies that should be avoided. These typically corporate-funded sources are called “resmearch” or “bullshit” science and are characterized by the omission of important associations, for example between certain toxins and health problems. Resmearch often reflects “single-factor” results, where the factor cited is either not relevant or is only one of multiple factors. Surprisingly, these studies are quite common and are often overlooked by peer reviewers.

The Future for Jobs. So far, AI’s effect on jobs includes both benefits and risks, but the trend requires deeper analysis. As for the future, the indications are mixed, and the message from the AI industry itself is contradictory. On the one hand, AI companies claim that jobs will be preserved as workers acquire AI skills. On the other, some AI developers claim that AI will soon equal or surpass human skills, a change that will eliminate jobs.

Ultimately, replacement is the more likely trend. In a 2025 international poll, 25 percent of employers indicated that “AI could perform all or most tasks in entry-level jobs.” https://www.commondreams.org/news/ai-jobs

But even higher-level jobs are targeted for elimination. Amazon, a leading AI company, announced plans in 2025 to cut as many as 30,000 corporate jobs, including many managers:

Amazon CEO Andy Jassy is undertaking an initiative to reduce what he has described as an excess of bureaucracy, including by reducing the number of managers. … Jassy said in June that the increased use of artificial intelligence tools would likely lead to further job cuts, particularly through automating repetitive and routine tasks. https://www.reuters.com/business/world-at-work/amazon-targets-many-30000-corporate-job-cuts-sources-say-2025-10-27/

The jobs dilemma is developing so quickly that some experts, such as Nate Soares, president of the Machine Intelligence Research Institute, suggest the possibility of a complete replacement of human workers by AI systems in the future. That possibility is amplified by former OpenAI employee Leopold Aschenbrenner, who argues that AI will reach or exceed human capacity as soon as 2027. Aschenbrenner adds that it is “strikingly plausible” that, by then, AI models will be able to replace even specialized workers such as AI researchers/engineers.

Some tech experts are optimistic about AI’s impact. Bill Gates, founder of Microsoft, asserts that AI features such as its massive knowledge base and personalized assistance will prove economically and socially beneficial. But such optimists are in the minority. Employers may welcome the cost savings of a smaller workforce, but economic vitality depends on a robust, financially secure population, and widespread job loss will undermine it. One of the winners of the 2025 Nobel prize in economics, Peter Howitt, predicts that as AI replaces many workers of all types and levels, the economic and social consequences will be very serious indeed. And those realistic prospects are not getting adequate attention.

Senator Bernie Sanders (I-VT), the ranking member of the Senate Health, Education, Labor, and Pensions Committee, issued a committee report in 2025 predicting the loss of almost 100 million U.S. jobs by 2035 across many occupations. This is a clear threat to the young generation just entering the job market.

IMPACT: COGNITIVE SKILLS

Brain Rot. Another very serious concern about AI is its impact on cognitive skills. It seems that as people increasingly rely on AI for assistance, as in the software engineering, medicine, and science examples above, cognitive skills deteriorate. This is not so different from the way muscles weaken when they are not used.

Education is meant to develop skills through literacy, mathematics, and critical thinking across many subjects. This process takes time, and the actual experience of learning, typically in the form of trial and error, is itself a core feature of the process. At present, the decline in the educational skills of American students is particularly alarming.

Numerous studies indicate a serious decline in basic educational skills across nations, but especially in the US, which ranks 28th in the world for math, behind Japan, Canada, the United Kingdom, Germany and nearly every other major industrialized democracy. In math, American 12th graders had the lowest performance since 2005. The results, from the National Assessment of Educational Progress [NAEP], long regarded as the nation’s most reliable, gold-standard exam, showed that about a third of the 12th-graders who were tested [in 2024] did not have basic reading skills. https://www.nytimes.com/2025/09/25/us/reading-math-scores-declines-impact.html

In the US, most of the blame so far has been placed on the distractions of social media and other screen time, but AI’s intrusion on cognitive competence will likely soon register in educational testing as well.

The US educational decline is apparent at the college level as well, and AI is an acknowledged factor. Even top US universities report that students lack the concentration to read an entire book and frequently rely on AI assistance for their assignments. The cognitive downside of this shift is predictable. An MIT study (2025) concluded that students relying on ChatGPT for academic writing tasks experienced weaker memory and analytical capacity.

The experiment used an electroencephalogram to monitor people’s brain activity while they wrote essays, either with no digital assistance, or with the help of an internet search engine, or ChatGPT. [The researcher] found that the more external help participants had, the lower their level of brain connectivity, so those who used ChatGPT to write showed significantly less activity in the brain networks associated with cognitive processing, attention and creativity. In other words, whatever the people using ChatGPT felt was going on inside their brains, the scans showed there wasn’t much happening up there. The study’s participants, who were all enrolled at MIT or nearby universities, were asked, right after they had handed in their work, if they could recall what they had written. “Barely anyone in the ChatGPT group could give a quote,” Nataliya Kosmyna [the researcher] says. “That was concerning, because you just wrote it and you do not remember anything.” https://www.theguardian.com/technology/2025/oct/18/are-we-living-in-a-golden-age-of-stupidity-technology

Dr. Kosmyna was motivated to do the brain scans study because many people using LLMs such as ChatGPT told her about experiencing memory depletion and other sorts of cognitive weakness. She is not alone in her concern that “with every technological advance, we deepen our dependence on digital devices and find it harder to work or remember or think or, frankly, function without them.” This phenomenon is referred to as “brain rot.”

Unlike the mechanical “cognitive” systems of AI, human cognitive skills are biological and require both a conscious mind and mental exercise. In the same way that use of a calculator can reduce math skills, and spellchecks can reduce spelling skills, students and workers using AI applications can effectively sidestep learning. ChatGPT can do information searches, take lecture notes, write papers, solve math problems, even answer exam questions. When students and workers outsource these tasks to AI tools, actual learning and other mental skills are sacrificed.

As public concerns about AI’s cognitive effects increase, it is noteworthy that AI corporations such as Anthropic claim that student use of tools such as ChatGPT and Claude actually helps students, suggesting, for example, AI tools that can assist in the creation of different types of content (essays, summaries, questions).

To reinforce that “assistance” pitch, a number of AI companies (e.g., Microsoft, OpenAI, Anthropic, Google) are now training teachers to use AI resources for lesson plans. The industry claim is that such training will improve the skills of both teachers and students, and also prepare students for the future workplace. Although AI assistance saves teachers time, evidence so far is lacking for actual learning improvement. Nevertheless, the corporate AI educational initiative is encouraged by the present US government in an effort to increase the potential for US global AI dominance. Since many AI leaders have also candidly predicted that the workforce will soon be replaced by AI, it is not clear how these companies envision a future human workforce.

References

AI IMPACT

IMPACT: Data Centers

States push to end secrecy over data center water use (Greenwire/E&E News, 12/09/2025)

Inside the Data Centers That Train A.I. and Drain the Electrical Grid (New Yorker, 10/27/25)

Their Water Taps Ran Dry When Meta Built Next Door (New York Times, 07/14/25)

Data centers consume massive amounts of water – companies rarely tell the public exactly how much (The Conversation, 08/19/2025)

From Mexico to Ireland, Fury Mounts Over a Global A.I. Frenzy (New York Times, 10/20/2025)

What the datacenter boom means for America’s environment – and electricity bills (The Guardian, 10/16/2025)

Imminent risk of a global water crisis, warns the UN World Water Development Report 2023 (UNESCO 03/23/2023)

Global soil moisture in ‘permanent’ decline due to climate change (Carbon Brief 03/27/2025)

Trump Administration Seeks to Speed Data Center Grid Connections and Expand Federal Control of Power System (Inside Climate News, 10/24/2025)

EPA Moves to Prioritize Review of New Chemicals for Data Centers (Inside Climate News, 10/02/2025)

IMPACT: Copyright

The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work (New York Times, 12/27/23)

George RR Martin and John Grisham among group of authors suing OpenAI (The Guardian, 09/20/23)

Record labels claim AI generator Suno illegally ripped their songs from YouTube (The Verge, 09/22/2025)

The fight between AI companies and the websites that hate them (Washington Post, 10/24/2025)

IMPACT: Jobs and Services

Mass Layoffs Are Scary, but Probably Not a Sign of the A.I. Apocalypse (New York Times, 11/07/2025)

AI is transforming how software engineers do their jobs. Just don't call it 'vibe-coding' (TechXplore, 09/29/2025)

AI is taking on live translations. But jobs and meaning are getting lost. (Washington Post, 09/26/2025)

AI Distinguishes Glioblastoma From Look-Alike Cancers During Surgery (Harvard Medical School, 09/29/2025)

AI can design toxic proteins. They’re escaping through biosecurity cracks. (Washington Post, 10/02/2025)

A.I. Sweeps Through Newsrooms, but Is It a Journalist or a Tool? (New York Times, 11/07/2025)

AI could automate up to 26% of tasks in art, design, entertainment and the media (TechXplore, 09/30/2025)

Answering your questions about using AI as a health care guide (Washington Post, 10/23/2025)

UNESCO adopts global standards on ‘wild west’ field of neurotechnology (The Guardian, 11/06/2025)

‘Stethoscopes in the sand’: Why I’m rethinking AI’s role in medicine (Washington Post, 10/07/2025)

AI and Jobs: Workslop.

Simulated Company Shows Most AI Agents Flunk the Job (Carnegie Mellon Univ, 06/21/2025)

AI-Generated “Workslop” Is Destroying Productivity (Harvard Business Review, 09/25/2025)

AI content supercharges confusion and spreads misleading information, critics warn (PBS NewsHour 10/22/2025)

AI and Jobs: Research “resmearch” or “bullshit”

AI Slop Is Spurring Record Requests for Imaginary Journals (Scientific American, 12/08/2025)

We risk a deluge of AI-written 'science' pushing corporate interests—here's what to do about it (TechXplore, 09/08/2025)

6 tips to help journalists avoid overgeneralizing research findings (Journalist's Resource, 10/01/2025)

AI assistant developed for every step of the scientific process (Phys.org, 11/04/2025)

Opposing the 'inevitability' of AI in academia is both possible and necessary, argue researchers (Phys.org, 09/12/2025)

IMPACT: COGNITIVE SKILLS

Cognitive Impact: Brain Rot.

AI-generated lesson plans fall short on inspiring students and promoting critical thinking (The Conversation, 10/17/2025)

Big Tech is paying millions to train teachers on AI, in a push to bring chatbots into classrooms (TechXplore, 10/17/25)

What Declines in Reading and Math Mean for the U.S. Work Force (New York Times, 09/25/2025)

‘These results are sobering’: US high school seniors’ reading and math scores plummet (The Guardian, 09/20/25)

PISA Scores by Country 2025

Researchers show that training on “junk data” can lead to LLM “brain rot” (ARS Technica, 10/23/25)

AI Is an Artificial Fix for American Education (The American Prospect, 10/24/25)

‘Don’t ask what AI can do for us, ask what it is doing to us’: are ChatGPT and co harming human intelligence? (The Guardian, 04/19/25)