AI BLITZ

(Photo credits Meta, Google, Microsoft, OpenAI)

It’s everywhere :(

Artificial Intelligence is complicated, controversial and now virtually omnipresent. So, let’s start with a wonderful integration of AI.

A friend of mine serves in an executive role at the MLK Community Health Foundation. The foundation secures private support for MLK Community Hospital in South Los Angeles, raising awareness and resources specific to the hospital’s mission of caring for a medically underserved area.

Recently, my friend informed me the hospital has developed life-saving AI tools and is also using AI to perform work that frees up nurses to be nurses. For example, delivery of medications and linens is no longer handled by nurses; AI-driven robots do those menial tasks. ER doctors are supported by AI while evaluating critically ill patients: AI poses questions to doctors, helping ensure they arrive at the correct diagnosis. MLK Health provides a powerful testament to how AI can save lives and improve efficiency (without taking human jobs).

In the healthcare setting, AI appears to be a great thing, e.g., technology that shortens the time it takes to reach a diagnosis while reducing human error. Of course, patient privacy must be protected, and ethical procedures must be in place to guard against abuse of AI technology. But AI does help save lives.

Travel 1,800 miles east to South Memphis, Tennessee, and another side of AI emerges. While AI in South L.A. saves lives, AI in South Memphis is destroying them. A report on 60 Minutes, corroborated by local news outlets and community leaders, reveals AI’s negative impact.

Elon Musk, responsible for cutting food aid to millions of Africans, which will result in the deaths of hundreds of thousands of people, is at it again in Tennessee.

Musk installed 33 gas turbines, without permits, to power his huge AI data center in South Memphis. Situated in a historically Black neighborhood, the plant steadily exhausts toxins: cancer-causing pollutants that produce a foul odor and a visible blanket of hazy smoke. Residents largely remain indoors, and children are forced to play inside. Thanks to 60 Minutes and community pressure, Tennessee officials forced Musk to obtain permits for 18 of the turbines. Permits or not, people in South Memphis are getting sick because of AI.

AI can make a lifesaving difference in hospitals. (Photo credit The Weekly Opine)

Environmental disaster

Currently, AI is a drag on resources. The giant servers that perform AI calculations need huge amounts of water (or electricity) to stay cool. The Washington Post, in conjunction with University of California-Riverside researchers, did an eye-popping analysis of OpenAI’s ChatGPT.

The analysis found that a single 100-word email generated by ChatGPT requires one bottle of water. In some towns, AI data centers suck up more water than most local businesses. In places where water is at a premium due to drought conditions, electricity is used to cool AI servers instead. What happens when AI hogs electricity? Consumer electric bills go up, as the electric grid struggles to meet outsized AI demand. That same 100-word ChatGPT email uses enough electricity to light 14 LED bulbs for one hour.
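Those per-email figures add up quickly at scale. Here is a rough back-of-the-envelope sketch in Python; the per-email numbers come from the Post/UC-Riverside analysis, while the bottle size (500 mL), the LED bulb wattage (10 W, which is what makes 14 bulb-hours equal 0.14 kWh) and the one-million-emails-per-day volume are illustrative assumptions of mine, not figures from the report.

```python
# Rough scaling of the Post/UC-Riverside per-email figures for ChatGPT.
# Assumed, not from the report: 500 mL per "bottle," 10 W per LED bulb,
# and a hypothetical volume of 1 million 100-word emails per day.

BOTTLE_LITERS = 0.5          # assumed size of one bottle of water
LED_WATTS = 10               # assumed draw of one LED bulb
BULB_HOURS_PER_EMAIL = 14    # from the article: 14 bulbs lit for 1 hour

kwh_per_email = LED_WATTS * BULB_HOURS_PER_EMAIL / 1000  # = 0.14 kWh
emails_per_day = 1_000_000                               # hypothetical

water_per_day_liters = emails_per_day * BOTTLE_LITERS
power_per_day_kwh = emails_per_day * kwh_per_email

print(f"Water:       {water_per_day_liters:,.0f} liters/day")  # 500,000
print(f"Electricity: {power_per_day_kwh:,.0f} kWh/day")        # 140,000
```

Under those assumptions, 100-word emails alone would consume half a million liters of water and 140 megawatt-hours of electricity every day.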

AI data centers divert large amounts of the electricity and water residents need to live their daily lives. Homes near data centers lose value. In Oregon, the town of The Dalles loses 25% of its water to the nearby AI data center. The Washington Post/UC-Riverside analysis shows that Google’s carbon footprint increased 48% and that Google replaced only 18% of the water it consumed for cooling AI servers. Google claims it will eventually “eliminate water consumption completely.” We’ll see.

Some large data centers use millions of gallons of water annually. In L.A., residents are urged to stop using ChatGPT so more water is available to fight wildfires.

Not surprisingly, climate advocates are alarmed by AI’s negative impact on the environment.

Taking the plunge

Recently, Indiana University announced plans to accelerate the understanding and use of generative AI on campus. Faculty leadership at IU’s Kelley School of Business designed what’s called GenAI 101, an online AI course free to all IU students, faculty and staff. IU’s approach appears to be sober and well-conceived.

IU Media School Dean David Tolchinsky said, “The research and email-honing is incredibly powerful and helpful.” Introspectively, he also thinks about “how [to] maintain one’s own messy creative process and authentic voice.” Tolchinsky cautions that AI creates “an atmosphere where it’s easy to let go of what’s difficult but necessary.” The dean also says “humanity must be preserved.” The Media School plans to emphasize ethics and mental health related to AI use.

Indiana University just launched an innovative AI learning tool. (Photo credit The Weekly Opine)

Nancy Kaffer, an opinion writer, believes we are moving into “unproven tech” with “devastating environmental costs and ethical problems.” Robin Zebrowski, a professor of cognitive science at Beloit College, offers that “AI will always be filled with inaccuracies because it has no concept of the truth.”

What’s sorely needed, but absent, is stringent government regulatory oversight. As Zebrowski points out, those with the most to gain – Sundar Pichai (Google), Mark Zuckerberg (Meta), Satya Nadella (Microsoft) and Sam Altman (OpenAI) – are the people the Trump administration relies on for AI advice. Kaffer reminds us that, not coincidentally, in the last few years major players Google, Microsoft and OpenAI disbanded their ethics teams.

Former Google ethicist Timnit Gebru says a chief problem she sees with AI is that systems “are trained on data sets generated by people with [more] access to technology.” Gebru says this creates “dataset bias,” whereby wealthy folks have an advantage.

Kaffer believes we are creating a generation unable to think critically, having relied on AI to do their thinking for them. The fear of a national brain drain is real as humans think less for themselves, opting to hand the keys to AI.

A reality check

Jessica Calarco is a sociology professor at the University of Wisconsin-Madison. Her approach to teaching her freshman students about generative AI is laden with common-sense practicality. Calarco dissuades students from using AI, explaining why in a masterpiece graphic (see below) created using PowerPoint. Here are some of the pitfalls of generative AI that Calarco reviews:

·        Chatbots’ work is mediocre and sometimes plain wrong. It isn’t vetted for accuracy and can be full of factual errors and messages that reinforce stereotypes (futurism.com).

·        Calarco says generative AI-produced writing “tends to lack creativity and critical thinking.”

·        Weakened critical thinking is one step on a path that can lead to mental health issues, such as “AI psychosis,” a condition resulting from overuse of chatbots that exacerbates delusional thinking (psychologytoday.com).

·        Generative AI chatbots use copyrighted material pirated from writers, artists and creators, without permission or even attribution (theatlantic.com).

·        Use of AI threatens to exacerbate climate change (npr.org).

Calarco does not employ AI-detection software because it is “prone to errors” (citi.news.niu.edu). Such software can lead to surveillance and punishment that disproportionately affect students from racially marginalized groups (commonsensemedia.org).

Calarco matter-of-factly states that students using generative AI may “produce work that falls short and may negatively impact [their] grade.”

This makes sense. (Graphic credit Jessica Calarco)

Net net

Back to healthcare: automation, in the form of AI robots, is being used in clinical trials in Mexico City. Robots are replacing humans in the laborious in-vitro fertilization process, and they work far faster. Twenty human babies have been born so far. Another example of the astonishing good done by AI.

Favorable AI outcomes in healthcare are encouraging. There are positives at schools and businesses, where AI enhances administrative efficiency. But there are glaring negatives that should not be sanitized. Our deregulatory atmosphere opens the door for greedy, unsavory characters to abuse AI. We’re creating a generation of humans unable to think for themselves. Some will (or already do) live in isolation, dependent on social media and chatbots for daily interaction. Already, much of society feasts on a diet of instant gratification. Why exhibit curiosity when easily available, chatbot-induced social media posts reinforce your belief system?

Just yesterday I read about a Dutch actor who created an AI “actor” named Tilly Norwood. Norwood looks and sounds very much like a human. The synthetic Norwood was created by computers programmed using content from unsuspecting real-life actors. The BBC reports Tilly Norwood is now being courted by Hollywood talent agents. Frankly, it’s scary stuff, especially considering a Microsoft survey found that 73% of adults say it’s difficult to distinguish AI-generated images from real ones.

Photo credit PARTICLE6

The AI genie is out of the bottle. I have no illusions that AI is going away. But I do wonder: how will safety net programs like Social Security be funded if AI causes widespread job losses?

In May of this year, Scientific American surveyed a group of AI researchers. Alarmingly, the researchers said a plausible outcome of AI development is human extinction. They didn’t say it was a sure thing. They didn’t say it was probable. But they did say it was plausible, which Webster’s defines as “appearing worthy of belief.”

There is disagreement among other scientists; some say humans are too adaptable to allow AI-caused extinction. However, in 2023 the BBC reported that the heads of OpenAI and Google DeepMind, along with other experts, concluded that AI could lead to the extinction of humanity.

It’s probably a good idea to pump the brakes on AI.


© 2025 Douglas Freeland / The Weekly Opine. All rights reserved.

Douglas Freeland