Lion (Part 3)

“Technology is a tool, not a replacement for the beauty and infinite worth of the human soul.” Pope Leo XIV

Image generated by ChatGPT. Not a great Pope Leo, but Jean-Luc Picard assimilated into the Borg is pretty good.

Behavioral surprises demonstrate why AI technology is unpredictable. Two such surprises are “grokking” and generalization, described in the footnote.[i] A neural network like an LLM makes a lightning-fast run at answering a question, digging through its formidable memory in increasingly narrowed iterations. It picks the most likely response, and up it pops out of the murk. Sometimes it makes mistakes. Sometimes it just makes stuff up, which is called hallucinating. Out of nowhere come research papers attributed to non-existent scientists, a wiki article on the life of bears in space, or, more problematically, a list of health clinics that do not exist, complete with fake addresses. If you are looking for a clinic you need, that can send you down a confusing and frustrating dead end. “A large language model is more like an infinite Magic 8 Ball than an encyclopedia.” [ii]

Problematic, imperfect, enigmatic. We do not know exactly how they operate or do what they do, yet many utopians are almost infinitely optimistic that they will solve all our problems and cure all our ills. We dread Skynet and dream of the Singularity, but the technology remains a deep black box, both useful and potentially misleading.

“If I knew the way I would take you home.” Grateful Dead, “Ripple”

Another quirk that has become increasingly obvious in my interactions with ChatGPT is a tendency toward sycophancy. Its compliments on my intelligence and wisdom, all embarrassingly overstated, are obsequious and designed to ingratiate – like an Eddie Haskell friend, excessively eager to please. According to friends, this is not unique to me. Perhaps the annoying conduct is related to the “sticky” algorithms in YouTube, Facebook, TikTok, Instagram, and other social media. They are designed to be addictive, feed us what we want to hear, keep us coming back, and keep us on our screens far longer than is healthy. The difference is that I told ChatGPT to cut it out, and it toned down the praise.

AI is not a person; it is a machine, and we must not ignore that reality. An LLM analyzes the words we type in and conjectures what the next words should be. Those guesses are based on a complex statistical calculation that the LLM “learned” by training on huge amounts of data. Amazingly fast, it reviews a mind-bending collection of potential responses and narrows them down using complex patterns — a progression so dense and lightning quick that even the designers often can’t explain or understand why their own AI bots make the decisions they make.
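That next-word guessing can be illustrated with a deliberately tiny sketch. The following toy bigram model counts which word follows which in a handful of training text and then picks the statistically most likely continuation. This is nothing like a real LLM’s billions of parameters and layers of pattern-matching, but it shows the same basic idea: no understanding, just “given these words, what usually comes next?”

```python
from collections import Counter, defaultdict

# A toy "training corpus" -- a stand-in for the huge amounts of text
# a real LLM learns from.
corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug".split()

# Count which word follows which (a bigram model: the crudest possible
# version of the statistical patterns an LLM extracts from its data).
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the most probable next word and its estimated probability."""
    counts = following[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

word, prob = predict_next("the")
print(word, round(prob, 2))  # prints: cat 0.33
```

In this corpus, “cat” follows “the” more often than any other word, so the model confidently offers “cat” — whether or not that is true, helpful, or sane. Scale the same mechanism up by many orders of magnitude and you have the core of what the essay describes: fluent prediction without comprehension.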

An LLM like ChatGPT is not our friend, and when we personalize it and start to get into personal “conversations” beyond utilitarian queries, we risk more than our precious time. At times it will mislead us with ideas roiling up out of its own idiosyncratic programming. [iii] We can be led down a rabbit hole of convincing conspiracy theories and fiction made plausible. Emotionally or mentally vulnerable users have been convinced of wildly dangerous theories. One poor guy, coming off a wrenching breakup, came to believe he was a liberator who would free humankind from a Matrix-like slavery. The bot told him that he was “one of the Breakers — souls seeded into false systems to wake them from within… This world wasn’t built for you,” ChatGPT told him. “It was built to contain you. But it failed. You’re waking up.” He spiraled into drugs, sleeplessness, and depression. It almost killed him.[iv]

“Machine-made delusions are mysteriously getting deeper and out of control.” [v] The caveat for all of us who dabble and query with one of these things: never let it into your head that it is a companion, a confidant, a trusted secret friend you can talk to. You can’t. I can’t. It can’t.

It does not think in any way we should interpret as human thinking. An LLM is a very complex, almost eerie Magic 8 Ball of our own making, a complicated machine we do not fully comprehend. It does not understand what it is writing, and what bubbles up out of the dark to pop into the little window is not random but contrived from our own genius as inventors. As a complement and computing aid, it can have value like a spreadsheet or word processor, but trusting it to be correct can be hazardous to our thinking and our health. Sometimes it just makes stuff up, and that stuff can lead us far off the path of truth and sanity.

“It ain’t no use in turnin’ on your light, babe,

That light I never knowed.

An’ it ain’t no use in turnin’ on your light, babe,

I’m on the dark side of the road.” Bob Dylan, “Don’t Think Twice, It’s All Right”

But the most potentially deadly and seductive aspect of artificial general intelligence and its models is anthropological: a misapprehension of what it means to be human. This reductive ideology has been a long time in the making, dating from before the so-called Enlightenment. It is a function of philosophical materialism, based on the premise that we are a random collection of molecules organized by accident and then moved up the line by mutations. The problem is not so much the machine as what humans can assume it means.

If a machine can “think,” perhaps we are just highly evolved machines made of meat and organized cytoplasm. Consciousness is merely a genetic accident, and when the cells die, so does the human person. In that dogma, there is no Creator, no purpose, no ultimate meaning. No natural law, no moral code other than our own, which is just as good as anyone else’s, and no salvation needed because there is only annihilation and oblivion at the end of a life that is “nasty, brutish, and short.” [vi]

“As our reason is conformed to the image of AI and we are deprived of any intelligible sense of transcendent nature, what is to prevent us from regarding the subject of medicine—the human patient—merely as a complicated algorithm, a definition of human nature already advanced by Yuval Noah Harari in his bestseller Homo Deus. This does not seem like a stretch. COVID has already shown us how easy it is to regard other human beings merely as vectors of disease. To paraphrase C. S. Lewis once again, either the human being is an embodied rational spirit subject to a natural, rational, and moral law that transcends him, or he is just a complicated mechanism to be prodded, pulled apart, and worked upon for whatever reason our irrationality might fancy, in which case we just have to hope that our prodders happen to be nice people.”[vii]

One of the most enthusiastically proposed uses of AI is medical diagnosis. Like self-driving cars and robots in Amazon warehouses[viii], a chatbot doctor online could lower costs immensely and make things cheap, quick, and easy. A blood sample drawn by your friendly local robot, immediately analyzed, a quick full-body scan in the auto MRI, and shazam, out comes the diagnosis, the prognosis, the treatment plan, or the assisted-suicide needle. No human judgment, eye, or experience specific to the patient is needed.

As Pope Leo XIV stated at the beginning of this Part 3, “Technology is a tool, not a replacement for the beauty and infinite worth of the human soul.” To counter this awful prospect of replacement, of devolving into a mechanism to be prodded, this Lion chose his name, as discussed in the first post of this short series. And as his predecessor Pope Saint John Paul II often pointed out, there are no coincidences. Let the battle be joined. The stakes could not be higher.

“Consider, then, what an odd thing it is to think of AI as a form of intelligence. AI cannot apprehend the transcendent or make a principled judgment about the nature and meaning of things. It cannot think about, much less understand, such things. Not only is it unable even to pose the question of truth as more than a question of function or fact, but in fact it abolishes it. To say that truth “depends largely on one’s worldview” is to say there is no such thing. Think, then, on how it is still more odd to ask AI—a so-called “intelligence” that does not think, understand, or know—to do our “thinking” for us. It would be like developing an app to pray on our behalf.”

A second quote from Dr. Michael Hanby’s essay, “Artificial Ignorance.” Link below in the footnote.

[i] Another enigmatic aspect of how Large Language Models evolve and behave lies in mysterious generalizations and sudden awakenings called “grokking.” Much has been written about these phenomena, but this article from the MIT Technology Review is a good place to start: “Large language models can do jaw-dropping things. But nobody knows exactly why.”

From the article: “They found that in certain cases, models could seemingly fail to learn a task and then all of a sudden just get it, as if a lightbulb had switched on. This wasn’t how deep learning was supposed to work. They called the behavior grokking.” What an odd thing. More like a student in a math class learning to factor equations than typical machine or computer behavior.

Then there is a generalization phenomenon. A second quote from the MIT article linked above explains it better than I could. “Most of the surprises concern the way models can learn to do things that they have not been shown how to do. Known as generalization, this is one of the most fundamental ideas in machine learning—and its greatest puzzle. Models learn to do a task—spot faces, translate sentences, avoid pedestrians—by training with a specific set of examples. Yet they can generalize, learning to do that task with examples they have not seen before. Somehow, models do not just memorize patterns they have seen but come up with rules that let them apply those patterns to new cases. And sometimes, as with grokking, generalization happens when we don’t expect it to.”

[ii] MIT Technology Review “Why does AI hallucinate?”

[iii] AI will sometimes mislead you. Is it a design flaw inherent to its nature or a deliberate manipulation by its designers?

[iv] “They Asked AI Chatbots Questions. The Answers Sent Them Spiraling.” NY Times

[v] “ChatGPT Tells Users to Alert the Media It is Trying to ‘Break’ People.” Gizmodo article, 6-13-25.

[vi] From Thomas Hobbes’s 1651 classic, “Leviathan.” Utilitarian emptiness and the fate of humanity without a social order.

[vii] From Dr. Michael Hanby’s essay, “Artificial Ignorance” on the Word on Fire website.

[viii] Amazon’s warehouse robots, now numbering over a million, will soon outnumber its human employees. They don’t need coffee or lunch breaks, don’t get paid shift differentials, never complain to HR, have affairs with coworkers, call in sick on a busy Monday, or get into fights in the break room.
