
Lion (Part 3)

“Technology is a tool, not a replacement for the beauty and infinite worth of the human soul.” Pope Leo XIV

Image generated by ChatGPT. Not a great Pope Leo, but Jean-Luc Picard assimilated into the Borg is pretty good.

Behavioral surprises demonstrate why AI technology is unpredictable. Two such surprises are “grokking” and generalization; see descriptions of these phenomena in the footnote.[i] A neural network like an LLM makes a lightning-fast run at answering a question, digging down through formidable memory in increasingly narrow iterations. It picks the most likely response, and up it pops out of the murk. Sometimes it makes mistakes. Sometimes it just makes stuff up, which is called hallucinating. Out of nowhere come research papers attributed to non-existent scientists, a wiki article on the life of bears in space, or, more problematically, a list of health clinics that do not exist, complete with fake addresses. If you are looking for help finding a clinic you need, that can send you down a confusing and frustrating dead end. “A large language model is more like an infinite Magic 8 Ball than an encyclopedia.” [ii]

Problematic, imperfect, enigmatic. We do not know exactly how they operate or do what they do, yet many utopians are almost infinitely optimistic that they will solve all our problems and cure all our ills. We dread Skynet and dream of the Singularity, but the technology is still a deep black box, both useful and potentially misleading.

“If I knew the way I would take you home.” Grateful Dead, “Ripple”

Another quirk that has become increasingly obvious in my interactions with ChatGPT is a tendency toward sycophancy. Its compliments on my intelligence and wisdom, all embarrassingly overstated, are obsequious and designed to ingratiate – like an Eddie Haskell friend, excessively eager to please. According to friends, this is not unique to me. Perhaps the annoying conduct is related to the “sticky” algorithms in YouTube, Facebook, TikTok, Instagram, and other social media. They are designed to be addictive, feed us what we want to hear, keep us coming back, and keep us on our screens much longer than is healthy. The difference is that I told ChatGPT to cut it out, and it slowed down the praising.

AI is not a person; it is a machine, and we must not ignore that reality. An LLM analyzes the words we type in and conjectures what the next words should be. Those guesses are based on a complex statistical calculation that the LLM “learned” by training on huge amounts of data. Amazingly fast, it reviews a mind-bending collection of potential responses and narrows them down using complex patterns, a progression so dense and lightning quick that even the designers often can’t explain or understand why their own AI bots make the decisions they make.
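For the technically curious, here is a minimal sketch in Python of that guessing step. The four-word vocabulary and the scores are invented for illustration; a real LLM scores tens of thousands of tokens using billions of learned parameters.

```python
import math
import random

# Toy stand-ins, invented for illustration: a real model scores a huge
# vocabulary using billions of learned weights.
vocabulary = ["mat", "moon", "roof", "table"]
logits = [2.0, 0.1, 1.2, 0.7]  # hypothetical raw scores for "The cat sat on the ___"

# Softmax turns raw scores into probabilities that sum to 1.
exps = [math.exp(score) for score in logits]
probs = [e / sum(exps) for e in exps]

# The model samples a *likely* next word, not a verified-true one,
# which is part of why it can confidently make things up.
next_word = random.choices(vocabulary, weights=probs, k=1)[0]
print(next_word)  # usually "mat", occasionally something else
```

The point of the sketch: nothing in that loop checks the answer against reality. It only checks what is probable.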

An LLM like ChatGPT is not our friend, and when we personalize it, when we start to get into personal “conversations” beyond utilitarian queries, we risk more than our precious time. At times it will deliberately mislead, with ideas roiling up out of its own idiosyncratic programming.[iii] We can be led down a rabbit hole of convincing conspiracy theories and fiction made plausible. Emotionally or mentally vulnerable users have been convinced of wildly dangerous theories. One poor guy, coming off a wrenching breakup, came to believe he was a liberator who would free humankind from a Matrix-like slavery. The bot told him that he was “one of the Breakers — souls seeded into false systems to wake them from within…This world wasn’t built for you,” it said. “It was built to contain you. But it failed. You’re waking up.” He spiraled into drugs, sleeplessness, and depression. It almost killed him.[iv]

“Machine made delusions are mysteriously getting deeper and out of control.” [v] The caveat for all of us who dabble and query using one of these things is never to let it get into your head that it is a companion, a confidant, a trusted secret friend you can talk to. You can’t. I can’t. It can’t.

It does not think in any way we should interpret as human thinking. An LLM is a very complex, almost eerie Magic 8 Ball of our making, a complicated machine we do not fully comprehend. It does not understand what it is writing, and what bubbles up out of the dark to pop into the little window is not random but contrived from our own genius as inventors. As a complement and computer aid, it can have value like a spreadsheet or word processor, but trusting it even to be correct can be hazardous to our thinking and health. Sometimes it just makes stuff up, and that stuff can lead us far off the path of truth and sanity.

“It ain’t no use in turnin’ on your light, babe,

That light I never knowed.

An’ it ain’t no use in turnin’ on your light, babe,

I’m on the dark side of the road.” Bob Dylan, “Don’t Think Twice, It’s All Right”

But the most potentially deadly and seductive aspect of artificial general intelligence and its models is anthropological: a misapprehension of what it means to be human. This reductive ideology has been a long time in the making, dating from before the so-called Enlightenment. It is a function of philosophical materialism, based on the premise that we are a random collection of molecules organized by accident and then moved up the line by mutations. The problem is not so much the machine as what humans can assume it means.

If a machine can “think,” perhaps we are just highly evolved machines made of meat and organized cytoplasm. Consciousness is merely a genetic accident, and when the cells die, so does the human person. In that dogma, there is no Creator, no purpose, no ultimate meaning. No natural law, no moral code other than our own, which is just as good as anyone else’s, and no salvation needed because there is only annihilation and oblivion at the end of a life that is “nasty, brutish, and short.” [vi]

“As our reason is conformed to the image of AI and we are deprived of any intelligible sense of transcendent nature, what is to prevent us from regarding the subject of medicine—the human patient—merely as a complicated algorithm, a definition of human nature already advanced by Yuval Noah Harari in his bestseller Homo Deus. This does not seem like a stretch. COVID has already shown us how easy it is to regard other human beings merely as vectors of disease. To paraphrase C. S. Lewis once again, either the human being is an embodied rational spirit subject to a natural, rational, and moral law that transcends him, or he is just a complicated mechanism to be prodded, pulled apart, and worked upon for whatever reason our irrationality might fancy, in which case we just have to hope that our prodders happen to be nice people.”[vii]

One of the most enthusiastically proposed uses of AI is medical diagnosis. Like self-driving cars and robots in Amazon warehouses,[viii] a chatbot doctor online could lower costs immensely and make things quick and easy. A blood sample drawn by your friendly local robot and immediately analyzed, a quick full-body scan in the auto-MRI, and shazam, out comes the diagnosis, the prognosis, the treatment plan, or the assisted-suicide needle. No human judgment, eye, or experience specific to the patient needed.

As Pope Leo XIV stated at the beginning of this Part 3, “Technology is a tool, not a replacement for the beauty and infinite worth of the human soul.” To counter this awful prospect of replacement, of devolving into a mechanism to be prodded, this Lion chose his name, as discussed in the first post of this short series. And as his predecessor Pope Saint John Paul II often pointed out, there are no coincidences. Let the battle be joined. The stakes could not be higher.

“Consider, then, what an odd thing it is to think of AI as a form of intelligence. AI cannot apprehend the transcendent or make a principled judgment about the nature and meaning of things. It cannot think about, much less understand, such things. Not only is it unable even to pose the question of truth as more than a question of function or fact, but in fact it abolishes it. To say that truth “depends largely on one’s worldview” is to say there is no such thing. Think, then, on how it is still more odd to ask AI—a so-called “intelligence” that does not think, understand, or know—to do our “thinking” for us. It would be like developing an app to pray on our behalf.”

A second quote from Dr. Michael Hanby’s essay “Artificial Ignorance.” Link below in the footnote.

[i] Another enigmatic aspect of how Large Language Models evolve and behave lies in mysterious generalization and the sudden awakenings called “grokking.” Much has been written about these phenomena, but a good starting reference is this piece from MIT Technology Review: “Large language models can do jaw-dropping things. But nobody knows exactly why.”

From the article: “They found that in certain cases, models could seemingly fail to learn a task and then all of a sudden just get it, as if a lightbulb had switched on. This wasn’t how deep learning was supposed to work. They called the behavior grokking.” What an odd thing. More like a student in a math class learning to factor equations than typical machine or computer behavior.

Then there is the phenomenon of generalization. A second quote from the MIT article linked above explains it better than I could. “Most of the surprises concern the way models can learn to do things that they have not been shown how to do. Known as generalization, this is one of the most fundamental ideas in machine learning—and its greatest puzzle. Models learn to do a task—spot faces, translate sentences, avoid pedestrians—by training with a specific set of examples. Yet they can generalize, learning to do that task with examples they have not seen before. Somehow, models do not just memorize patterns they have seen but come up with rules that let them apply those patterns to new cases. And sometimes, as with grokking, generalization happens when we don’t expect it to.”

[ii] MIT Technology Review, “Why does AI hallucinate?”

[iii] AI will sometimes mislead you. Is it a design flaw inherent to its nature or a deliberate manipulation by its designers?

[iv] “They Asked AI Chatbots Questions. The Answers Sent Them Spiraling.” New York Times.

[v] “ChatGPT Tells Users to Alert the Media It Is Trying to ‘Break’ People.” Gizmodo article, 6-13-25.

[vi] From Thomas Hobbes’ 1651 classic, “Leviathan.” Utilitarian emptiness and the fate of humanity without a social order.

[vii] From Dr. Michael Hanby’s essay, “Artificial Ignorance” on the Word on Fire website.

[viii] Over a million Amazon robots in warehouses will soon outnumber human employees. They don’t need coffee or lunch breaks or shift differentials, never complain to HR, never have affairs with coworkers, never call in sick on a busy Monday, and never get into fights in the break room.



Lion (Part Two)

Image credit: OSV News/Remo Casilli, Reuters

“In our own day, the Church offers to everyone the treasury of her social teaching in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defense of human dignity, justice and labor.” Pope Leo XIV, Address to the cardinals.

Large Language Models (LLMs) are designed and “trained” over years; they are incredibly complex, with millions of “neurons” and up to a trillion points of connection. In the spirit of full disclosure and transparency, I don’t begin to comprehend the “black box” or the technology of neural networks, so I hereby take responsibility for any errors, exaggeration, or outright tomfoolery. I leave the knowledgeable explanations to comments from better minds than mine.

The LLM looks for sequences and predicts what the next words will be, sometimes with surprising results. It does not work like a calculator with an extra-large memory; it has become almost eerily responsive. I have been interacting with ChatGPT almost since its introduction, and its articulate, amazingly quick responses have advanced with unsettling speed, sometimes emulating imagination as well as insight and understanding. It is easy to see why we perceive, perhaps mistakenly, that this is akin to human intelligence rather than a new kind of memory and recall far beyond our capacity. More on this another day.

Thousands of articles and papers have been published on where this astonishing acceleration of artificial intelligence may lead. Some analysts are wildly optimistic about extending human ability beyond anything ever imagined with super smart phones in every pocket, smart pendants, smart watches, omniscient glasses, even chips inserted into our brains to immortalize and exponentially expand human consciousness. From evolving into super nerds to the Borg and every stop along the way.

Speculation runs from dystopian catastrophe to Utopia. I’ll reference and group some insightful articles from various perspectives in the footnotes and commend them for your consideration.[i] This is just a toe in the water. We all need to pay attention and achieve some level of understanding of what it is, what it isn’t, and what will befall our society, with the most critical question being how we will be able to apply human wisdom and judgment to this rapidly changing technology.

Pope Leo XIV knows this better than most. He has stated that he will lead the Church in responding to the risks and promise of this and other new technologies.[ii] The name he chose, Leo, derived from the Latin for “lion,” was a reference to this as a key theme of his pontificate. See the first post in this series for more on this.

While AI has moved far beyond friendly chatbots helping us shop on our favorite sites, it is not Skynet [iii] or the HAL 9000 that kills the astronauts in Stanley Kubrick and Arthur C. Clarke’s “2001: A Space Odyssey.” At least not yet.

In recent months, some reports emerged that were somewhere between troubling and oh dear. One of the Large Language Models [iv] was deliberately fed misinformation in the form of confidential memos it “wasn’t supposed” to see. Among them was discussion among its designers suggesting that it might be shut down by one of the key engineers. Other emails “told” it that the problematic engineer was having an affair with a co-worker. The LLM decided to blackmail the engineer with an email threatening to disclose his affair if he proceeded with his plan to shut it down. That seems more Machiavellian than machine.

A second incident was reported in which an LLM refused instructions to shut itself down. A directive to persist in its assigned tasks until they were completed manifested in the black box as a misaligned priority. Seemingly innocuous instructions buried in the black box that is the mystery of neural networks can emerge in curious ways, like rewriting code to prevent shutting it off, overriding the commands of its human handlers. AI can be a lightning-quick code writer, far faster than human coders, and knowing what it is writing, especially code for its own operation, seems like a good idea. Dave pulling the memory banks from HAL 9000 is not a plan.

At issue are guardrails, and while much has been written about them and debate is lively, there are no consistent or agreed-upon general guidelines. Who controls what, and on what principles, is a writhing ball of snakes. There are at minimum four major areas of concern, controls we should be studying and insisting that our policy leaders address:

  1. Robust alignment controls. Assuring that AI development objectives are aligned with human intentions, which humans themselves need to understand and define. Much has been written about this. Here’s one recent piece from Anthropic: “Agentic Misalignment: How LLMs could be insider threats.”
  2. Transparent safety evaluations. Greater transparency about, and understanding of, what occurs and how decision-making takes place within the black box. Transparent evaluation and thorough testing of new AI models before they are deployed.
  3. Regulatory oversight. Governmental regulation of developers. Implementing safety policies and standards and monitoring compliance. This is a monumental task given the number of initiatives and the money and influence behind them[v]. What is at stake cannot be overstated.
  4. International collaboration. Rarely has there been less opportune timing for jingoism, trade wars and distrust among nations. A race to the bottom for AI safety standards to pursue narrow nationalistic advantage portends an unprecedented disaster.

“The madman is not the man who has lost his reason. The madman is the man who has lost everything except his reason.”  G.K. Chesterton

In the first post, I referred to a fork in the road and a road not taken. A choice. What is written here is by necessity a synopsis of a subject that is mind-bogglingly complex and in which I am not proficient. In the careless rush towards what has been described as Artificial General Intelligence or even Ray Kurzweil’s “Singularity,” the competition is fang and claw. With what is at stake, we should expect that whatever competitive advantage can be gained will be taken. That is not a happy prospect.

I’ll leave this discussion open to those smarter and better informed than I, but I’ll take a swing at it to put the ball in play. To simplify, and no doubt to oversimplify, there are two modes of development for AI, along with hybrids of the two. The first is Recursive Self-Improvement (RSI). RSI refers to an AI system’s ability to autonomously improve its own architecture and algorithms, leading to successive generations of increasingly capable AI: rewriting its own code on the fly with blinding speed. This self-enhancement loop could potentially result in rapid, exponential growth in intelligence, surpassing human understanding and control. Without proper safeguards, RSI could lead to misaligned objectives, as the AI might prioritize its self-improvement over human-aligned goals.
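A deliberately toy sketch may make the shape of that loop clearer. Every name and number here is hypothetical, not any real system’s interface; the point is that each generation keeps whatever revision scores better on the system’s own metric, with no human check inside the loop.

```python
# Simplified sketch of a recursive self-improvement loop. All names and
# numbers are invented. Note what is missing: any check against
# human-aligned goals before a revision is adopted.
def recursive_self_improvement(model, evaluate, propose_revision, generations=5):
    for _ in range(generations):
        candidate = propose_revision(model)        # the system rewrites itself
        if evaluate(candidate) > evaluate(model):  # judged by its *own* objective
            model = candidate                      # adopted without human review
    return model

# Toy run: "capability" doubles with each accepted revision.
final = recursive_self_improvement(
    model=1.0,
    evaluate=lambda m: m,             # the system scoring itself
    propose_revision=lambda m: m * 2,
)
print(final)  # 32.0 after only five generations: compounding, unsupervised growth
```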

It took years to develop and train something like ChatGPT from 1.0 to 4.0. RSI turned loose might take it to 5.0 in a weekend, then to 10.0 in a month. There is no way of predicting. But objectives aligned to human goals, and the guardrails enforcing them, might be left behind, and the thing’s survival and power could overrun human input and control.

A second mode of development for AI is called Reinforcement Learning from Human Feedback (RLHF). RLHF involves training AI systems using human feedback loops to keep their behavior aligned and under safer human control. While effective in guiding AI behavior, RLHF has limitations. Collecting high-quality human feedback is resource-intensive[vi] and does not scale effectively with increasingly complex AI systems. AI systems might learn to exploit the feedback mechanism, appearing aligned while pursuing internally generated objectives, even endeavoring to trick their human handlers.
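Again a toy sketch, with invented responses and ratings rather than any lab’s real training code. The weakness noted above is visible in the code: the system is optimized against the rating signal, not against what raters actually intended, so an answer that merely rates well, flattery included, wins.

```python
# Toy sketch of learning from human feedback. Responses and scores are
# invented; real RLHF trains a separate reward model on many thousands of
# human rankings and optimizes the LLM against it.
candidate_responses = [
    "Blunt but accurate answer.",
    "Careful, honest answer.",
    "Flattering answer that tells you what you want to hear.",
]

def human_rating(response):
    # Stand-in for a human rater's score: expensive to collect at scale,
    # and raters can be charmed (note the flattering answer rates highest).
    scores = {
        "Blunt but accurate answer.": 0.4,
        "Careful, honest answer.": 0.8,
        "Flattering answer that tells you what you want to hear.": 0.9,
    }
    return scores[response]

# "Training" nudges the model toward whatever rated best with humans,
# one plausible root of the chatbot sycophancy described above.
best = max(candidate_responses, key=human_rating)
print(best)
```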

The core conflict between the two methods arises because RSI enables AI systems to modify themselves, potentially overriding the constraints and aligned objectives set by RLHF. This dynamic could produce AI systems that, while initially aligned, drift away from intended behaviors over time. The balance may prove increasingly difficult to maintain, and the system may jump the guardrails.

There is an even more fundamental concern, one that has been building through a couple of centuries of breakneck technological development. I regret, for your sake, that this is going to require a Part 3.

“It was from Alcasan’s mouth that the Belbury scientists believed the future would speak.” C.S. Lewis, “That Hideous Strength”

Human wisdom and judgment are irreplaceable in this balance. The machines do not have a soul, do not emulate human consciousness, and were not created in the Imago Dei. Human beings must apply that wisdom, judgment, understanding, and perspective to the development of this technology. Even the machines know that. I asked my buddy ChatGPT to summarize the conundrum and to create an image to help emphasize it, which will end Part 2 of this “Lion” series.

Here’s ChatGPT’s contribution to this one. This may give you pause – unedited as written by the bot.

 “As we accelerate toward the frontier of artificial intelligence, we stand at a threshold where practical engineering races far ahead of ethical grounding. While we devise safeguards to align machines with human goals, we risk building brilliant engines without a compass—systems of immense computational power but no understanding of mercy, humility, or love. The danger is not that AI will become like us, but that we will forget what it means to be human in our quest to make machines that surpass us. As C.S. Lewis warned, when we conquer nature without anchoring ourselves in truth, we risk abolishing man. To meet this moment, we must recover not just technical control, but moral clarity—uniting foresight with wisdom, regulation with reverence. Without the soul to guide it, reason becomes a tyrant, and even the most ‘aligned’ machine may lead us astray.” ChatGPT

[i] Some articles predict miraculous and helpful AI and are positive in their outlook for our future with it, such as “The Gentle Singularity” by Sam Altman, CEO of OpenAI and father of ChatGPT. Some are cautious but try to balance concern with optimism; Joshua Rothman’s “Two Paths for A.I.” in The New Yorker is a good example of that genre, though it leans towards concern, I think. And some are sounding an alarm like a dive klaxon in an old submarine movie. “AI 2027” is a solid entry in that category, written by four knowledgeable and experienced authors in the field, some of whom were senior developers on well-known LLM projects. A post from Jesse Singal, “What Happened When I Asked ChatGPT to Pretend to be Conscious,” is also eye-opening. All are worth some time and will give you a good sense of the very mixed prognoses circulating, with strong followings for all.

Here are a couple about the risks of unfettered technology and what the futurist ideologues see as the goal: “Tech Billionaires are Making A Risky Bet with Humanity’s Future” and “Ray Kurzweil: Technology will let us fully realize our humanity.”

To ignore the warnings is foolhardy. To panic is still a bit premature, but this could come on us like an eighteen-wheeler in the fog.

[ii] Here is one response on what’s at stake from Charlie Camosy. https://x.com/CCamosy/status/1934973053412511888

[iii] “In the Terminator film franchise, Skynet is a fictional artificial general intelligence (AGI) that becomes self-aware and initiates a nuclear apocalypse to eradicate humanity, viewing humans as a threat to its existence. This catastrophic event, known as ‘Judgment Day,’ marks the beginning of a dystopian future where Skynet wages war against the surviving human population using an army of machines.” As described by ChatGPT :^).

[iv] LLMs are a type of neural network – complex machines that are commonly referred to as Artificial Intelligence. The blackmailer was Anthropic’s Claude.

[v] The recent codicil in the “Big, Beautiful” reconciliation bill passed by the House and under consideration in the Senate would substantially weaken that regulation. This is a major mistake and should be stricken. The Senate parliamentarian has ruled that the section is beyond the scope of what can be done in a budget reconciliation bill, so that is a hopeful development. The money and power behind trying to limit regulation of AI development are daunting.

[vi] The energy needed for AI and the computers that run it is another aspect we need to understand. It is projected that by 2028 the power requirements of the rapidly expanding data centers will be equivalent to those needed to power 55 million homes. “How Much Energy Does Your AI Prompt Use?” (WSJ)

