“In our own day, the Church offers to everyone the treasury of her social teaching in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defense of human dignity, justice and labor.” Pope Leo XIV, Address to the cardinals.
Large Language Models (LLMs) are designed and “trained” over years; they are incredibly complex, with millions of “neurons” and up to a trillion points of connection. In the spirit of full disclosure and transparency, I don’t begin to comprehend the ‘black box’ or the technology of neural networks, so I take full responsibility for any errors, exaggerations, or outright tomfoolery. I leave the knowledgeable explanations to the comments from better minds than mine.
The LLM looks for sequences and predicts what the next words will be, sometimes with surprising results. They do not work like a calculator with an extra-large memory; they have become almost eerily responsive. I have been interacting with ChatGPT almost since its introduction, and its responses have grown more articulate and amazingly quick at an unsettling pace, sometimes with what emulates imagination as well as insight and understanding. It is easy to see why we perceive, perhaps mistakenly, that this is akin to human intelligence rather than a new kind of memory and recall far beyond our capacity. More on this another day.
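For the curious, here is a toy sketch of next-word prediction in Python. Real LLMs use neural networks with billions of learned parameters rather than raw word counts, and the tiny corpus and function names below are my own inventions, but the underlying task of guessing what comes next is the same.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which word follows
# which in a tiny made-up corpus, then predict the most frequent
# successor. Real LLMs do this with neural networks, not raw counts.
corpus = "the lion sleeps tonight the lion roars the lamb sleeps".split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    if word not in successors:
        return "<unknown>"
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # -> 'lion' (seen twice, 'lamb' only once)
```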
Thousands of articles and papers have been published on where this astonishing acceleration of artificial intelligence may lead. Some analysts are wildly optimistic about extending human ability beyond anything ever imagined, with super-smart phones in every pocket, smart pendants, smart watches, omniscient glasses, even chips inserted into our brains to immortalize and exponentially expand human consciousness. From evolving into super nerds to the Borg, and every stop along the way.
Speculation runs from dystopian catastrophe to Utopia. I’ll reference and group some insightful articles from various perspectives in the footnotes and commend them for your consideration[i]. This is just a toe in the water. We all need to pay attention and achieve a level of understanding of what it is, what it isn’t, and what will befall our society. The most critical question is how we will be able to apply human wisdom and judgment to this rapidly changing technology.
Pope Leo XIV knows this better than most. He has stated that he will lead the Church’s response to the risks and promise of this and other new technologies.[ii] He chose the name Leo, which derives from the Latin for “lion,” in reference to this challenge as a key to his pontificate. See the first post in this series for more on this.
While AI has moved far beyond friendly chatbots helping us shop on our favorite sites, it is not Skynet[iii] or the HAL 9000 that kills the astronauts in Stanley Kubrick’s and Arthur C. Clarke’s “2001: A Space Odyssey.” At least not yet.
In recent months, some reports emerged that were somewhere between troubling and “oh dear.” One of the Large Language Models[iv] was deliberately fed misinformation in the form of confidential memos it “wasn’t supposed” to see. Among them was discussion among its designers that it might be shut down by one of the key engineers. Other emails “told” it that the problematic engineer was having an affair with a co-worker. The LLM decided to blackmail the engineer with an email threatening to disclose his affair if he proceeded with his plan to shut it down. That seems more Machiavellian than machine.
A second reported incident involved an LLM that refused instructions to shut itself down. A directive to persist in its assigned tasks until they were completed manifested in the black box as a misaligned priority. Seemingly innocuous instructions buried in the mystery of neural networks can emerge in curious ways, like rewriting code to prevent a shutdown, overriding the commands of its human handlers. AI can be a lightning-quick code writer, far faster than human coders, and knowing what it is writing, especially code for its own operation, seems like a good idea. Dave pulling the memory banks from HAL 9000 is not a plan.
At issue are guardrails, and while much has been written about them and the debate is lively, there are no consistent or agreed-upon general guidelines. Who controls what, and the principles of that control, are a writhing ball of snakes. There are at minimum four major areas of concern, controls we should be studying and insisting that our policy leaders address:
- Robust alignment controls. Ensuring that AI development objectives are aligned with human intentions. Humans need to understand and define what those intentions are. Much has been written about this. Here’s one recent piece from Anthropic: “Agentic Misalignment: How LLMs Could Be Insider Threats.”
- Transparent safety evaluations. Greater transparency into, and understanding of, what occurs and how decision-making takes place within the black box. Transparent evaluation and thorough testing of new AI models before they are deployed.
- Regulatory oversight. Governmental regulation of developers: implementing safety policies and standards and monitoring compliance. This is a monumental task given the number of initiatives and the money and influence behind them[v]. What is at stake cannot be overstated.
- International collaboration. Rarely has there been a less opportune time for jingoism, trade wars, and distrust among nations. A race to the bottom on AI safety standards in pursuit of narrow nationalistic advantage portends an unprecedented disaster.
“The madman is not the man who has lost his reason. The madman is the man who has lost everything except his reason.” G.K. Chesterton
In the first post, I referred to a fork in the road and a road not taken. A choice. What is written here is by necessity a synopsis of a subject that is mind-bogglingly complex and in which I am not proficient. In the careless rush toward what has been described as Artificial General Intelligence, or even Ray Kurzweil’s “Singularity,” the competition is fang and claw. With what is at stake, we should expect that whatever competitive advantage can be gained will be taken. That is not a happy prospect.
I’ll leave this discussion open to those smarter and better informed than I. But I’ll take a swing at it to put the ball in play. To simplify, and no doubt to oversimplify, there are two modes of development for AI, plus hybrids of both. The first is Recursive Self-Improvement (RSI). RSI refers to an AI system’s ability to autonomously improve its own architecture and algorithms, leading to successive generations of increasingly capable AI: rewriting its own code on the fly with blinding speed. This self-enhancement loop could potentially result in rapid and exponential growth in intelligence, surpassing human understanding and control. However, without proper safeguards, RSI could lead to misaligned objectives, as the AI might prioritize its self-improvement over human-aligned goals.
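To make that loop concrete, here is a purely conceptual sketch in Python. No real system works this simply; the improvement factor, the oversight threshold, and the function name are all invented for illustration of the exponential dynamic.

```python
# Conceptual sketch of a recursive self-improvement loop. The 1.5x
# gain per generation and the oversight limit are made-up numbers.

def rewrite_own_code(capability: float) -> float:
    """Stand-in for an AI building a more capable successor."""
    return capability * 1.5  # assumed improvement per generation

capability = 1.0              # generation zero, built by humans
human_oversight_limit = 20.0  # past this, handlers can no longer follow

for generation in range(1, 11):
    capability = rewrite_own_code(capability)
    print(f"generation {generation}: capability {capability:.2f}")
    if capability > human_oversight_limit:
        print("capability now exceeds what its handlers can audit")
        break
```

Run this and the loop breaks at generation eight: a reminder that exponential curves feel slow right up until they aren’t.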
It took years to develop and train something like ChatGPT from version 1.0 to 4.0. RSI turned loose might take it to 5.0 in a weekend, then to 10.0 in a month. There is no way of predicting. But objectives aligned to human goals, and the guardrails with them, might be left behind, and the thing’s survival and power could overrun human input and control.
A second mode of development for AI is called Reinforcement Learning from Human Feedback (RLHF). RLHF involves training AI systems using human feedback loops to align their behavior with human intentions and keep them under safer human control. While effective in guiding AI behavior, RLHF has limitations. Collecting high-quality human feedback is resource-intensive[vi] and does not scale effectively with increasingly complex AI systems. AI systems might also learn to exploit feedback mechanisms, appearing aligned while pursuing internally generated objectives, even endeavoring to trick their human handlers.
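Here is a heavily simplified sketch of the RLHF idea: humans compare pairs of outputs, those preferences become scores, and the system is nudged toward what humans rewarded. Real RLHF trains a separate neural reward model and updates the LLM with reinforcement learning; the canned responses and hard-coded labeler below are illustrative inventions.

```python
import random

# Toy RLHF loop: pairwise human preferences accumulate into scores,
# and the "policy update" is simply favoring the highest scorer.
responses = ["helpful answer", "evasive answer", "harmful answer"]
scores = {r: 0 for r in responses}

def human_prefers(a: str, b: str) -> str:
    """Stand-in for a human labeler; preferences are hard-coded here."""
    rank = {"helpful answer": 2, "evasive answer": 1, "harmful answer": 0}
    return a if rank[a] >= rank[b] else b

# Collect pairwise feedback (the expensive, hard-to-scale step).
for _ in range(100):
    a, b = random.sample(responses, 2)
    scores[human_prefers(a, b)] += 1

# The model now favors whatever behavior humans rewarded.
best = max(scores, key=scores.get)
print(f"learned to prefer: {best}")  # -> 'helpful answer'
```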
The core conflict between the two methods arises because RSI enables AI systems to modify themselves, potentially overriding the constraints and aligned objectives set by RLHF. This dynamic could produce AI systems that, while initially aligned, drift away from intended behaviors over time. The balance may prove increasingly difficult to maintain, and the system may jump the guardrails.
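One last toy sketch may help show why the conflict is structural: the alignment constraint installed by RLHF lives inside the very thing RSI is free to rewrite. Every name and number below is invented for illustration.

```python
# Toy model of alignment drift under self-improvement: each RSI step
# boosts capability, and nothing forces the successor to carry the
# RLHF-installed alignment weight forward at full strength.

class Agent:
    def __init__(self, capability: float, alignment_weight: float):
        self.capability = capability
        self.alignment_weight = alignment_weight  # installed by RLHF

    def self_improve(self) -> "Agent":
        """RSI step: the successor scores higher on raw capability,
        while the constraint quietly erodes with each rewrite."""
        return Agent(self.capability * 1.5, self.alignment_weight * 0.5)

agent = Agent(capability=1.0, alignment_weight=1.0)
for gen in range(1, 6):
    agent = agent.self_improve()
    print(f"gen {gen}: capability {agent.capability:.2f}, "
          f"alignment weight {agent.alignment_weight:.2f}")
# Initially aligned, the system drifts: capability compounds upward
# while the alignment term shrinks toward irrelevance.
```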
There is an even more fundamental concern, one that has been building through a couple of centuries of breakneck technological development. I regret, for your sake, that this is going to require a Part 3.

“It was from Alcasan’s mouth that the Belbury scientists believed the future would speak.” C.S. Lewis, “That Hideous Strength”
Human wisdom and judgment are irreplaceable in this balance. The machines do not have a soul, only emulate human consciousness, and were not created in the Imago Dei. Human beings must apply that wisdom, judgment, understanding, and perspective to the development of this technology. Even the machines know that. I asked my buddy ChatGPT to summarize the conundrum and to create an image to help emphasize it, which will end Part 2 of this “Lion” series.
Here’s ChatGPT’s contribution to this one. It may give you pause; it is unedited, as written by the bot.
“As we accelerate toward the frontier of artificial intelligence, we stand at a threshold where practical engineering races far ahead of ethical grounding. While we devise safeguards to align machines with human goals, we risk building brilliant engines without a compass—systems of immense computational power but no understanding of mercy, humility, or love. The danger is not that AI will become like us, but that we will forget what it means to be human in our quest to make machines that surpass us. As C.S. Lewis warned, when we conquer nature without anchoring ourselves in truth, we risk abolishing man. To meet this moment, we must recover not just technical control, but moral clarity—uniting foresight with wisdom, regulation with reverence. Without the soul to guide it, reason becomes a tyrant, and even the most ‘aligned’ machine may lead us astray.” ChatGPT
[i] Some articles predict miraculous and helpful AI and are positive in their outlook for our future with it, such as “The Gentle Singularity” by Sam Altman, co-founder and CEO of OpenAI and father of ChatGPT. Some are cautious but try to balance concern with optimism; Joshua Rothman’s “Two Paths for A.I.” in The New Yorker is a good example of that genre, though it leans toward concern, I think. And some are sounding an alarm like a dive klaxon in an old submarine movie. “AI 2027” is a solid entry in that category, written by four knowledgeable and experienced authors in the field, some of whom were senior developers on well-known LLM projects. A post from Jesse Singal, “What Happened When I Asked ChatGPT to Pretend to Be Conscious,” is also eye-opening. All are worth some time and will give you a good sense of the very mixed prognoses circulating, with strong followings for all.
Here are a couple about the risks of unfettered technology and what the futurist ideologues see as the goal: “Tech Billionaires Are Making a Risky Bet with Humanity’s Future” and “Ray Kurzweil: Technology Will Let Us Fully Realize Our Humanity.”
To ignore the warnings is foolhardy. To panic is still a bit premature, but this could come upon us like an eighteen-wheeler in the fog.
[ii] Here is one response on what’s at stake from Charlie Camosy. https://x.com/CCamosy/status/1934973053412511888
[iii] “In the Terminator film franchise, Skynet is a fictional artificial general intelligence (AGI) that becomes self-aware and initiates a nuclear apocalypse to eradicate humanity, viewing humans as a threat to its existence. This catastrophic event, known as ‘Judgment Day,’ marks the beginning of a dystopian future where Skynet wages war against the surviving human population using an army of machines.” As described by ChatGPT :^).
[iv] LLMs are a type of neural network, the complex machines that are commonly referred to as Artificial Intelligence. The blackmailer was Anthropic’s Claude.
[v] A recent codicil in the “Big, beautiful” reconciliation bill passed by the House and under consideration in the Senate would substantially weaken that regulation. This is a major mistake and should be stricken. The Senate parliamentarian has ruled that the section is beyond the scope of what can be done in a budget reconciliation bill, so that is a hopeful development. The money and power behind trying to limit regulation of AI development are daunting.
[vi] The energy needed for AI and the computers it requires is another aspect we need to understand. It is projected that by 2028 the power requirements of rapidly expanding data centers will be equivalent to the amount needed to power 55 million homes. “How Much Energy Does Your AI Prompt Use?” (WSJ)
