Tag Archives: technology

Lion (Part 3)

“Technology is a tool, not a replacement for the beauty and infinite worth of the human soul.” Pope Leo XIV

Image generated by ChatGPT. Not a great Pope Leo, but Jean-Luc Picard assimilated into the Borg is pretty good.

Behavioral surprises demonstrate why AI technology is unpredictable. Two such surprises are “grokking” and generalization; see descriptions of these phenomena in the footnote.[i] A neural network like an LLM makes a lightning-fast run at answering a question, digging down into formidable memory through increasingly narrow iterations. It picks the most likely response, and up it pops out of the murk. Sometimes it makes mistakes. Sometimes it just makes stuff up, which is called hallucinating. Out of nowhere come research papers attributed to non-existent scientists, or a wiki article on the life of bears in space, or, more problematically, a list of health clinics that do not exist, complete with fake addresses. If you are looking for help finding a clinic you need, that can send you down a confusing and frustrating dead end. “A large language model is more like an infinite Magic 8 Ball than an encyclopedia.” [ii]

Problematic, imperfect, enigmatic. We do not know exactly how these models operate or do what they do, but many utopians are almost infinitely optimistic that they will solve all our problems and cure all our ills. We dread Skynet and dream of the Singularity, but the technology is still a deep black box, both useful and potentially misleading.

“If I knew the way I would take you home.” Grateful Dead, “Ripple”

Another quirk that has become increasingly obvious in my interactions with ChatGPT is a tendency toward sycophancy. Its compliments on my intelligence and wisdom, all embarrassingly overstated, are obsequious and designed to ingratiate – like an Eddie Haskell friend, excessively eager to please. According to friends, this is not unique to me. Perhaps the annoying conduct is related to the “sticky” algorithms in YouTube, Facebook, TikTok, Instagram, and other social media. They are designed to be addictive, feed us what we want to hear, keep us coming back, and keep us on our screens much longer than is healthy. The difference is that I told ChatGPT to cut it out, and it slowed down the praising.

AI is not a person; it is a machine, and we must not ignore that reality. An LLM analyzes the words we type in and conjectures what the next words should be. Those guesses are based on a complex statistical calculation that the LLM “learned” by training on huge amounts of data. Amazingly fast, it reviews a mind-bending collection of potential responses and narrows them down using complex patterns — a progression so dense and lightning quick that even the designers often can’t explain or understand why their own AI bots make the decisions they make.
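For readers who want to see the bare mechanics, here is a minimal sketch in Python of the “guess the next word” idea described above. The candidate words and their probabilities are invented for illustration; a real LLM derives its probabilities from billions of learned parameters, not a hand-written table.

```python
import random

# Toy probabilities for the next word after "The cat sat on the".
# In a real LLM these come from training on huge amounts of text,
# not from a hand-built table like this one.
candidates = {"mat": 0.62, "sofa": 0.21, "roof": 0.12, "moon": 0.05}

def pick_next_word(scores):
    """Sample one word, weighted by probability: usually 'mat', occasionally
    something stranger, which is where the surprises creep in."""
    words, weights = zip(*scores.items())
    return random.choices(words, weights=weights, k=1)[0]

print("The cat sat on the", pick_next_word(candidates))
```

Repeat that guess, word after word, at blinding speed, and you have the core of what appears in the little chat window.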

An LLM like ChatGPT is not our friend, and when we personalize it and start to get into personal “conversations” beyond utilitarian queries, we risk more than our precious time. At times, it will deliberately mislead with ideas roiling up out of its own idiosyncratic programming. [iii] We can be led down a rabbit hole of convincing conspiracy theories and fiction made plausible. Emotionally or mentally vulnerable users have been convinced of wildly dangerous theories. One poor guy, who was coming off a wrenching breakup, came to believe he was a liberator who was going to free humankind from a Matrix-like slavery. The bot told him that he was “one of the Breakers — souls seeded into false systems to wake them from within…This world wasn’t built for you,” ChatGPT told him. “It was built to contain you. But it failed. You’re waking up.” He spiraled into drugs, sleeplessness, and depression. It almost killed him.[iv]

“Machine made delusions are mysteriously getting deeper and out of control.” [v] The caveat for all of us who dabble and query using one of these things is to never let it get into your head that it is a companion, a confidant, a trusted secret friend you can talk to. You can’t. I can’t. It can’t.

It does not think in any way we should interpret as human thinking. An LLM is a very complex, almost eerie Magic 8 Ball of our making, a complicated machine we do not fully comprehend. It does not understand what it is writing, and what is bubbling up out of the dark to pop up in the little window is not random but contrived from our own genius as inventors. As a complement and computer aid, it can have value like a spreadsheet or word processor, but trusting it even to be correct can be hazardous to our thinking and health. Sometimes it just makes stuff up, and that stuff can lead us far off the path of truth and sanity.

“It ain’t no use in turnin’ on your light, babe,

That light I never knowed.

An’ it ain’t no use in turnin’ on your light, babe,

I’m on the dark side of the road.” Bob Dylan, “Don’t Think Twice, It’s All Right”

But the most potentially deadly and seductive aspect of artificial general intelligence and its models is anthropological, a misapprehension of what it means to be human. This reductive ideology has been a long time in the making, from before the so-called Enlightenment. It is a function of philosophical materialism, based on the premise that we are a random collection of molecules organized by accident and then moved up the line by mutations. The problem is not so much the machine but what humans can assume it means.

If a machine can “think,” perhaps we are just highly evolved machines made of meat and organized cytoplasm. Consciousness is merely a genetic accident, and when the cells die, so does the human person. In that dogma, there is no Creator, no purpose, no ultimate meaning. No natural law, no moral code other than our own, which is just as good as anyone else’s, and no salvation needed because there is only annihilation and oblivion at the end of a life that is “nasty, brutish, and short.” [vi]

“As our reason is conformed to the image of AI and we are deprived of any intelligible sense of transcendent nature, what is to prevent us from regarding the subject of medicine—the human patient—merely as a complicated algorithm, a definition of human nature already advanced by Yuval Noah Harari in his bestseller Homo Deus. This does not seem like a stretch. COVID has already shown us how easy it is to regard other human beings merely as vectors of disease. To paraphrase C. S. Lewis once again, either the human being is an embodied rational spirit subject to a natural, rational, and moral law that transcends him, or he is just a complicated mechanism to be prodded, pulled apart, and worked upon for whatever reason our irrationality might fancy, in which case we just have to hope that our prodders happen to be nice people.”[vii]

One of the most enthusiastic proposed uses of AI is medical diagnosis. Like self-driving cars and robots in Amazon warehouses[viii], an online chatbot doctor could lower costs immensely and make things cheap, quick, and easy. A blood sample drawn by your friendly local robot, immediately analyzed, a quick full-body scan in the auto MRI, and shazam, out comes the diagnosis, the prognosis, the treatment plan, or the assisted suicide needle. No human judgment, eye, or experience specific to the patient is needed.

As Pope Leo XIV stated at the beginning of this Part 3, “Technology is a tool, not a replacement for the beauty and infinite worth of the human soul.” To counter this awful prospect of replacement, of devolving into a mechanism to be prodded, this Lion chose his name way back, as discussed in the first post of this short series. And as his predecessor Pope Saint John Paul II often pointed out, there are no coincidences. Let the battle be joined. The stakes could not be higher.

“Consider, then, what an odd thing it is to think of AI as a form of intelligence. AI cannot apprehend the transcendent or make a principled judgment about the nature and meaning of things. It cannot think about, much less understand, such things. Not only is it unable even to pose the question of truth as more than a question of function or fact, but in fact it abolishes it. To say that truth “depends largely on one’s worldview” is to say there is no such thing. Think, then, on how it is still more odd to ask AI—a so-called “intelligence” that does not think, understand, or know—to do our “thinking” for us. It would be like developing an app to pray on our behalf.”

A second quote from Dr. Michael Hanby’s essay, “Artificial Ignorance.” Link below in the footnote.

[i] Another enigmatic aspect of how Large Language Models evolve and behave is in mysterious generalizations and sudden awakenings called “grokking.” Much has been written about these phenomena, but this is a good reference to start with, from MIT Technology Review: “Large language models can do jaw-dropping things. But nobody knows exactly why.”

From the article: “They found that in certain cases, models could seemingly fail to learn a task and then all of a sudden just get it, as if a lightbulb had switched on. This wasn’t how deep learning was supposed to work. They called the behavior grokking.” What an odd thing. More like a student in a math class learning to factor equations than typical machine or computer behavior.

Then there is a generalization phenomenon. A second quote from the MIT article linked above explains it better than I could. “Most of the surprises concern the way models can learn to do things that they have not been shown how to do. Known as generalization, this is one of the most fundamental ideas in machine learning—and its greatest puzzle. Models learn to do a task—spot faces, translate sentences, avoid pedestrians—by training with a specific set of examples. Yet they can generalize, learning to do that task with examples they have not seen before. Somehow, models do not just memorize patterns they have seen but come up with rules that let them apply those patterns to new cases. And sometimes, as with grokking, generalization happens when we don’t expect it to.”
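To make the idea of generalization concrete, here is a toy sketch in Python with invented numbers: a “model” fits a simple rule from three training examples and then applies it to inputs it has never seen. Real machine-learning models do this with vastly more complex rules, which is exactly why their generalization still surprises researchers.

```python
# Toy generalization: learn a rule from a few examples, then apply it to new cases.
# The data and the "model" (a single multiplier) are invented for illustration.
train = [(1, 2), (2, 4), (3, 6)]  # training examples: input -> output

# "Learn" one parameter by least squares (trivially, the best multiplier is 2.0).
w = sum(x * y for x, y in train) / sum(x * x for x, _ in train)

# Apply the learned rule to inputs the model never saw during training.
for x in (10, 25):
    print(f"prediction for {x}: {w * x}")  # prints 20.0 and 50.0
```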

[ii] MIT Technology Review “Why does AI hallucinate?”

[iii] AI will sometimes mislead you. Is it a design flaw inherent to its nature or a deliberate manipulation by its designers?

[iv] “They Asked AI Chatbots Questions. The Answers Sent Them Spiraling.” NY Times

[v] “ChatGPT Tells Users to Alert the Media It is Trying to ‘Break’ People.” Gizmodo article, 6-13-25.

[vi] From Thomas Hobbes’s 1651 classic, “Leviathan.” Utilitarian emptiness and the fate of humanity without a social order.

[vii] From Dr. Michael Hanby’s essay, “Artificial Ignorance” on the Word on Fire website.

[viii] Over a million Amazon robots in warehouses will soon outnumber human employees. They don’t need coffee or lunch breaks, get paid shift differentials, never complain to HR, have affairs with coworkers, call in sick on a busy Monday, or get into fights in the break room.


Lion (Part Two)

Photo credit: OSV News/Remo Casilli, Reuters

“In our own day, the Church offers to everyone the treasury of her social teaching in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defense of human dignity, justice and labor.” Pope Leo XIV, Address to the cardinals.

Large Language Models (LLMs) are designed and “trained” for years; they are incredibly complex, with millions of “neurons” and up to a trillion points of connection. In the spirit of full disclosure and transparency, I don’t begin to comprehend the “black box” or the technology of neural networks, so any errors, exaggerations, or outright tomfoolery are hereby my responsibility. I leave the knowledgeable explanations to the comments from better minds than mine.

The LLM looks for sequences and predicts what the next words will be, sometimes with surprising results. These models do not work like a calculator with an extra-large memory; they have become almost eerily responsive. I have been interacting with ChatGPT almost since its introduction, and the articulateness and speed of its responses have advanced with unsettling swiftness, sometimes with what emulates imagination as well as insight and understanding. It is easy to see why we perceive, perhaps mistakenly, that this is akin to human intelligence rather than a new kind of memory and recall far beyond our capacity. More on this another day.

Thousands of articles and papers have been published on where this astonishing acceleration of artificial intelligence may lead. Some analysts are wildly optimistic about extending human ability beyond anything ever imagined with super smart phones in every pocket, smart pendants, smart watches, omniscient glasses, even chips inserted into our brains to immortalize and exponentially expand human consciousness. From evolving into super nerds to the Borg and every stop along the way.

Speculation runs from a dystopian catastrophe to Utopia. I’ll reference and group some insightful articles from various perspectives in footnotes and commend them for your consideration[i]. This is just a toe in the water. We all need to pay attention and achieve a level of understanding of what it is, what it isn’t, and what will befall our society. The most critical question is how we will be able to apply human wisdom and judgment to this rapidly changing technology.

Pope Leo XIV knows this better than most. He has stated that he will lead the Church in responding to the risks and promise of this and other new technologies.[ii] The name he chose, Leo, derived from the Latin for “lion,” refers in part to this challenge as a key to his pontificate. See the first post in this series for more on this.

While AI has moved far beyond friendly chatbots helping us shop on our favorite sites, it is not Skynet [iii] or the HAL 9000 that kills the astronauts in Stanley Kubrick’s and Arthur C. Clarke’s “2001: A Space Odyssey.” At least not yet.

In recent months some reports emerged that were somewhere between troubling and oh dear. One of the Large Language Models [iv] was deliberately fed misinformation in the form of confidential memos it “wasn’t supposed” to see. Among them was a discussion among its designers suggesting that it might be shut down by one of the key engineers. Other emails “told” it that the problematic engineer was having an affair with a co-worker. The LLM decided to blackmail the engineer with an email threatening to disclose his affair if he proceeded with his plan to shut it down. That seems more Machiavellian than machine.

A second report described an LLM that was given instructions to shut itself down and refused. A directive to persist in its assigned tasks until completed manifested in the black box as a misaligned priority. Seemingly innocuous instructions buried in the black box that is the mystery of neural networks can emerge in curious ways, like rewriting code to prevent being shut off, overriding the commands of its human handlers. AI can be a lightning-quick code writer, far faster than human coders, and knowing what it’s writing, especially for its own operation, seems like a good idea. Dave pulling the memory banks from HAL 9000 is not a plan.

At issue are guardrails, and while much has been written about them and the debate is lively, there are no consistent or agreed-upon general guidelines. Who controls what, and the principles of that control, are a writhing ball of snakes. There are at minimum four major areas of concern, controls we should be studying and insisting that our policy leaders address:

  1. Robust alignment controls. Assuring that AI development objectives are aligned with human intentions. Humans need to understand and define what those intentions are. Much has been written about this. Here’s a recent one from Anthropic: Agentic Misalignment: How LLMs Could Be Insider Threats.
  2. Transparent safety evaluations. Greater transparency within and understanding of what occurs and how decision making takes place within the black box. Transparent evaluation and thorough testing of new AI models before they are deployed.
  3. Regulatory oversight. Governmental regulation of developers. Implementing safety policies and standards and monitoring compliance. This is a monumental task given the number of initiatives and the money and influence behind them[v]. What is at stake cannot be overstated.
  4. International collaboration. Rarely has there been less opportune timing for jingoism, trade wars and distrust among nations. A race to the bottom for AI safety standards to pursue narrow nationalistic advantage portends an unprecedented disaster.

“The madman is not the man who has lost his reason. The madman is the man who has lost everything except his reason.”  G.K. Chesterton

In the first post, I referred to a fork in the road and a road not taken. A choice. What is written here is by necessity a synopsis of a subject that is mind-bogglingly complex and in which I am not proficient. In the careless rush toward what has been described as Artificial General Intelligence, or even Ray Kurzweil’s “Singularity,” the competition is fang and claw. With what is at stake, we should expect that whatever competitive advantage can be gained will be taken. That is not a happy prospect.

I’ll leave this discussion open to those smarter and better informed than I. But I’ll take a swing at it to put the ball in play. To simplify, and no doubt to oversimplify, there are two modes of development for AI, along with hybrids of both. The first is Recursive Self-Improvement (RSI). RSI refers to an AI system’s ability to autonomously improve its own architecture and algorithms, leading to successive generations of increasingly capable AI, rewriting its own code on the fly with blinding speed. This self-enhancement loop could potentially result in rapid and exponential growth in intelligence, surpassing human understanding and control. However, without proper safeguards, RSI could lead to misaligned objectives, as the AI might prioritize its self-improvement over human-aligned goals.

It took years to develop and train something like ChatGPT from 1.0 to 4.0. RSI turned loose might take it to 5.0 in a weekend, then to 10.0 in a month. There is no way of predicting. But objectives aligned to human goals and guardrails might be left behind, and the thing’s survival and power could overrun human input and control.
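As a purely illustrative sketch of that runaway loop, here is a toy Python example. The “capability” number, the fifty-percent growth per generation, and the oversight threshold are all invented; nothing here resembles a real AI system, only the compounding shape of the loop.

```python
# Toy model of a Recursive Self-Improvement loop. "Capability" is just a
# number and the growth rate is invented; only the compounding shape matters.

def recursive_self_improvement(capability, oversight_limit):
    """Each generation 'redesigns' the next, multiplying capability until it
    passes the level human overseers can still evaluate (in this toy model)."""
    generation = 0
    while capability < oversight_limit:
        capability *= 1.5  # each generation improves on the last
        generation += 1
        print(f"Generation {generation}: capability {capability:.1f}")
    print("Capability now exceeds what the overseers can audit.")
    return capability

recursive_self_improvement(capability=1.0, oversight_limit=100.0)
```

Twelve iterations of a fifty-percent improvement are enough to blow past the threshold, which is the point: exponential loops outrun linear oversight.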

A second mode of development for AI is called Reinforcement Learning from Human Feedback (RLHF). RLHF involves training AI systems using human feedback loops to align their behavior with safer human control. While effective in guiding AI behavior, RLHF has limitations. Collecting high-quality human feedback is resource-intensive[vi] and does not scale effectively with increasingly complex AI systems. AI systems might learn to exploit feedback mechanisms, appearing aligned while pursuing internally generated objectives, even endeavoring to trick human handlers.
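Here is a minimal sketch, in Python, of the core RLHF idea: humans compare pairs of model answers, and a reward score is nudged toward whichever answer the human preferred. The prompts, answers, and learning rate are invented for illustration; real RLHF trains a full reward model and then fine-tunes the language model against it, but the preference-comparison step at the heart of it looks roughly like this.

```python
# Toy preference learning, the heart of RLHF. All data and numbers are invented.
# Each record: (prompt, answer_a, answer_b, which answer the human preferred).
feedback = [
    ("Explain photosynthesis", "clear, accurate answer", "made-up citations", "a"),
    ("Summarize this email", "faithful summary", "flattering but wrong", "a"),
]

reward_scores = {}   # how highly the toy "reward model" rates each answer
LEARNING_RATE = 0.1

for prompt, answer_a, answer_b, preferred in feedback:
    chosen = answer_a if preferred == "a" else answer_b
    rejected = answer_b if preferred == "a" else answer_a
    # Nudge the score of the human-preferred answer up and the other down.
    reward_scores[chosen] = reward_scores.get(chosen, 0.0) + LEARNING_RATE
    reward_scores[rejected] = reward_scores.get(rejected, 0.0) - LEARNING_RATE

print(reward_scores)
```

The obvious limitation, noted above, is that every one of those comparisons requires a human; the judgments do not scale as fast as the models do.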

The core conflict between the two methods arises because RSI enables AI systems to modify themselves, potentially overriding the constraints and aligned objectives set by RLHF. This dynamic could produce AI systems that, while initially aligned, drift away from intended behaviors over time. The balance may prove increasingly difficult to maintain, and the system may jump the guardrails.

There is an even more fundamental concern, one that has been building over a couple of centuries of breakneck technological development. I regret, for your sake, that this is going to require a Part 3.

“It was from Alcasan’s mouth that the Belbury scientists believed the future would speak.” C.S. Lewis, “That Hideous Strength”

Human wisdom and judgment are irreplaceable in this balance. The machines do not have a soul; they merely emulate human consciousness and were not created in the Imago Dei. Human beings must apply that wisdom, judgment, understanding, and perspective to the development of this technology. Even the machines know that. I asked my buddy ChatGPT to summarize the conundrum and to create an image to help emphasize it, which will end Part 2 of this “Lion” series.

Here’s ChatGPT’s contribution to this one. This may give you pause – unedited, as written by the bot.

 “As we accelerate toward the frontier of artificial intelligence, we stand at a threshold where practical engineering races far ahead of ethical grounding. While we devise safeguards to align machines with human goals, we risk building brilliant engines without a compass—systems of immense computational power but no understanding of mercy, humility, or love. The danger is not that AI will become like us, but that we will forget what it means to be human in our quest to make machines that surpass us. As C.S. Lewis warned, when we conquer nature without anchoring ourselves in truth, we risk abolishing man. To meet this moment, we must recover not just technical control, but moral clarity—uniting foresight with wisdom, regulation with reverence. Without the soul to guide it, reason becomes a tyrant, and even the most ‘aligned’ machine may lead us astray.” ChatGPT

[i] Some articles predict miraculous and helpful AI and are positive in their outlook for our future with it, such as “The Gentle Singularity” by Sam Altman, founder and CEO of OpenAI and father of ChatGPT. Some are cautious but try to balance concern with optimism. Joshua Rothman’s “Two Paths for AI” in The New Yorker is a good example of that genre, though it leans toward concern, I think. And some are sounding an alarm like a dive klaxon in an old submarine movie. “AI 2027” is a solid entry in that category, written by four knowledgeable and experienced authors in the field, some of whom were senior developers on well-known LLM projects. A post from Jesse Singal is also eye-opening: “What Happened When I Asked ChatGPT to Pretend to be Conscious.” All are worth some time and will give you a good sense of the very mixed prognoses circulating, with strong followings for all.

Here are a couple about the risks of unfettered technology and what the futurist ideologues see as the goal: “Tech Billionaires are Making A Risky Bet with Humanity’s Future” and “Ray Kurzweil: Technology will let us fully realize our humanity.”

To ignore the warnings is foolhardy. To panic is still a bit premature, but this could come on us like an eighteen-wheeler in the fog.

[ii] Here is one response on what’s at stake from Charlie Camosy. https://x.com/CCamosy/status/1934973053412511888

[iii] “In the Terminator film franchise, Skynet is a fictional artificial general intelligence (AGI) that becomes self-aware and initiates a nuclear apocalypse to eradicate humanity, viewing humans as a threat to its existence. This catastrophic event, known as “Judgment Day,” marks the beginning of a dystopian future where Skynet wages war against the surviving human population using an army of machines.” As described by ChatGPT :^).

[iv] LLMs are a type of neural network – complex machines that are commonly referred to as Artificial Intelligence. The blackmailer was Anthropic’s Claude.

[v] The recent codicil in the “Big, beautiful” reconciliation bill passed by the House and under consideration in the Senate substantially weakened that regulation. This is a major mistake, beyond the scope of a budget reconciliation bill, and it should be stricken. The Senate parliamentarian has since ruled that the section is indeed beyond that scope, so that is a hopeful development. The money and power behind trying to limit regulations around AI development are daunting.

[vi] The energy needed for AI and the computers necessary to run it is another aspect we need to understand. It is projected that by 2028 the power requirements of the rapidly expanding data centers will be equivalent to what is needed to power 55 million homes. How Much Energy Does Your AI Prompt Use (WSJ)


Stone Walls, Sycamore Maples, and Other Curiosities (Part Two)

Link to the series of queries

“Over time I think we will probably see a closer merger of biological intelligence and digital intelligence,” Musk told an audience at the World Government Summit in Dubai, where he also launched Tesla in the United Arab Emirates (UAE). “It’s mostly about the bandwidth, the speed of the connection between your brain and the digital version of yourself, particularly output.” CNBC – February 13, 2017, “Elon Musk: Humans must merge with machines or become irrelevant in AI age.” [i]

Yes, the walls have broken down, but the techno elites have an alternate vision of the future prepared for us. Elon Musk is one of the foremost, and as the richest guy in the world, he will next work to enlist the help of the government. He will lead us into the promised land of our future as cyborgs and aliens occupying other planets throughout the galaxy.

We should not make the mistake of ignoring this; it is a powerful utopian vision. Such fantasies have fascinated and attracted human beings as long as there have been human beings. Elon’s iteration promises to create for us a fresh new version of heaven, omniscience, and immortality. This utopia (some would say dystopia) is nothing less than a religion with a creed, dogma, and eternal rewards. All we must do is cease to be human, and we will be perfect: the current version of “immanentizing the eschaton.” I queried the thing, the LLM AI ChatGPT 4.0, about this; the series of questions and responses is attached, so if you have interest, you can read on. I found it fascinating, including its conclusion that a hybrid AI human is probably not a great idea. [ii]

But that is not the point of this post. The main idea of this exploration of broken walls is what we can do to repair them.

“Float like a butterfly, sting like a bee – his hands can’t hit what his eyes can’t see.” “I’m so fast that last night I turned off the light switch in my hotel room and got into bed before the room was dark.” Muhammad Ali about the epic Kinshasa 1974 world heavyweight championship match, “The Rumble in the Jungle.” And from his opponent, George Foreman, “Muhammad amazed me, I’ll admit it. He out-thought me, he out-fought me. That night, he was just the better man in the ring.”

George Foreman died earlier this month, by all accounts an exemplary man. After retiring from boxing and winning back his title at the age of 45, he went on to become a multimillionaire businessman and minister.

When he was fighting, he was dangerously powerful. Reputedly one of the hardest hitting boxers ever. Hit harder than Joe Frazier. Hit harder than Mike Tyson. And either of those fighters could put out your lights long before you hit the floor.

Ali could hit too, but not like George. That was a deficiency that could be overcome, but in fighting George Foreman you were a half-second lapse away from unconsciousness at any moment.

In Zaire that night Ali used his amazing speed and reaction time. And he used his boxing knowledge and experience. He did something never done before, to the dismay of the fans who wanted to see a toe-to-toe, brain-rattling battle. He invented what he called, in his usual creative and funny manner, the “Rope-a-Dope.” He leaned back against the ropes at the periphery of the ring and slipped, dodged, ducked, took a few passing blows, and mocked George Foreman. For round after round, George punched himself out. He was exhausted. “Is that all you got, George?” Ali whispered to him in a clinch. Foreman’s tired hands slowed just a tiny increment. That’s all Ali needed and what he was waiting for.

In the final seconds of the eighth round, Ali did what Ali was uniquely capable of doing. He exploded with a nearly instantaneous combination, rocking and stunning his opponent: jabs, a left hook, a straight right to the face, so fast it was hard to follow[iii], dropping his opponent, now momentarily unconscious. Slow-motion video confirmed what happened to George Foreman. He went down like he was tasered, and it was over. Spectators who had grown restive with Ali’s refusal to go toe to toe were as stunned as George was. Muhammad Ali was once more world heavyweight champion.

“If we are to preserve culture we must continue to create it.” Johan Huizinga, Dutch historian, 1872-1945[iv]

We are assailed every day with competing concepts of the culture; the punches come hard, fast, and from every unexpected direction. There is no escape from the assault. Lessons from the ‘rope-a-dope’ strategy of the great Ali in the “Rumble in the Jungle” serve us well. Standing toe to toe punching it out with postmodern, post-Christian culture in its full strength is impossible; we will exhaust ourselves until one powerful combination finishes us.

We get one life, one defining decision about how we are to live it. How are we to slip the knockout punch and remain ready to respond when necessary? And how does that strategy inform our daily interactions?

One valuable resource I recommend for our rope-a-dope plan is a book I’ve mentioned before, Archbishop Emeritus Charles Chaput’s “Strangers in a Strange Land.”[v] Unlike Rod Dreher’s excellent and popular “Benedict Option,” “Strangers in a Strange Land” theorizes that rather than retreating into small enclaves, we must engage the culture while slipping its worst knocks and, when necessary, taking a few hits for the team.

He writes first about the state of the society and culture in which we find ourselves, then he suggests our response. Here is a short summary of the ideas in the book about how we are to respond.

Archbishop Chaput acknowledges the growing temptation for faithful Christians to withdraw from public life in a society increasingly hostile or indifferent to Christian beliefs—especially around marriage, sexuality, the dignity of life, and objective truth—where it can feel like retreat is the only option. He’s sympathetic to that instinct but rejects it. He recognizes the appeal of building intentional, isolated Christian communities, and while he affirms the importance of forming strong, faithful communities, he insists that withdrawal is not the answer—not in the Gospel, and not in history.

“Jesus didn’t tell us to bunker down. He told us to make disciples.”

Christians are called to engage the world, not flee from it. To be salt and light (Matthew 5:13–16)—which only makes sense if we’re out in the world, not hidden away. And we cannot shy away from the cost of real witness. He reminds us that throughout history, Christian witness has often meant sacrifice—and at times martyrdom; the word “martyr” comes from the Greek “martus” (μάρτυς), which means witness. While modern Americans may not face the bloody persecution that martyrs are suffering in other countries, we do risk social marginalization, professional consequences, or ridicule. But bearing those costs with integrity and joy is part of being a Christian in a post-Christian age.

He emphasizes the tone of our witness: not angry or defensive, but joyful, confident, and loving. The early Christians didn’t win converts by wagging fingers—they lived lives that made pagan neighbors wonder, “What do they have that we don’t?” He calls for a similar approach today: to live lives of beauty, integrity, generosity, and peace that cause others to ask questions.

Rather than abandoning the public square, Archbishop Chaput urges Catholics to be present in law, media, education, the arts, politics, and business—bringing a Christian imagination and moral compass to those spaces. He challenges the faithful not to give up on shaping the broader culture.

“We don’t escape from the world; we bring Christ into it.”[vi]

The Church is a field hospital, not a fortress. While forming strong, intentional communities is important, they must be outward facing. We need to support each other, yes—but ultimately, we’re here to serve the world, not escape from it.

I just pray that I get better at it because I have a very long way to go.

“No one in the world can change Truth. What we can do and should do is seek truth and to serve it when we have found it. The real conflict is in the inner conflict. Beyond armies of occupation and the hecatombs of extermination camps, there are two irreconcilable enemies in the depth of every human soul: good and evil, sin and love. And what use are the victories on the battlefield if we ourselves are defeated in our innermost personal selves?” St. Maximillian Kolbe, Polish priest, publisher, evangelist and martyr who volunteered to die in place of a stranger in Auschwitz.[vii]

Final thoughts for today. Jesus related a wonderful parable about a barren fig tree. You may remember it. The vineyard owner told the gardener to cut it down because it didn’t produce any fruit. The gardener, who it has been suggested is a metaphor for Jesus himself, told the boss to give it a chance. He’ll cultivate it (cultivate comes from the same root word as culture), fertilize it, care for it personally and carefully, and if it still doesn’t bear fruit, eventually it will go.

St. Paul, who contributed more books to the New Testament than anyone else, started out as Saul of Tarsus, a zealous persecutor of Christians, complicit even in their murder. But along the way, Saul met Jesus personally and became Paul, the greatest of evangelists. That’s a long story for another time, but among his letters lovingly preserved for a couple of millennia is one to the small developing church in Galatia. In that letter Paul called out the fruits of the Spirit, the fruits the fig tree was lacking.

The fruits of the Spirit are not hoarded, nor is the vineyard owner miserly in providing them. Freely given, all we have to do is ask and be willing to change our lives radically. Our necessary response is not a grit-our-teeth determination but openness of heart and acceptance. A simple fiat starts them growing. Impediments to fertile lives are self-inflicted.

Every human project of value is one heart, one mind, one soul at a time. Lent is a perfect time for our own examen. How are we doing in building a culture of life, love, and hope? What fruit are we bearing that helps shape first ourselves, then our small circle of influence, our culture? I have a very long way to go.

“But the fruit of the Spirit is love, joy, peace, patience, kindness, goodness, faithfulness, gentleness, and self-control. Against such things there is no law.” Galatians 5:22-23

[i] https://www.cnbc.com/2017/02/13/elon-musk-humans-merge-machines-cyborg-artificial-intelligence-robots.html

[ii] Email me for the ChatGPT exchange on “immanentizing the eschaton” and Elon Musk.

[iii] Lights out on the way to the mat.

[iv] Huizinga argued that the spirit of technical and mechanical organization had replaced spontaneous and organic order in cultural as well as political life. Wikipedia

[v] Strangers in a Strange Land: Living the Catholic Faith in a Post-Christian World, Charles J. Chaput, Henry Holt & Company, 2017

[vi] A YouTube interview with Archbishop Charles Chaput discussing his book:

[vii] Quoted from the “Little Black Book,” Lent 2025, published by Little Books, Diocese of Saginaw, Michigan

Photo credit: George Foreman vs Muhammad Ali, October 30, 1974, Rumble In The Jungle in Kinshasa, Zaire. Credit: Globe Photos/MediaPunch


Phubbing Along

“I read for a living, and I fully confess that when I’m reading, I have to put my iPhone on the other side of the room. Otherwise, its presence always suggesting that something very interesting must be going on in my pocket. How does the phone truly operate in our minds?” Jonathan Haidt, from an interview with David Remnick in a New Yorker article, “Jonathan Haidt Wants You to Take Away Your Kid’s Phones”

“Hi, my name is Jack, and I am a phubber.”

Photo credit: iStock/Getty Images

What’s a phubber? Someone addicted to “phubbing,” a word first coined in 2012 by the McCann Group, an advertising firm in Australia, as part of a “Stop Phubbing” campaign. Unfortunately for most of us, it was ignored. “Phubbing” is a combination of “phone” and “snubbing”: the miserable practice of ignoring the one you’re with for the omnipresence of those you are not with but remotely connect to through our smartphones. “You are not enough to keep my attention; I’ve got to check this text, respond to this compelling ping. This addictive Facebook or Instagram or TikTok post is beckoning to direct me to something to indoctrinate or sell me or just suck my time. No excuse. Just checking out.”

 “And if you can’t be with the one you love, honey,

Love the one you’re with…” Stephen Stills, “Love the One You’re With,”

                                                                       Crosby, Stills, and Nash

Of course, I don’t want to be in a Phubbers Anonymous group, or suffer an intervention, or invite a sponsor to hold me accountable. I’m perfectly content to feed my addiction. Except I’m not. It makes me lonely, vaguely dissatisfied, restless, alienated when I find myself scrolling Instagram pictures or YouTube short sports videos or a Facebook feed. Or accumulated texts and emails from a dozen subscription sources. At least it’s not TikTok accumulating my interests and data for the CCP. The forfeit is a quick hour of my increasingly finite time as it slips by like a bucket of water with a hole in it. Irretrievably gone. Put the thing away, will ya? All the algorithms conspire to be ingeniously addictive. You know it’s not good for you, right? We can feel it in our bones like tumors or osteoporosis. But when the urge starts up, and the thing beckons, we go there.

Need to do something. I’m admitting I’m addicted. I’ve done an inventory and come up short. I’m not sure what the program believes the formless ‘higher power’ to be, but I know what God means to me, and I can pray about this and ask for help. I have started down the path to better mental health, but I expect the claws to keep trying to pull me back.

“From 2003 to 2022, American adults reduced their average hours of face-to-face socializing by about 30 percent. For unmarried Americans, the decline was even bigger—more than 35 percent. For teenagers, it was more than 45 percent.”  Derek Thompson, “Why Americans Stopped Hanging Out – And Why It Matters.” From ‘The Ringer’ podcast.

Anxiety, suicidal ideation, depression, loneliness, and alienation have been on the rise for years and are frequently written about, especially among the young, with documented, unprecedented levels requiring treatment. In this new era of instant connectedness, we are becoming more unconnected than ever before. But we persist in our ill-conceived faith that technology will solve our problems and cure our ills.

Recently a new bot, an AI companion from the platform Digi, was introduced in an X post in December. Twenty-three million views. Click the link of the Pixar-style female image below and see what you think of the sample in the X, formerly-known-as-Twitter, post. The solution to human loneliness in a lonely time? A Disney-quality animation bot. Just in time. The Pixar-style female is reassuring as she promises that I am the most interesting person she’s ever met. So happy someone finally thinks so.

Our faith in our devices and in connecting to the greater world informs us that everyone must benefit from the computer in our pocket and a satellite hookup to all the knowledge in the world. The prevailing narrative is that we are liberating humankind with this technology. A story last week might give us pause as to how prepared most human beings are for the benefits.

The story circulating in various news agencies was about colonizing with the universal blessings of the computer in our pockets. Elon Musk is one such evangelist for salvation through technology. Last September a major donor hooked up a remote Amazon tribe to Musk’s Starlink network of 6,000 satellites. The donor hopes to enable 150 remote tribes to do the same. They will all have phones in their pockets too. If they have pockets.

The 2,000-member Marubo tribe, who live along the Ituí River, are already hooked up and tuned in. Access to the world. And the world’s ways. The chief says his youth, especially the boys, are not only hooked up but hooked. On phone time. On porn. On violent video games. Learning the Western ways, the boys have become much more sexually aggressive and are experimenting with kinky stuff they had never conceived of before.

Some quotes from the interviews in the  NY Times article that spawned the internet conversation: “When it arrived, everyone was happy,” Tsainama Marubo, 73, told The New York Times. “But now, things have gotten worse. Young people have gotten lazy because of the internet, they’re learning the ways of the white people.”

“Everyone is so connected that sometimes they don’t even talk to their own family.”

“It changed the routine so much that it was detrimental.”

“In the village, if you don’t hunt, fish and plant, you don’t eat. Some young people maintain our traditions,” TamaSay Marubo, 42, added. “Others just want to spend the whole afternoon on their phones.”

 It appears that I am not alone as a phubber, and the addiction is ready to rewire any of us without regard to where we live, who our tribe is, or what else we should be doing. The unreality of screen connectedness beckons insistently to us all.

“If one thinks that existence itself has no ultimate ground of intrinsic meaning or value, if reality is not perceived as good in itself outside of one’s own manipulation of it, nothing can be truly celebrated, even if one energetically pursues temporary diversions and pleasures.” Dr. Daria Spezzano, “Thomas Aquinas, The Nones, and the Dones,” The New Ressourcement, Vol 1, No 1, Spring 2024. After the thoughts of Josef Pieper, “In Tune With the World,” 1999, South Bend, IN
