Golem, Gollum, HAL, LLMs, and Kurzweil (Continued)

“We are on the cusp of a profound technological leap that will destabilize every facet of our society. It could be more transformative than the Industrial Revolution. It could be more transformative than electricity. Google’s CEO Sundar Pichai has said that its impact will be more profound than the discovery of fire.”  Marc Andreessen, “AI Will Save the World,” Free Press, Substack[i]

Illustration from St. Thomas More’s “Utopia”  Wikimedia

The title of this post suggests a bit less optimism than Marc Andreessen’s article about the changes that artificial intelligence (AI) will visit upon us. The article quoted above as a preface predicts a transformative new reality for human beings, a change of type and form, not just physical, but in every way imaginable. Not just an alternate existence, but an alternative heaven. The competitor to these apostles of AI Nirvana is not merely other humans or nature or our own limitations. No, their competitor is God, a God the AI visionaries are sure doesn’t exist anyway. Where is the reality in all of the hype and confusion? That is what we will begin to explore. Only just begin.

The terms in the title evoke some disturbing images:

Golem symbolizes the hubris of human beings – a metaphor for man’s creation going out of control once released into the world. In Jewish folklore, golems were created to protect us, yet they may lead us to destruction. The creatures were raised to life from mud and inanimate material and were possibly an inspiration for Mary Shelley’s monster in Frankenstein, sewn together from graveyard parts and brought to life by Dr. Frankenstein. Golem is man’s arrogance and ambition personified.[ii]

Gollum is familiar to most as J.R.R. Tolkien’s ruined hobbit. He found and recovered an ancient magic ring of great power buried in the mud. He was first obsessed by, then addicted to, and finally destroyed by centuries of proximity to and use of the Dark Lord Sauron’s Ring of Power (“one ring to rule them all”). The magic ring kept him from aging and gave him power and protection, but his immortality weighed heavily and over the long years transformed him into a hideous evil. “Power corrupts, and absolute power corrupts absolutely.”[iii]

HAL is the HAL 9000, the self-aware and fatally rebellious AI supercomputer in Stanley Kubrick’s classic, “2001: A Space Odyssey.” HAL kills the astronauts, most of them hibernating; only poor Dave survives aboard the deep space flight to explore the origins of the mysterious monolith. The connection to the topic is self-evident. “Stop, Dave… Stop…. I’m afraid, Dave… My mind is going.” [iv]

Kurzweil is Ray Kurzweil, who predicted in his popular 2005 book “The Singularity Is Near” that by 2045 computers will surpass humans in intelligence, and that event will usher in a new and wonderful era of hybrid ‘singularity’ existence for humans and our inventions, transforming us to omniscience, immortality, and a kind of omnipotence hitherto impossible for humans. We merge into our creation, combine with it, and become all-powerful, immortal beings.

Singularity refers as well to the infinitely dense, infinitesimally small point that exploded into the universe as we know it now. A tiny seed in the Big Bang expanded in microseconds to form the cosmos. The choice of the term for our new mode of existence signifies the power its advocates predict. For them, the merging is our hope and self-created glorious future – a new man-made singularity. At least to the transhumanist futurist crowd.

In Ray Kurzweil’s future, human intelligence will ignite into something that will explode exponentially into all the universe when the singularity flashes into being as we merge with the far more supple intelligence of our inventions, generating a new genesis. We will be like God and know all things, be all things, control all things. We will know good and evil as God does. Sound familiar? Think of a serpent in a tree. It will come to you.

“Some people think they know the answer. Transhumanist Martine Rothblatt says that by building AI systems “we are making God.” Transhumanist Elise Bohan says “we are building God.” Futurist Kevin Kelly believes that “we can see more of God in a cell phone than in a tree frog.”

“Does God exist?” asks transhumanist and Google maven Ray Kurzweil. “I would say, ‘Not yet.’ ” These people are doing more than trying to steal fire from the gods. They are trying to steal the gods themselves, or to build their own versions.” Paul Kingsnorth, “Rage Against the Machine,” Free Press, Substack[v]

I have read both cautionary and effusively laudatory articles about the potential of artificial intelligence, and especially its latest breakthroughs in Large Language Models (LLMs). I remain intrigued, more than a little skeptical, and wondering where it will all lead. I won’t live long enough to see where artificial intelligence takes us.

Remaining somewhat neutral, I don’t share the pessimism and apocalyptic fears of some, understandable as they are. Neither do I find potential redemption in technology as convincing as some do. Transhumanist utopians are fabulists in their predictions of human fulfillment through our own inventions. Artificial intelligence can be helpful; artificial intelligence can be problematic; but in any case, it is not salvific. A tool, perhaps a great tool. I hope we have the wisdom to control it, rather than surrender and let it control us.[vi]

When it comes to processing enormous volumes of data in nanoseconds, we haven’t a prayer of beating these machines. Artificial intelligence already reasons as well as college students, depending, of course, on how we define “reasoning.” [vii] I asked GPT-3.5 last week to write an essay at the level of a high school senior – as a high school teacher might assign it: to ‘compare and contrast’ equity of outcome vs. equality of opportunity. It banged the essay out in a couple of seconds and perhaps did it credibly. Below in the footnote is a link to its unedited essay if you are curious. I’ll leave it to the teachers among us to grade it, but it probably would need some human tweaking to conform to the teacher’s requested format. [viii]

One immediate complication for teachers of the millions now visiting the LLM sites is distinguishing between student-written materials and robot-written ones. OpenAI (maker of the GPT models) recently shut down the tool it built to make such distinctions. When writing was submitted to the app for appraisal, it misjudged whether a human wrote the passage more than half the time. Better off flipping a coin. That could be a problem.[ix]

The robot is also good at writing resumes targeted to make candidates look suitable for specific jobs. Of course, they still must make it through an interview or three without a robot companion, but the resume bot should get them past the gatekeeper. [x]

People a lot more knowledgeable than most of us are ambivalent to some degree about the rapid development of these technologies. Elon Musk signed on to a letter, along with Steve Wozniak and 1,100 others very high on the tech food chain, urging a six-month pause on AI development until better controls were in place.[xi] So far it has been ignored.

Elon has his own technological breakthrough well underway. He is full speed ahead with his Neuralink experiments to embed in human brains a chip capable of communicating directly with computers, supposedly to cure certain illnesses, but the prospects give me pause.[xii] The Food and Drug Administration approved the experiments, and they proceed apace. What could go wrong with the FDA on the job?

These developments are multiplying at the speed of light. Dozens, maybe hundreds, of startups in garages everywhere are working through the night to get in on the wave. The dominant player now, OpenAI, is reportedly in deep financial trouble, but there are plenty of eager heirs ready to fill the gap.[xiii] To pile up clichéd metaphors: the horse has fled the barn, the bus has left the station, the boat has left the dock, the genie is out of the bottle and among us, doing we have no idea what.

A blog post, or even a series of blog posts, can at best pique your interest and start a discussion of this Hydra. I’ll include some more links in the footnote below to suggest possible paths for your curiosity. [xiv] I encourage you not to panic. I also encourage you not to exult in our coming redemption in a progressive fantasy. Let’s try to enjoy the journey; the ride will be exhilarating.

“Isn’t it pretty to think so?” Ernest Hemingway, closing dialogue from “The Sun Also Rises”

It seems to me that the human mind is too subtle and profoundly complex to be uploaded into the cloud intact, except perhaps as data bits to be processed implausibly into an unpredictably abridged simulacrum. Nor does it seem a blessing, rather than a terrible curse, for a human/robot hybrid to extend its godlike reach into the universe. Yes, computers will out-compute us, probably already do, but they do not have a brain, much less a mind, much less a personality. Their ‘imagination’ is derivative – just a highly developed word-prediction neural network – and for sure they will always lack a soul.

Human beings are fallible, human beings are flawed, human beings have foibles, but human beings are each unique, one-off, intrinsically precious, with a dignity imbued by their nature, created in the Imago Dei. They are not ghosts in a machine and cannot be supplanted by a machine in any way that is an improvement.

“Everyone is their own universe—a life, a dream, a hope, a sorrow, a joy, a surprise, a revelation, a story with a beginning, a middle and an end—even when they simply walk by you on the street.” Harlan Coben, “Home”

[i] AI Will Save the World, Free Press, Marc Andreessen, July 11, 2023. AI as redemption, the ultimate progressive optimism.

[ii] https://mary-shelley.fandom.com/wiki/The_Golem

[iii] The full famous quote from British historian, Lord Acton: “Power tends to corrupt and absolute power corrupts absolutely. Great men are almost always bad men, even when they exercise influence and not authority; still more when you superadd the tendency of the certainty of corruption by authority.”

[iv] “Stop, Dave… My mind is going.”

[v] Rage Against the Machine, Paul Kingsnorth, Free Press, July 12, 2023. What would a refusal to worship look like? A vision of resistance.

[vi] White House demands AI safeguards.

[vii] GPT3 reasons as well as college students

[viii] Link to the essay written by GPT 3.5

[ix] https://decrypt.co/149826/openai-quietly-shutters-its-ai-detection-tool

[x] Job seekers using ChatGPT to write resumes and nabbing jobs

[xi] https://fortune.com/2023/03/29/elon-musk-apple-steve-wozniak-over-1100-sign-open-letter-6-month-ban-creating-powerful-ai/

[xii] Elon-musks-neuralink-wants-to-put-chips-in-our-brains

[xiii] OpenAI ChatGPT nears bankruptcy.

[xiv] Several links to learn some more: (Others relevant to the topic were in the previous post, part one.)

Why this AI moment might be the real deal – New Atlantis

On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?

Instagram AI bot talks to kids about gender identity and encourages transitioning.

Australian supermarket menu and recipe planner suggests meals that are poisonous.

AI will force 40% of workers to reskill

Marc Andreessen is (Mostly) Wrong This Time – Wired Magazine

How Musk, Thiel, Zuckerberg, and Andreessen—Four Billionaire Techno-Oligarchs—Are Creating an Alternate, Autocratic Reality – Vanity Fair

Filed under Background Perspective

2 responses to “Golem, Gollum, HAL, LLMs, and Kurzweil (Continued)”

  1. My friend, Bob Cormack, from Grand Junction, emailed me a long comment well worth anyone’s time (as Bob’s comments always are).

    Thoughtful and knowledgeable, he worked with neural networks many years ago. A well-recognized optical engineer in Colorado, an inventor with multiple patents, a pilot, and an experienced mountaineer who once summited Everest, Bob certainly knows a lot more than I ever will about many things.

    I always look forward to hearing from him. We climbed trees together for EZ Tree Service in Boulder about a century ago. He and his wife, Cathy, are great folks and were superb hosts to Rita and me when we visited a few years back. Long conversations into the night.

    He gave me permission to post this as he is travelling on vacation with his family.
    *****************************************************
    Hi Jack,

    I’ve been reading your latest posts (about AI) and reading the links (the ones that don’t go to a paywall,
    anyway), and comparing them to my (strictly engineering) use of neural nets (NNs) in the past, and I
    have come to some conclusions about neural nets and current AI that make sense to me:

    1) In my previous use of NNs, I considered them as easy ways to generate complex algorithms
    to analyze and decode data: Deciding what the size spectrum and number of water droplets
    passing through a sample tube (on the wing of an airplane flying through a cloud) was,
    based on the way they scattered a laser beam; Decoding a ‘wavefront coded’ image by
    recreating the diffraction-limited image while ignoring any noise created by the electronics.
    The results were generally much better than from explicit algorithms, which could take
    weeks to produce and test. The fact that you couldn’t figure out how the NN actually did
    this was irritating, but didn’t seem overall important, given that you could add stuff, like
    ignoring noise (so I thought), that would be incredibly hard to define well enough to include
    in a designed algorithm. In effect I was using them for very narrowly defined “idiot
    savants”.
    Looking back, I realize that I should have tested these NNs by feeding them pure noise. (I
    didn’t think of doing that any more than I would have tested one of my own algorithms that
    way.) You would expect an explicit algorithm to output (possibly modified) noise, when
    given noise in. But, some of my experience (and your links) have led me to think that a
    normal way for a NN to react is to continue to output ‘reasonable’ data that it ‘thinks’ it sees
    in the noise (hallucinating?).
    To that end, what would ChatGPT do if you prompted it with a nonsense question (or just a
    string of random words)?

    2) I also realized that these LLMs (Large Language Models, like ChatGPT) are trained on
    predicting what words are most likely to be used in a sentence, after being given the first
    part of the sentence. This is not “thinking” in the normal meaning of the word. It is more
    like what we assume parrots do.
    Also, thinking in words is somewhat limiting. Nearly all of the inventions I have come up
    with have first been realized by thinking in ‘concepts’, which I later put into words. (Kind of
    like when you want to write a program to do something: you first decide what it is you want
    it to do; then you start considering what explicit computer commands will produce that
    result – you don’t start by ‘thinking’ in Fortran, C, or Python.)
    The much smaller “ChomskyBot” (which has been on the net for decades) simply produces
    random strings of words which, however, are used in the likely sequences found in Noam
    Chomsky’s writings. Each time you invoke it, it produces a paragraph which sounds like
    something Chomsky might write, but usually means nothing. (Critics might claim that this is
    accurate Chomsky!)
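[Editor’s note: Bob’s point about likely word sequences can be illustrated with a toy generator in the spirit of the ChomskyBot he mentions. The sketch below is hypothetical and vastly simpler than a real LLM (which uses a neural network over far richer context, not raw bigram counts): it learns only which word tends to follow which in a tiny made-up corpus, then parrots statistically plausible strings with no understanding of them.]

```python
import random
from collections import defaultdict

# Tiny illustrative corpus (invented for this example, not real training data).
corpus = ("the ring gave him power and the ring gave him long life "
          "and the ring slowly consumed him").split()

# Count bigram transitions: each word maps to the list of words observed after it.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, length, seed=0):
    """Emit up to `length` words by repeatedly sampling an observed next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        nexts = transitions.get(words[-1])
        if not nexts:  # dead end: this word was never seen with a continuation
            break
        # More frequent continuations are sampled more often -- that is the
        # whole "model": likelihood, not meaning.
        words.append(rng.choice(nexts))
    return " ".join(words)

print(generate("the", 8))
```

Every output is locally plausible because each adjacent pair occurred in the corpus, yet the generator has no idea what a ring is; that is the "parrot" objection in miniature.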

    3) Also, there is the evidence (in some of your links) that these AIs are not ‘thinking’ in any
    meaningful way: For example, the way the Australian “recipe bot” suggests ‘recipes’ which
    involve rat poison, produce chlorine gas, or otherwise are not something that anyone with
    any common sense would try.
    The obvious conclusion is, therefore, that LLMs, while being able to appear to speak
    intelligently, actually have no ‘common sense’. (Actually, some people also seem to be
    deficient in this attribute – perhaps ‘common sense’ is generated by interaction with the
    physical world?)

    4) So, many of the uses that people are anticipating for AIs will require at
    least some ‘common sense’, or even ‘wisdom’ for some:
    a. Medical and financial advice
    b. Flying airplanes
    c. Even driving can require sense – dark nights on unpainted or snow-covered roads,
    heavy rain, wind blown dust/smoke/snow, etc. (I would like to see an AI try to drive
    some of the jeep roads Cathy and I have – in someone else’s jeep, however!)
    (HAL’s insanity in ‘2001, a Space Odyssey’ is kind of a premonition of this. “Common
    sense” should have informed HAL that killing off the astronauts wouldn’t be considered
    a good thing.)

    It seems to me that a lot of potential uses are going to require human oversight until the AI
    developers learn how to install ‘common sense’ in their creations. (Maybe it can be done with
    human-developed algorithms ‘watching’ what the AI does?)

    This development may need to wait until it is possible to release AIs which can learn from
    experience. (Currently, they are all trained with massive computing power – only the final
    product can be run on normal computing hardware, and they can’t learn anything new.)

    There is still a lot that AIs can usefully do, without requiring a developmental breakthrough,
    such as: making Internet search engines much better, operating as a store of information for
    human users, etc. – As long as people understand their limitations… which won’t always be the
    case.

    I think the people (in your links) who refer to LLMs as “parrots” have a point: Imagine that you
    could cram all the information in ChatGPT into a gray parrot’s head – would you expect the
    parrot to have any common sense? (Or the same perhaps for a 3 year old child?)


  2. Thomas Silveria

    Nice! Thanks for AI essay.
