AI, Before It Was Cool

Sermon by Rabbi Joel M. Mosbacher on 1st Day Rosh Hashanah, 5784
September 16, 2023

Sermon Text:

Hayom harat ha’olam.

Today is the birthday of the world. According to Jewish tradition, today is the anniversary of the day when God said, “Let there be light,” and there was light.

On this day, God began to practice self-contraction, to make room for light and darkness, for sun and moon and stars, for all the matter that would ever exist in the universe, and ultimately, for the creation of human beings endowed with free will.

In Jewish cosmology, today is the spiritual birthday of the big bang.

And ever since, it seems, we humans have been trying to live b’tzelem Elohim – in the image of God – attempting to create other beings, the way God created us.

In Jewish folklore, there are many stories about a human-made creature called a golem[1] – an artificially created sentient being supernaturally endowed with life.

Probably the most famous golem story is the Golem of Prague; maybe you’ve even visited the Golem at the Alt-Neu Shul there. If not, perhaps we can go there together sometime.

The story goes that in the 16th century, the great Rabbi Yehuda Loew, known as the Maharal, built a golem named Joseph to help the Jews of Prague protect their ghetto, which was often attacked by Christians.

According to the legend, the golem stood at the entrance to the ghetto all night as a sentinel, and during the day, Joseph helped the Jewish residents with their labor.

Joseph—like most golems in these narratives—could become animate only when he held within his mouth a paper containing God’s name.

The Maharal was unsure whether to consider Joseph a machine or a child of God, but to be on the safe side, he decided to take the paper with God’s name out of the golem’s mouth each Friday, as a way of allowing it to have a shabbat, a day of rest.

One Friday, however, the rabbi forgot to take out the paper, and the golem turned violent, wreaking havoc on the entire community and, in some versions of the story, killing the Maharal. This ending echoes the theme of hubris that appears in so many myths about the human attempt to create life from inanimate material.

But another iteration of the story has a more hopeful ending.

In this version, Rabbi Loew manages to retrieve the paper from Joseph’s mouth and sets the golem to rest in the attic of the synagogue. Rabbi Loew then composes a kabbalistic rhyme that will awaken the golem at the end of time.

For many centuries, it is said, boys from the Eastern European Jewish families descended from the Maharal were told, at their bar mitzvah, the secret rhyme that could awaken the golem.

I’m here to tell you this morning that Jews were into artificial intelligence before it was cool. And I’m here to tell you that we Jews also saw the risks of artificial intelligence long before modern prognosticators did.

The Golem could be a cautionary tale about such technology.

But if it is a cautionary tale, what is the caution we should learn from it, and what, exactly, should we do about it?

For a moment, let’s go from this 16th-century story of the promise and risk involved when humans create new forms of intelligence, to our modern context.

In an article he wrote in 1950, Alan Turing, the famous English mathematician, computer scientist, philosopher, and World War II codebreaker, proposed a test of the possibilities of artificial intelligence. He wondered whether a computer could convince a person that it was human in a five-minute text conversation.

Turing predicted that by the year 2000, computers would be so advanced that they would succeed in such a test 30% of the time.

Needless to say, subsequent generations of computer scientists have taken up Turing’s challenge, and the history of these efforts is fascinating.

Among the first of what have come to be known as “chatbots” was ELIZA[2], a program created in the 1960s that was modeled on Rogerian therapy.

ELIZA would simply reflect back what the human interacting with it had said. When a person typed a statement into the chatbot, ELIZA might respond, “How does that make you feel?”

If a human expressed deep sadness, ELIZA would respond, “I’m sorry to hear that you’re depressed,” or, “Tell me more about your family.”

And ELIZA was remarkably successful by Alan Turing’s measure!

For many of the people chatting with ELIZA, those kinds of reflective responses were enough to convince them that ELIZA was actually a human. The experience of being listened to was apparently so rare that folks were satisfied just to have the sense that ELIZA was paying attention to them.

Which, when you think about it, was a rather damning statement about all the other humans in their lives.

As we move forward in time to 1989, a student we’ll call Drake at a university here in the United States logged on to his computer to chat with a fellow computer science student at University College Dublin, in Ireland.

Over the next 75 minutes, Drake unknowingly carried on a conversation not with his friend Mark in Dublin, but with a chatbot called MGonz.

The conversation started with MGonz insulting Drake in their very first exchange, and it just got worse and worse from there, with the human participant and the computer cursing at each other and challenging each other’s manhood at every opportunity.

When Mark Humphrys, the computer science student in Dublin who designed MGonz, looked at the conversation log the next morning, he was astonished to discover that MGonz had passed the Turing test.

At the same time, Humphrys was embarrassed to publish the results.

It’s not really humanity’s finest hour when you think about it!

Humphrys realized something in that moment: it’s pretty easy, it turns out, for a computer to pass the Turing test. The computer only needs to keep hurling insults, and the human on the other end will be too angry to think straight.

In 1997 came the chatbot called Converse.

Converse would completely ignore you – unless, that is, you wanted to talk about the one thing it wanted to discuss, which, for Converse, was a fiery rant about the scandals then engulfing Bill and Hillary Clinton.

As long as that’s what you wanted to talk about, too, Converse was a compelling conversation partner. But ask Converse what it thought about, say, “Good Will Hunting,” the hot movie that year, and it would indignantly insist on talking only about the Clintons.

Now, at the time, many humans also wanted to talk about those scandals, and so, you could say that Converse passed the Turing test, too.

Finally, in 2014, at a competition organized by the University of Reading in England, a chatbot called Eugene Goostman was claimed to have passed the Turing test.

But when you read the transcripts of the conversations, you notice that no matter how hard the human half of the exchange tries, the conversation never gets very deep. Eugene keeps changing the subject, avoids talking about himself, and uses a bit of sass to avoid talking about the weather and his guinea pig’s gender.

But it did just enough to pass the Turing test.

Apparently, it’s not that hard for a computer to create a conversation that seems human, because, if we’re honest, so much of human conversation is shallow small talk, thoughtless, canned responses, mindless abuse, or worse.

When you think about it, the Turing test isn’t actually only a test of the humanness of the chatbot; it’s a test of the humanity of the human, too.

So it’s fair to say that in 1989, by Turing’s measure, MGonz passed the test. But it’s equally true to say that Drake failed it.

If we want to set computers a real challenge, we need to do better as humans.

If we insist, for example, on not revealing anything about ourselves, on being only passive listeners in conversations with others, then ELIZA, the chatbot from the 1960s, would do a convincing job of replacing us.

And why not?

If we insist on only talking about ourselves and our own interests, and taking little to no interest in others and what they care about, then Converse from 1997 would make a very passable version of us.

Many chatbots work very hard to keep conversations as routine and uninteresting as possible.

If we tend to only talk with others about the weather and the traffic and the slowness of the elevator, those chatbots would do a good imitation of people, too.

And if we tend to just lash out at people, yelling and insulting others at every turn, if MGonz were to replace us, perhaps no human would be the wiser.

Let me ask you this.

Do you know someone like ELIZA – someone who never reveals anything about themselves?

Do you know someone like Converse, who bullies their way into only talking about what they want to talk about?

Have you met someone like MGonz, who just seems angry all the time, and who takes that anger out on you?

In each case, I can say that I know a person like that.

If I’m honest on this holiday, at times, I think I’m like each of those chatbots.

The reality is that each of these forms of artificial intelligence, which highlight the worst parts of authentic human conversation, can teach us a lot about how to be the best humans we can be.

In his brilliant book about an annual competition around artificial conversation, The Most Human Human,[3] author Brian Christian points out that one of the things that makes MGonz, for example, so successful is that insults need no context and no history.

In a deep and meaningful human conversation, ideas and emotions build.

People who are really listening to one another refer back to anecdotes from earlier in the conversation, or even earlier in their relationship with one another. Humans can show they’ve listened to what came before and remembered it.

Chatbots find that very hard.

But a chatbot like MGonz doesn’t need to bother, because insults are easy to hurl at someone, and require no relationship at all.

Here’s the brutal truth, as we enter this season of reflection and repentance: when we humans are lustful or angry, or when we aren’t really paying attention to the people around us, we just aren’t very complicated.

When we act like that, we’re easily imitated – dare I say, easily replaced by a computer.

Some people already pay for subscriptions to chatbots like Replika[4], which is advertised as “the artificial intelligence for anyone who wants a friend, with no judgment, drama, or social anxiety involved.”

What kind of fun – what kind of real relationship – is that?

Listen.

Artificial intelligence holds the promise of technological breakthroughs in a wide range of fields from medicine to disaster recovery, from business to the tracking of future pandemics, and so much more.

Artificial intelligence might already be capable of teaching you a foreign language, monitoring you for signs of dementia, or even providing therapy. Who knows what’s possible?

And, if science fiction is to be believed, artificial intelligence used maliciously could eventually do so much harm in our world.

In truth, artificial intelligence, like all technology, is morally neutral. It can be used to create or destroy, just like a hammer or a match.

I know many people are worked up about ChatGPT and the future of artificial intelligence – what it means for the future of the workplace, whether computers will replace teachers, doctors, attorneys, therapists, writers and screenwriters, clergy, and on and on.

Someone who caught wind of this sermon asked whether we would let robots be members of the synagogue someday, and whether I would conduct a wedding between a human and a robot.

These and other questions and concerns are profound, and we should grapple with them from a moral and religious perspective.

We’ve put the paper in the mouth of this golem, and I don’t think we can really take it back out. We should be cautious with this technology, as with all technology, and we should put controls on it where necessary.

But here’s what I know: even if someone invented the perfect artificial intelligence tomorrow, there is an even more fundamental matter that is most certainly within the control of each of us in this new year.

Hayom harat ha’olam.

You could say that this is the anniversary of the day that God put the scroll in our mouths.

With the rise of artificial intelligence, will we each strive to be the most human human we can be? Or will we continue to act in such a way that, if we were replaced with robots that could convince people that they were human, no one would notice?

In the competition that Christian writes about in his book, chatbots compete against humans – each trying to persuade human judges that they’re a person. The flesh-and-blood competitors are often advised in advance, “Just be yourself. After all, you are human.”

But as Brian Christian writes, this is a pretty complacent approach.

As we all know, humans are often very disappointing conversation partners. We all take the Turing test every day of our lives, often multiple times a day. And, all too often, we fail.

In a way, some of the human-to-computer exchanges I read about as I researched this summer sounded familiar to me; they sounded a lot like the terrible things you might read on Twitter or X or whatever we’re calling it this week.

They sounded like the harsh and personal attacks you might see in the comments on YouTube.

They sounded, to be honest, like some things I hear people I know say about people who vote differently than they do.

That may be why everywhere you look these days, you see and hear comments online and in person that might remind you of MGonz or Converse or ELIZA. Some of them are bots. Some of them are humans. And without context, you might not be able to tell which is which.

But let me be clear: that’s not because artificial intelligence has gotten to be so brilliant; it’s because we humans can be so base and so coarse – so uninteresting and uninterested in one another.

This is the season when we’re meant to ask ourselves hard questions. So let’s ask:

In daily conversation with family, friends, and the barista who makes our pumpkin spice latte just the way we like it, are we saying things that are actually interesting, empathetic, and sensitive to the situation we’re in, whatever that is?

Or do we just pretend to listen, all the while looking to escape the conversation, looking to return the conversation to what we want to discuss, or looking at our phones rather than paying attention to the live and endlessly intriguing human beings right in front of us?

On a regular basis, do we take interest in the lives and sacred stories of others, or are we only consumed with our own?

Are we open to the possibility that the person on the elevator with us, or the person sitting across from us on the subway, or the checkout person at the grocery store, or the person who always sits across the aisle from us at the synagogue, might be a three-dimensional being with an interesting story and life history that might just be worth getting to know?

Or do we limit our interactions with the cab driver or the plumber in our building or the stranger in the elevator to “huh, some weather we’re having today?”

When we see a headline or a tweet or hear a soundbite we disagree with, do we read the attached article with curiosity, wondering if there might be another human being behind the words, or do we just quickly and angrily retweet or share it with others together with our own angry retort?

Do we assume good intent when we disagree with another person, and are we curious enough to explore why they might feel or think the way they do? Or do we jump from a fragment of information right to conclusions about the totality of who they are as a person, dismissing their humanity or launching right into expletives and insults?

Next time you’re at the oneg, you could make small talk about the cookies, and how they used to be better. Or you could say to the person next to you who is also complaining about the cookies, “On a scale of one to ten, how are you doing today? I’m about a seven, and I wonder what it would take to get to nine.”

What if you asked someone after services today, “What keeps you up at night these days?” Or, “Have you read any good books lately?”

Talking and listening to another human being in real and meaningful ways is risky, for sure. We might be disinclined to take those kinds of risks in our human interactions, because there’s a chance that the other person will cut us off, ignore us, or even be mean.

But at the same time, such risky conversations are much more likely to go somewhere interesting – so much more likely to be deeply human and relational. They’re so much more likely to help us see the vulnerability and goodness and righteousness that are possible in other people. They’re so much more likely to get us to truly see that every person is created in the image of the divine.

What if, rather than worrying so much about whether a computer is ever going to fool humans 100% of the time, we strove to have conversations and sustained deep relationships that a computer never could?

What if we turned our cultural obsession with the Turing test on its head and asked, are we human beings passing the test?

In the first century CE, Rabbi Hillel is quoted in Pirke Avot[5] as saying, “In a place with no humans, we must strive to be human.”

Being truly, fully, vulnerably human in a world consumed by technology can be challenging, for sure. But if we practice, if we remember who we can be at our best, if we strive for it, as Hillel urged us to do 2000 years ago, we can continue to be genuinely, uniquely human, even in a world like ours.

My friends, in this new year, with bots like ChatGPT taking on the Turing test and sometimes passing it with flying colors, I’d submit that we should worry less about which artificial intelligence will seem like the most human computer.

Let’s worry more about ensuring that we are the most human humans we can be.

Because at our worst, at our most base, we can lower ourselves to the level of mindless computers.

But at our best, no computer can be better, kinder, more compassionate, more generous, or more open hearted than we can be– if we try.

And please: let’s try.

Shanah tovah!


[1] With thanks to Rabbi Josh Fixler, my friend, teacher, and master of the Golem.

[2] I learned about these chatbots and others from the remarkable book The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive, by Brian Christian.

[3] Brian Christian, The Most Human Human: What Artificial Intelligence Teaches Us About Being Alive (Penguin, 2012).

[4] https://replika.ai/

[5] Mishnah Avot 2:5.