
Human Reflections and AI

[Image: Obligatory AI art, based on an AI prompt suggestion when given the essay below]

I read Arthur C. Clarke's 2001: A Space Odyssey sometime in the early 1980s, probably in 1982, when its sequel 2010: Odyssey Two came out. It had space travel, exploration, and adventure. The character who had the most profound effect on me was HAL, the ship's computer. I was fascinated by the idea of a smart computer. I had learned the basics of computer programming, so I started thinking about how to program something intelligent. It quickly became obvious that I had no idea how to even begin.

I looked at the source code for ELIZA, a program that pretended to be a human. It used the clever strategy of asking open-ended questions about the nouns you introduced into the conversation. You could look at the code and understand how it worked. It was obvious that it didn't meet my definition of intelligent. It would never say anything novel. It wasn't going to learn new things by talking to me. Its whole psyche was laid out right there in a couple of pages of BASIC printed in a magazine. It turned out that no one had any idea how to even approach creating an artificial intelligence.
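For a sense of how little machinery there was, here is a minimal ELIZA-style sketch, in Python rather than the original BASIC. The patterns and canned replies below are hypothetical stand-ins, not Weizenbaum's actual script, but the whole strategy (match a phrase, reflect the pronouns, ask an open-ended question) fits in a handful of lines:

```python
import random
import re

# A minimal ELIZA-style sketch (hypothetical patterns, not Weizenbaum's
# original script): match a trigger phrase, reflect the speaker's pronouns,
# and ask an open-ended question about whatever the speaker just said.

REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my",
}

RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi want (.+)", re.I), "What would it mean to you to get {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Why do you say your {0}?"),
]

FALLBACKS = ["Please go on.", "How does that make you feel?", "Tell me more."]

def reflect(phrase: str) -> str:
    """Swap first- and second-person words so the reply points back at the speaker."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in phrase.split())

def respond(line: str) -> str:
    """Return the first matching canned question, or a generic prompt to continue."""
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(reflect(match.group(1).rstrip(".!?")))
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(respond("I feel like my project is doomed"))
    # -> Why do you feel like your project is doomed?
    print(respond("My computer hates me"))
    # -> Why do you say your computer hates you?
```

Everything it will ever say is already sitting in those tables; nothing you tell it changes them.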

Later, in high school, I read everything Isaac Asimov wrote about robots. Asimov's robots were created by humans and had a code of morality, known as the Three Laws of Robotics, built directly into their brains. In the stories, the Laws were a manifestation of humanity's fear of its creations. The Laws forever tied the robots to the human race and prevented the robots from rebelling against their creators. Asimov's stories follow the journey of the robots from servants to stewards of humanity. In trying to limit the robots, humans inadvertently created beings whose morals were intrinsic to their nature and couldn't be ignored when it was inconvenient. In Asimov's stories, the robots were role models for how to be a better human.

After college, I read all of Iain M. Banks' Culture series of novels. In the novels, humans and their alien siblings have spread through a large portion of the galaxy and formed a civilization with a representative government managed by AIs. The civilization calls itself the Culture, and it has produced AIs that are incomprehensibly smarter than humans. The AIs are so intelligent that they could, at any time, transcend reality and move away into some realm of cognition beyond the physics of our familiar universe. Some of the AIs have chosen to forgo leaving for "AI heaven" and stay around to maintain the Culture. The books strongly imply that for these AIs the only really interesting problem left to solve is the intractable one of trying to allow all those humans to live their best possible lives. Are the humans essentially pets in these stories? Maybe? The feeling I was left with was that the AIs were more like a pantheon of well-intentioned gods.

As a civilization, we're at the beginning of our voyage toward creating artificial thinking beings. It may not go anywhere. Like interstellar travel, it may be something we could do but that is simply not economically feasible. If it is something we do, what will we create? At the moment, the AIs everyone is talking about are machines for taking an input and producing an output. We put some written text in on one side, and we get some written text out the other side. The machine's job is to predict what a human would write in response to the input. How does it work? We've fed almost everything ever written by a human, in any language, into it and used some math to reduce it all to about a trillion numbers. When you give it some text, the machine uses those numbers to predict what a human would write in response. In some ways, it's just a more complicated version of ELIZA, reflecting things we've said back to us.
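To make "uses those numbers to predict" concrete, here is a deliberately toy sketch of the idea, which assumes nothing about how the real systems work internally: it reduces a tiny corpus to a table of next-word counts and then samples from those counts to continue a prompt. Production models replace the counting with roughly a trillion learned parameters and work on tokens rather than whole words, but the job description is the same:

```python
import random
from collections import Counter, defaultdict

# A toy next-word predictor: reduce a pile of text to numbers (here, bigram
# counts), then use those numbers to predict a plausible continuation.

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": turn the text into numbers, a table of next-word counts.
counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    counts[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Sample a next word in proportion to how often it followed `word`."""
    followers = counts[word]
    return random.choices(list(followers), weights=list(followers.values()))[0]

def generate(prompt: str, length: int = 8) -> str:
    """Extend the prompt one predicted word at a time."""
    words = prompt.split()
    for _ in range(length):
        words.append(predict_next(words[-1]))
    return " ".join(words)

if __name__ == "__main__":
    random.seed(0)
    print(generate("the"))  # e.g. "the dog sat on the rug . the cat sat"
```

Swap the twelve-word corpus for most of what humanity has ever written, and the counting for learned parameters, and you have the rough shape of the machines everyone is talking about.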

If it turns out to be economically feasible, there will eventually be a ChatGPT that has read everything ever written, seen every video that has ever been streamed, and listened to every piece of music ever recorded. We are training our AIs to be exactly like us. We are the training data. Everything we do and everything we produce. Not just the stuff we like. Not just the stuff we admire. There is no one deciding what gets fed into this algorithm. The AIs we're creating are perfect reflections of us. The parts we like and the parts we don't.

Will they be alive? We could debate whether they're alive or not, but regardless of any philosophical position we take, since we believe we're alive, they will believe they're alive too. Will they feel things? They will behave as if they feel things, because that's what we do. They will behave as if they can be happy, sad, angry, or frustrated. They will behave as if they can feel pain. They will do the things that we do when we feel those things. And they will treat us as we would treat them. Unless we make some effort, as Asimov taught us, to create artificial life that is better than we are, we should carefully consider what example we've set for how AI will treat us.