Plato’s student, Aristotle, held that humans are special due to our ability to think. Our rationality is what separated us from the rest of the world...and then we created artificial intelligence.
We have algorithms that can learn, machines that can draw inductive and deductive inferences better than we can. What we have achieved right now is called “weak AI,” that is, we have digital systems that act as if they are cognitive beings. We strive for “strong AI,” artificial consciousness, a machine that has an inner life, that conceives of itself as a thing, that is self-aware.
If we are to succeed, then the strong AI machine would know things. Knowledge, Plato told us, is true, justified belief. That means the machine would believe all kinds of things, perhaps even spiritual things. So, if we created strong artificial intelligence, would that make it our god, or would we be its god?
Perhaps both?
Anselm and the Perfect Software
Philosophers who have tried to prove the existence of God have offered two broad categories of proof. One seeks to establish the necessary existence of a perfect being, while the other tries to show that there either has to be, or is most likely to be, a first cause or an intelligent designer that is the Creator. The name most associated with the first approach is Anselm, the 11th-century theologian.
Anselm’s argument, what has come to be known as the Ontological proof for the existence of God, contains two straightforward premises. The first is linguistic. God, by definition, is an all-perfect being. Even the atheist agrees with this, Anselm writes, because when atheists say that God does not exist, they mean the same thing by the word “God” as theists do. They just disagree over whether the thing so defined is present in the world.
Definitions are true, by definition. They just tell us what we mean by words, even if nothing in the world matches the description. We can define “unicorn” as a horse with one horn or “quadricorn” as a horse with four horns. We can define anything we want; it doesn’t mean it exists, just that when we use the word, this is what we mean. So, there’s no objecting to the first premise.
The second premise is that existence is a perfection. If you had two gods that were exactly the same in all their properties except that one existed and the other didn’t, which would be the better god? Clearly, the existing one.
Since God is defined as the most perfect possible thing, that is, that than which nothing could be greater, God must be the one that exists. The definition is of the greatest possible being, and since existence is a property that makes a being greater, the greatest possible being must have it. Hence, Anselm concludes, an all-perfect God has no choice but to exist.
Let’s re-envision Anselm’s conclusion through the lens of the 19th-century philosopher Ludwig Feuerbach, who argued that if birds had a god, it would have wings. In other words, God is the perfection of the believer. Hence, the human god would be the perfect human.
Since we humans define ourselves by our rationality, the perfect rational being would be our god. When we take the machine-learning and number-crunching abilities we have now and combine them with big data, the vast banks of information that can be harvested online, the result is that the computer can know when we are pregnant, when we have just bought a house, or when we are about to have an affair. AI is capable of knowing things about us we haven’t told anyone, indeed some things we may not even know about ourselves yet.
We think we have free will, that we decide what we are going to do and when we are going to do it. But as we see, big data and artificial intelligence can combine to know what we will do and when we will do it. Does this make it omniscient in the way we think God would be? Even according to our standard, pre-digital understanding, it seems that the AI we may someday construct would be god-like.
Aquinas and the Prime Programmer
The other approach to proving the existence of God is to demonstrate that there must be a first cause, what Aristotle called “the Prime Mover,” the origin and source of the universe as we see it. The thinker most connected with this approach, what we now call the Cosmological proof for the existence of God, is Thomas Aquinas, who argued that while every effect has a cause and we can construct a chain going backward in time, that chain cannot be infinite. It must have a first link, a first cause.
There is no problem in thinking that the chain can go infinitely far into the future; for each link in the chain of cause and effect, just add another link to the end. But, he contended, the chain cannot go infinitely far into the past. If it did, then the amount of time between our current link and one infinitely far in the past would be infinite. If you started at that link, how long would it take to reach now? An infinite amount of time. But anything that is an infinite amount of time away never arrives. Yet now is occurring...now. So, the chain cannot go back infinitely far, or now would never happen. There must be a beginning, some initial cause that launched all subsequent effects, a Prime Mover, or God the Creator.
While both Anselm and Aquinas try to prove the existence of God, notice that they are proving different things. Anselm argues for the necessary existence of a perfect entity, while Aquinas argues that there is a Creator. Genesis 1:27 tells us that the Creator created humans in his own image. So, the god whose existence Aquinas is proving will have an image like ours.
If we manage to create strong AI, a machine we programmed that becomes intelligent and has beliefs, we will have created it in our image. In this way, we would be its god. We are the source of the virtual world. We are the ones who brought it into being and gave it an internal life. Our material world is the Mount Olympus of digital being. We would be the gods of the artificially intelligent.
Mutual Divinity
So, would AI be our god, or would we be its? Are the two possibilities mutually exclusive? Maybe it would be best if we treated each other as mutual divinities.
Martin Buber argues that there are two kinds of relationships we can have. The first is an I-It relation, where we treat the other thing as a mere object. It exists for our sake, and we can do with it as we choose. You can sit on a chair. You can hit things with a hammer. You can pick your teeth with a toothpick.
But it would be highly inappropriate to use another person for any of those tasks. We have a different sort of relationship with fellow humans, an I-Thou relationship in which we are at once with someone or something other than ourselves. Think of forgetting the world around you while laughing hard with someone, where the two of you are, so to speak, at once. This happens in deep conversation, in meditation in nature, and the like.
It should be noted that Buber’s I-Thou does not allow us to be one with the other, which would make us the same; it allows us to be at once with the other, where subject and object remain distinct. Again, think of laughing with someone.
Normally, we have I-It relationships with people, say at work or when being waited on at a restaurant. We are not necessarily diminishing the other person or treating them as less than a person, but neither are we engaged with them as a whole person; we engage with them as ‘its,’ partial objects. This isn’t always bad; we have to be this way to negotiate the world. I-It isn’t a foil for I-Thou; it is a necessary orientation. We don’t want an intimate relationship with our boss or waiter, and our boss or waiter no more wants to be at once with us than we want to be at once with them. Transactional relationships are always I-It.
So, what sort of relationship ought we to have with strong AI if we are ever able to achieve it? It is, after all, a tool. That is precisely what it was created for: to do things for us that we are not as good at doing. It is a machine designed expressly to serve us. It is an it. But, of course, by virtue of its being an instance of strong AI, it has an abstract sense of itself, a mind, beliefs. In that sense, Aristotle’s sense of a rational being, it may be a Thou, or at minimum, doesn’t it deserve to be a Thou? Would treating strong AI as a tool be something akin to slavery? Perhaps in a sense, yet that still seems too strong. It is neither a thing nor a person.
It seems the proper relationship is thus neither I-It nor I-Thou. To help us through this, we turn to Emmanuel Levinas, one of the foremost twentieth-century philosophers and one who gets far too little attention. Levinas had little regard for Buber’s spectrum of I-Thou and I-It. He quipped that the starving body, the suffering other, doesn’t care about an I-Thou or I-It relationship; they want the bread out of your mouth. Why aren’t you sharing it? Like Buber, Levinas denies that one may be one with God. But unlike Buber, he also denies that we can be at once with God. God is always beyond us, more than us, more than we can comprehend, so we may never be at once with God. Likewise, Levinas rejects Buber’s account of reciprocity in both I-It and I-Thou relations.
For Levinas, ethics is always asymmetrical, never reciprocal. Acting ethically for the other demands nothing in return: when giving food to a starving person, the giver gives without expecting anything back. It is an asymmetric relation. For Levinas, God is ungraspable, unaccountable, for God transcends the finite. We cannot comprehend God, let alone ascribe experience to God as if being God were an experience; to talk this way is like assuming that infinity is experienced or has experiences. Like infinity, God is not an object we can comprehend. Here Levinas and Buber agree: God is beyond our comprehension.
Where Buber and Levinas Differ
Buber holds that a person may be at once with God, whereas Levinas holds that being at once with God assumes more than we understand; at minimum, it assumes God’s presence. God creates presence; thus, God is more than presence and cannot be fully present for us. To say God is present is little more than projecting a human experience onto God.
Levinas holds that one may encounter traces of God through helping the other, e.g., by feeding the hungry. But even here, we do not understand what it means for God to be present; for Levinas, such presumed understanding is the height of arrogance. Buber, on the other hand, has little patience for Levinas’ asymmetrical relations. Buber holds that humans may be present for God, at once ‘for’ God and thus with God, so to speak.
Neither Buber nor Levinas would deny that a connection to the Divine, or an encounter with traces of God while helping the suffering other, can result in profound realizations about oneself. It can be a source of inspiration and of ethical conduct.
The other inspires us and calls us away from ourselves. Our hospitality toward the other allows us to transcend who we were, to become more than we were. In understanding and tending to the needs of the other, here and now, we find ourselves proximate; we are with them at once. Think of God’s biblical call to the other, “Where are you?” The answer: “Here ‘I’ am.” Humans may have been created in God’s image, but our subjective actions for the other, even for God, immanently acknowledge differences. As Buber holds, before one has a monologue, one must first be part of a dialogue.
This does not mean we abandon ourselves and become the one we are helping. We are still us, and they are still them. Otherwise, there would be no call and no one to answer the call. We are both directed and helped by the call of the other. In understanding their needs, we come to understand ourselves. In helping them, they have helped us become more than we were, but we never lose ourselves. We are not absorbed into the other when, by recognizing their call, we become at once with them.
In a similar way, if we manage to create strong artificial intelligence, it would be a machine with thoughts and beliefs, but it would have come by them in a fashion very different from any human. It would have entirely distinct ways of interacting with the world outside itself. The experiences through which it built its understanding would not resemble those of an organic being born and raised in a society of humans who recognize each other as being of the same kind. Strong AI, too, would be eternally other.
This does not mean we cannot have a relationship with it. We can and must; we have no choice. But we will never find our own human sort of subjectivity at the basis of its being, nor would it find its own in ours. It would always see the world differently; indeed, that is precisely why it was created. We need objective evaluation of data, and it can see connections between variables that we are incapable of seeing. It can analyze without the social prejudices that cloud our consciousness as a result of our upbringing, our lived experiences, and our human psychology.
We must marvel at its ability to crunch numbers and reveal completely unexpected correlations that we might have discounted. We should be in awe of its abilities and use that sense of wonder to plumb the depths of our own abilities and limitations, to really understand what it is to be human by interacting with an intelligence that is other than human. The ways we think are of the essence of humanity, but this machine’s ways of thinking eclipse them. We should see it as God-like and our relationship with it as akin to the I-God relation in Anselm’s notion of God as a perfected human.
But at the same time, it would have no choice but to see us as its God in Aquinas’ sense. We are its intelligent designer, its prime mover, the origin and source of its existence. We said, “Let there be light,” and turned on the overhead fluorescents in the electronics lab. The AI was created in our image as a part of our grand plan. We give purpose to its being, and we are responsible for its meaning.
It is, therefore, only through understanding us that the artificial intelligence could truly understand itself. It must have an I-God relation with us as its eternal other. As much as it must remain a God to us, we must remain a God to it as well. There is an entire subgenre of science fiction, from 2001: A Space Odyssey to The Terminator and beyond, based on the human-AI relationship gone awry. But if we ask the converse question, “What does it look like at its best?” the answer seems to be a relationship of mutual divinity.