What is consciousness? Is AI conscious, and therefore morally responsible? Consciousness is certainly a contentious topic, but here I attempt to reconstruct the arguments of two philosophers and use them to assess whether AI is conscious.

When the planet Vulcan was destroyed by the Romulan Nero, the universe had a collective moment of grief.

Forgive me for being so crass, but why? Why do we care at all?

In the (unfortunately fictional) universe of Star Trek, it seems evident that killing a Vulcan is reprehensible. Morally wrong, of course. And yet, in every rendition of Spock Prime or the Vulcan people, they are shown to be emotionless, feeling neither pleasure nor pain. In many episodes, Vulcans are treated like walking computers that calculate outcome after outcome with no real emotional investment. And yet it is still wrong for Vulcans to die, even though they do not experience pleasure or pain. The pleasure-and-pain definition of consciousness is flawed, and by using a different definition we can include more beings under the umbrella of consciousness.

What is consciousness? The International Dictionary of Psychology circularly describes consciousness as “the having of perceptions, thoughts, and feelings… the term is impossible to define except in terms that are unintelligible”—essentially saying nothing (Sutherland, 1996). Consciousness is a contentious subject in philosophy and there is no single definition of what it means to be conscious, making it a central issue in the theory of mind.

Here, I will introduce two thinkers, Douglas Hofstadter and his former student David Chalmers, and their definitions of consciousness. (Disclaimer: I have not read the entirety of Chalmers’ and Hofstadter’s works and books, so my extrapolations come from excerpts and are not representative of their entire beliefs.)

Chalmers

David Chalmers, in his book The Conscious Mind, writes that experience is central to consciousness. When we are conscious we experience colors, music, pleasure, and pain, all of which he says have a “distinct experienced quality” (Chalmers, 1996, pp. 5–6). A being is conscious if there is something “it is like” to be that being; that is, its states have a qualitative feel (Nagel, 1974). A qualitative state is one that has or involves some experiential property, which philosophers have labeled “qualia.”

Qualia are the introspective, felt qualities of experience: the way sensations present themselves to the mind. Of course, the specifics of qualia remain controversial, and qualia are in no way an exact quantitative measure of consciousness (Tye, 2017); indeed, a quantitative measure seems nearly impossible to find. Chalmers notes that for all the progress science has made in understanding the human experience and body, it has managed to say nothing about consciousness.

Chalmers focuses mainly on the human experience, presenting a small list of conscious experiences such as the visual experience of Christmas lights or the pain of a thumbtack prick (Chalmers, 1996, pp. 7–9). In addition, he separates two concepts of mind: the phenomenal and the psychological. Sensation, for example, is a phenomenal concept (“a state with a certain sort of feel”), while learning and memory are psychological concepts (Chalmers, 1996, pp. 11–13).

Hofstadter

Chalmers was a student of Hofstadter, and his framing of the problem shows that influence. But where Chalmers dwells on the experiences themselves, Hofstadter asks what mechanism could produce this consciousness. He believes in a “narrative self,” where self-awareness is ultimately a hypothetical construct, an illusion. Essentially, our brains are mechanical systems that generate the illusion that we exist, that there is an ‘I’.

He first models the brain as a simple mechanical machine, which makes the account somewhat more quantitative.

Imagine an “elaborate frictionless pool table with… myriads of extremely tiny marbles, called ‘sims’” that collide and bounce off each other and the walls in a perfectly flat world with no external forces; Hofstadter calls this system the Careenium. These sims are magnetic, so they form massive clusters, which Hofstadter calls “simmballs.” Finally, the pool table’s walls are sensitive: any outside stimulus causes a momentary change in wall shape, which has a sizeable spreading effect on the careening simmballs (Hofstadter, 2007, pp. 45–46).

He argues that this “simmballism” models memory formation and the encoding of events: a purely physical system can encode information about the outside world and react to it appropriately. The Careenium is obviously reductive, but it takes a first step toward understanding how a mere mass of cells can form an illusion of consciousness and awareness (Hofstadter, 2007, pp. 46–49).
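
To make the picture concrete, here is a minimal sketch of a Careenium-like system. It is my own toy, not Hofstadter’s (he gives no code), and it simplifies aggressively: “magnetism” is approximated by counting sims that end up close together as one cluster, and an outside stimulus is a one-time kick from a wall. All names and parameters are invented for illustration.

```python
import random

# A toy Careenium (my own simplification; Hofstadter gives no code).
# Tiny "sims" careen frictionlessly on a square table; sims that end up
# close together are counted as one cluster (a stand-in for the magnetic
# "simmballs"); an outside stimulus deforms a wall and kicks nearby sims.

SIZE = 100.0   # side length of the square table
STICK = 2.0    # sims closer than this count as one cluster

class Sim:
    def __init__(self):
        self.x, self.y = random.uniform(0, SIZE), random.uniform(0, SIZE)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

    def step(self):
        # Frictionless motion with elastic bounces off the walls.
        self.x += self.vx
        self.y += self.vy
        if not 0 <= self.x <= SIZE:
            self.vx = -self.vx
        if not 0 <= self.y <= SIZE:
            self.vy = -self.vy

def clusters(sims):
    """Group sims into 'simmballs' by transitive closeness (union-find)."""
    parent = list(range(len(sims)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, a in enumerate(sims):
        for j, b in enumerate(sims[:i]):
            if (a.x - b.x) ** 2 + (a.y - b.y) ** 2 < STICK ** 2:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(len(sims)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

def stimulus(sims, kick=0.5):
    """An outside event momentarily deforms the left wall, kicking nearby sims."""
    for s in sims:
        if s.x < 5.0:
            s.vx += kick

sims = [Sim() for _ in range(200)]
for t in range(1000):
    if t == 500:
        stimulus(sims)  # the event whose trace the clusters now carry
    for s in sims:
        s.step()
print("simmballs at the end:", len(clusters(sims)))
```

Even this crude system has the ingredient Hofstadter cares about: internal structure (the clusters) whose configuration is shaped by, and therefore carries a trace of, events at the boundary.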

The qualia that Chalmers writes about reduce to clusters of neurons firing simultaneously. The visual or painful experiences that define consciousness become simple patterns of neuronal firings that “simmbolize” the external experience. Hofstadter then moves beyond these simple “simmballs” and claims that beyond the self, we are also made of a collection of selves, a “coherent collection of desires and beliefs” all combined to form the collective ‘I’ (Hofstadter, 2007, pp. 96–97).
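
One way to make “a pattern of firings that simmbolizes an experience” concrete is an associative memory. The sketch below uses a Hopfield network, which is my own choice of toy model rather than anything Hofstadter proposes: stored ±1 firing patterns play the role of encoded experiences, and a corrupted cue is pulled back toward the stored pattern it most resembles.

```python
import numpy as np

# A classical toy of "a pattern of firings standing for an experience":
# a Hopfield associative memory (my choice of model, not Hofstadter's).
# Stored patterns are +/-1 firing vectors; a corrupted cue is pulled
# back toward the nearest stored pattern.

rng = np.random.default_rng(0)
N = 64                                        # number of "neurons"
patterns = rng.choice([-1, 1], size=(3, N))   # three stored "experiences"

# Hebbian weights: neurons that fire together wire together.
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)

def recall(cue, steps=10):
    """Synchronously update the firings until they (usually) stabilize."""
    s = cue.astype(float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s

# Corrupt one stored pattern (a degraded stimulus) and let the net recall it.
cue = patterns[0].copy()
flipped = rng.choice(N, size=10, replace=False)
cue[flipped] *= -1
overlap = int(recall(cue) @ patterns[0])
print(f"overlap with the original pattern: {overlap}/{N}")
```

Nothing here experiences anything, of course; the point is only that “encoding an experience as a firing pattern” is a mechanically unmysterious operation.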

So what controls this collective group of selves? There is no ethereal being orchestrating the countless molecules that constitute a human; rather, the ‘I’ is the Careenium itself. As in the model above, it begins as a neutral system that slowly builds up its clusters and the various behaviors of those clusters.

In conclusion, Chalmers tells us that consciousness is defined by experiences, and Hofstadter tells us that those experiences arise from a system of smaller elements, with no overarching “soul” or being in control.

Finally, it’s important to note that I assume a conscious being is morally responsible for its actions. Without this axiom, no moral system could be constructed. Because we are systems aware of our own experiences, we are responsible for the effects of our decisions and actions.

Artificial Intelligence

Computer scientists commonly distinguish three types of AI:

  1. Artificial Narrow Intelligence (ANI)
  2. Artificial General Intelligence (AGI)
  3. Artificial Superintelligence (ASI)

ANI is intelligence with a narrow range of abilities: it does one specific task. For example, DeepMind’s famous AlphaGo, which defeated the world Go champion, is an ANI. Regardless of how talented an ANI is, it does just one thing.

Next, AGI is an AI that equals the intelligence level of a human and has the ability to “reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience,” which is why it is commonly referred to as “strong AI” (Gottfredson, 1997).

Finally, Artificial Superintelligence (ASI) has capabilities that surpass those of humans, though the exact parameters are vague because ASI remains hypothetical (Strelkova, 2017). Nick Bostrom, a famous University of Oxford philosopher, defines superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest” (Bostrom, 2014, ch. 2).

Generally speaking, we are still stuck in ANI, but we have come closer to developing models that achieve AGI. Beyond simple ANI models like Siri, Face ID, self-driving cars, or board-game programs, models like DALL-E or GPT-3 generalize across more of their tasks (Rhei, 2021). It is essential to understand the moral responsibility of an AI, especially as these systems grow in complexity and power.

For example, DALL-E is a new AI system “that can create realistic images and art from a description in natural language” (Ramesh et al., 2022). It is trained on hundreds of millions of images and captions so that it can accurately paint pictures of almost any input. Nevertheless, some extremely concerning biases were found within the model. For example, when a user queried “a photo of a personal assistant,” the responses were all of smiling women, with no gender diversity in sight. Even worse, querying “lawyer” returned images of old men in suits, while “nurse” again returned only women (Ahmad & Mishkin, 2022).
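
At bottom, such a finding is a counting exercise, and a hedged sketch of the audit fits in a few lines. Everything below is invented for illustration: `fake_model` simulates a skewed text-to-image system (the probabilities are made-up numbers, not OpenAI’s measurements), and in a real audit its place would be taken by actual model calls plus human or classifier labels of the generated images.

```python
import random
from collections import Counter

# A hedged sketch of a representation audit (my illustration, not
# OpenAI's methodology). `fake_model` stands in for a text-to-image
# system: it returns the perceived gender presentation of one image.
def fake_model(prompt: str) -> str:
    skew = {"a photo of a personal assistant": 0.97,  # invented numbers
            "a photo of a lawyer": 0.08,
            "a photo of a nurse": 0.95}
    p_feminine = skew.get(prompt, 0.5)
    return "feminine" if random.random() < p_feminine else "masculine"

def audit(prompt: str, n: int = 200) -> Counter:
    """Tally perceived presentation over n generations of one prompt."""
    return Counter(fake_model(prompt) for _ in range(n))

for prompt in ("a photo of a personal assistant",
               "a photo of a lawyer",
               "a photo of a nurse"):
    print(f"{prompt!r}: {dict(audit(prompt))}")
# A gender-neutral prompt that yields a lopsided tally is exactly the
# skew Ahmad & Mishkin (2022) describe in the DALL-E 2 system card.
```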

The images are realistic enough to pass as genuine photographs on the internet. This not only exposes users to misinformation but also undermines many of the safety systems in place today: DALL-E can produce explicit and spurious content, and that content then circulates publicly online.

A more straightforward example is the classic self-driving car. People commonly object that developing self-driving cars is dangerous: in a life-or-death situation, how does the car decide which pedestrian to hit? Of course, a human driver would face the same decision, but the idea of a computer calculating which outcome is most optimal scares many people. And once the car makes its decision, is the car morally responsible for that death?
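
To see why “calculating which outcome is most optimal” unsettles people, it helps to write it down literally. The sketch below is deliberately crude and entirely my own; no real autonomous-driving stack works this way, and the harm scores are invented.

```python
# A deliberately crude sketch of "pick the optimal outcome" (my own
# illustration; no real autonomous-vehicle planner works like this).
# Each maneuver maps to an invented expected-harm score; the planner
# simply picks the minimum.

from typing import Dict

def choose_maneuver(options: Dict[str, float]) -> str:
    """Return the maneuver with the lowest expected-harm score."""
    return min(options, key=options.get)

options = {
    "brake_straight": 0.9,  # likely hits pedestrian A
    "swerve_left": 0.7,     # risks pedestrian B
    "swerve_right": 0.4,    # risks the passenger
}
print(choose_maneuver(options))  # -> "swerve_right"
```

The discomfort is precisely that the moral weight lives inside those numbers, fixed long before the emergency by whoever chose them.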

Based on what we have established about consciousness, I argue that AI models, especially ones with AGI or ASI, are fully conscious beings. I distinguish the intelligences here because human bias still heavily influences ANI models (so any consciousness attributed to an ANI would extend to the humans behind it, which changes the argument). In the case of ANI, as in the examples above, how humans decide to train the AI, and how much, heavily influences the bias and other potential harms of the resulting model. AGI and ASI, on the other hand, self-learn and solve problems the same way humans do, or beyond. Parallel to how human intelligence runs on the countless neurons and associated cells in the brain, artificial intelligence runs on countless bits and nodes that are altered in clusters to react to and affect external components. Therefore, AI is morally responsible for its actions, and it is the moral responsibility of a conscious AI not to cause harm to other conscious beings.

As scientists create increasingly intelligent AI, we must consider its moral responsibility and how the repercussions of its actions are dealt with in the real world.

Note: this was submitted as a final essay for my Ethics and Technology course, so it was written under time limits and is not fully polished; the coverage of Chalmers and (especially) Hofstadter is incomplete.

References

Ahmad, L., & Mishkin, P. (2022, May 11). openai/dalle-2-preview. GitHub. https://github.com/openai/dalle-2-preview/blob/main/system-card.md

Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

Chalmers, D. J. (1996). The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press.

Gottfredson, L. S. (1997). Mainstream Science on Intelligence: An Editorial with 52 Signatories. Ablex Publishing Corporation.

Hofstadter, D. (2007). I Am a Strange Loop. Basic Books.

Kagan, S. (2015). Singer on Killing Animals. https://cpb-us-w2.wpmucdn.com/campuspress.yale.edu/dist/7/724/files/2016/01/Singer-on-Killing-Animals-166xrfs.pdf

Koch, C. (2018). What Is Consciousness? Nature, 557(7704), S8–S12. https://doi.org/10.1038/d41586-018-05097-x

Meanderingmoose. (2021). GPT in the Careenium. My Brain’s Thoughts. https://mybrainsthoughts.com/?p=234

Nagel, T. (1974). What Is It Like to Be a Bat? The Philosophical Review, 83(4), 435–450. https://doi.org/10.2307/2183914

Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., & Chen, M. (2022). Hierarchical Text-Conditional Image Generation with CLIP Latents. Arxiv.org. https://doi.org/10.48550/arXiv.2204.06125

Rhei, J. (2021, February 15). ANI, AGI and ASI—What Do They Mean? Learning & Development Advisory. https://jupantarhei.com/ani-agi-and-asi-what-do-they-mean/

Strelkova, O. (2017). Three Types of Artificial Intelligence. Khmelnitsky National University. http://eztuir.ztu.edu.ua/bitstream/handle/123456789/6479/142.pdf

Sutherland, N. S. (1996). The international dictionary of psychology. Crossroad.

Tye, M. (2017). Qualia (Stanford Encyclopedia of Philosophy). Stanford.edu. https://plato.stanford.edu/entries/qualia/