The Circuits of Our Consciousness

On October 25th, at the Future Investment Initiative tech summit, a robot named Sophia announced that she had been granted citizenship by the Saudi Arabian government. Sophia is a robot with artificial intelligence geared mainly towards speech, and a “body” capable of mimicking a wide variety of human emotions. This development poses numerous questions: To what degree can we call an artificially created being human? Do such beings deserve human rights? What place do they hold in our society? Should we even be trying to create machines with human attributes? Some of these questions come down to practicality. For instance, regardless of whether we should be making machines in our own image, it is unlikely that an article written against the practice will have any effect. It would be prudent, however, to examine the status of these new beings with an eye to the findings of the philosophy of consciousness.

Considering how important consciousness is to how we live our lives, especially in relation to how we treat one another, it is remarkable how poorly consciousness is understood. There are currently no widely accepted accounts of how subjective experience emerges from, or interacts with, deterministic matter. Philosophers of mind who put forth such theories often fall prey to a variety of fallacies that go beyond the scope of this article; suffice it to say that even though some theories do exist, there is little consensus. That said, many scholars contend that there is some necessary connection between the two, but how or where it appears remains a mystery.

At the same time, we do seem to have nagging intuitions about when we are dealing with a conscious object and when we are not. Further, we often have strong convictions about the level of consciousness of a given object, and about how we should interact with it as a result. For instance, we have no moral qualms about shutting off our computer at the end of the day, or leaving it alone for days at a time, yet treating a living being in this manner is unthinkable. Our understanding of levels of consciousness shapes how we behave: cutting down a tree to build a log cabin seems to require less remorse than killing an animal for food. Nor do we grant human rights to just anything with consciousness: other animals, and arguably even plants, may possess some form of it, yet even the most fanatical PETA representatives still tend to value humans over cockroaches.

To be clear, Sophia is very far from emulating anything that we should call human, or even conscious. When it comes down to it, she is little more than a highly sophisticated chat-bot. Anyone claiming that she has a subjective, human-like experience would have to provide an argument for how consciousness could emerge from pure algorithmic inference. Everything she produces is arguably put there by a human: even though she has a limited ability to learn and respond dynamically, she has no identifiable will of her own. The learning, the varied responses, and the facial expressions were all put there by her human creators.

Which brings us to another reason we tend to value human life: its uniqueness. At present, there is no way to make a copy of a person. While it has been theorized that it might one day be possible to “digitize” the contents of a brain, this remains well beyond reach. Each human being exists as a single, unrepeatable instance, whereas Sophia, even though she learns in a certain sense, can always be copied and reproduced.

On the other hand, the fact that we seem to grant some form of consciousness to such diverse life forms makes the problem more complex. Any theory matching brain states to subjective experiences has to contend with radical biological differences among life forms: the octopus, for instance, has most of its neurons distributed through ganglia in its arms rather than concentrated in a central brain. The point here is that since we do not know what exactly gives rise to consciousness, it would be rash to assume that, just because computers do not function the way our brains do, they cannot be conscious. We do not have a good argument for why consciousness cannot emerge from mechanistic processes, even though assuming that it can might lead us into similarly dubious positions such as panpsychism (the theory that consciousness exists to some degree at all levels of being) or dualism (the theory that consciousness is an ontological quality that pairs with material existence).

As it stands, we do not have a reliable way to determine whether a given object is conscious, and deciding whether to treat an artificial intelligence like a living being seems to hang on this distinction. None of the arguments from analogy carry enough weight to compel us to exclude artificial intelligence from the possibility of consciousness, though it is fairly certain that Sophia lacks it. We may eventually have to create new categories for advanced artificial intelligence, categories that will not resemble the ones we currently use for living or non-living beings. For now, it is perhaps best to err on the side of caution, as the Saudis have: it would be less harmful to treat a possibly non-conscious being as conscious than to cause real suffering to a conscious being because we have not yet sorted out the philosophical problem of consciousness.
