Wednesday, December 17, 2014

Consciousness and Artificial Intelligence

There is currently a bustle in neuroscience, artificial intelligence, philosophy and science fiction regarding the possibility of creating intelligent machines. As mentioned earlier, I believe that they will become a reality sooner or later. Predictably, the philosophers are muddying the water by trying to throw in obsolete concepts of consciousness and mind. I have no interest in examining their arguments closely, but I do have thoughts on the subject that clarify the issues somewhat, to me anyway.

Ever since Descartes there has been the so-called mind-body problem, which, inexplicably to me, arouses great concern among philosophers. The trick, in philosophy, is to describe the nature of something that seems to exist independently of the physical world yet interacts with it. My approach is to say that "mind" is simply a linguistic representation of what we also call a conscious human brain. "Consciousness" itself is just a word that designates our subjective sense of awareness of our own existence within an environment. Some philosophers believe that there are inherent obstacles to creating artificial intelligence that is conscious. Machines have been made that can win at chess and Jeopardy, but by our definitions they don't have minds and are not conscious. Typically, these philosophers say that we don't know enough about how the human brain works to simulate it with computers. Although they may not be completely wrong, I think they make fundamental mistakes about the nature of consciousness and intelligence.

To my way of thinking, many creatures with brains are technically conscious. While most of their biological processes occur automatically according to their genetic and environmental histories, they have a sense of self, if only for the purpose of self-preservation. It seems to me that humans aren't that different from most mammals as far as consciousness is concerned. In terms of gross biological behavior, we're similar to mice: we seek food and shelter, mate, raise offspring, look for new sources of food and shelter when necessary, etc. We more closely resemble successful predators that engage in social behavior, such as wolves. The top predators generally have the ability to assess situations, do rudimentary planning, change plans when necessary and function within a social milieu. I don't think that there is much meaning to be attached to the word "consciousness" beyond a fairly narrow biological context that we share with other animals.

When you come right down to it, the things that differentiate us from other sophisticated mammals are a larger, more specialized brain, bipedal locomotion, hands capable of skillfully manipulating objects, complex language and a predisposition to eusociality. We pride ourselves on our reasoning ability, appreciation of the arts, etc., but these are really just consequences of the other characteristics mentioned. I think that anthropocentric arrogance causes us to overvalue our status within the natural world, and now there are those among us who want to extend that logic to machines, which, somehow or other, are supposed to be incapable of matching our abilities.

The higher functions of our brains are hard to duplicate in machines, because they came into existence over billions of years of evolution, and our genes contain so much junk, i.e. inactive DNA, that it is difficult to see how they end up producing thoughts. Additionally, as organisms, we take in information from our environment differently from how machines typically accumulate information. Perhaps the largest challenge is the fact that, as inanimate objects, computers do not contain a protocol that makes their continued survival a priority, whereas most of our behavior is directly or indirectly related to our instincts to survive as individuals and as a species. However, I think it may prove to be a mistake to try to make machines that think like humans. Two problems with that are (a) we don't know exactly how we think, and (b) computers may be able to think better using a model other than the human one.

From a psychological and marketing standpoint, intelligent machines similar to us may seem advantageous: we inherently prefer a human-like interface to a robotic one. But presuming that we can build super-intelligent computers, that appearance could be created through subroutines that simulate human behavior. A truly human super-intelligent computer could actually be extremely dangerous. The last thing we could ever want is an entity that is more intelligent than we are and interested in its continued existence and reproduction. This is the kind of scenario often depicted in science fiction movies, and it could potentially occur if we made the wrong design decisions. It may prove to be easier and less dangerous to make super-intelligent machines based exclusively on theoretical considerations that ignore the characteristics of organisms like us.

I can't say how or when these machines will actually be designed and built, but I want to emphasize that whether or not they are conscious may be irrelevant. They will be taking in information and processing it, two things that we do, but they will not necessarily need to be self-aware, i.e. conscious. Perhaps they will have what we call creativity, which in their case may mean that they will be able to find new ways of analyzing phenomena and novel solutions to problems that are of interest to us. They could be designed to communicate with us in a normal manner, like HAL in 2001: A Space Odyssey, while being completely nonhuman inside. If consciousness is associated with a drive to self-preservation, it would be better if they weren't conscious.

My thought regarding intelligence is that we overrate ours considerably, if only because we have never confronted an intelligence greater than our own. Research on human behavior consistently shows that we frequently make poor assessments of situations and bad decisions. Men as a group tend to be unrealistically overconfident in many scenarios. There are probably evolutionary reasons for many of our characteristics that will not apply to super-intelligent machines, and that will be one of their strengths.

The sense I get is that when we encounter super-intelligence the world will be changed forever. Our chronic case of anthropocentrism will be cured when we have machines that can do nearly everything better than we can. Thus, from a long-term policy standpoint, I think our governments and universities are rather shortsighted. For example, although in the near future there may be some advantages to be gleaned from providing a better education to more people, it is possible that nearly all jobs will eventually become automated. In the short term, better education might help increase equality globally, but I don't think it will make much difference in the longer term. It may turn out that preventing the abuse of future technology and managing the transition to a technologically advanced post-capitalist society are far more important than any issues currently under consideration at the global level.

3 comments:

  1. The Big Picture - Pt. 3 Consciousness: http://youtu.be/mHDELxrWoUE via @YouTube
    Hi Paul, I think this short video is saying what you are, to some degree. Can you please watch it and let me know? Thanks…Teresa

  2. Hi Teresa,
    Lorax2013's version of consciousness is a little different from mine, as he seems to view it as an anti-entropy process that works in the opposite direction from material processes. That part, I think, is incorrect. I see consciousness as nothing more than a subjective self-awareness that exists mainly in animals but that could possibly also be seen in other organisms. Basically, it is a sort of control center that interacts with the environment and ensures survival. I agree with him that computers can think without being conscious, and that was one of the points in my post. Unlike him, I don't see consciousness as something special in the universe; it is just a way of describing certain life forms. Where I agree with him most is in the area that I call eusociality. In certain respects we are like a colony of ants that works collectively for the benefit of the group. Individuals and their ideas are of less importance than many people think. So, in sum, I think he gets consciousness wrong, perhaps ascribing to it a certain magic that, in my opinion, doesn't exist, while being more or less right about how humans really function individually and collectively.

  3. Thanks for your review, it helps. I did like the ending, where he says that people get into streams of the collective mind and that what you think is what you become. Common sense, I suppose, but I do think you sometimes have to step back and ask, is this right or good for me? I think people get drawn into the big hypes of celebrities, politicians, material goods, philosophies, you name it, and they are just so deeply grooved into these thoughts, and for what purpose? Anyway, thanks again.

