Thursday, March 25, 2010

Revisiting the robots




In my last post I outlined the arguments put forward by some artificial intelligence theorists who maintain that a reasonably human-like artificial intelligence is about to emerge. I suggested that this is unlikely to happen in the foreseeable future. In this post I am going to explain a little more about why I think it’s not going to happen anytime soon.
AI proponents’ understandings of both human and artificial intelligence tend to be based on the reductionist presuppositions of mechanistic science. I believe that at least some aspects of consciousness do not merely emerge from the physical workings of the brain, but operate through it. In particular, the human capacity for integrated intelligence (the psychic realm, to put it simplistically) most likely operates on non-local principles, transcends linear space-time, probably operates with field properties, and is capable of accessing dimensions of the Kosmos that are not even on the map of modern science. Therefore the mind, at least in part, does not operate in the same space-time as machines do.
There is a simple logical inference made by futurist James Martin upon which his entire thesis rests. He maintains that computers will be more intelligent than human beings because their circuitry is “millions of times faster than the neurons and axons of the brain.” Yet his logic is flawed.
Firstly, intelligence researchers have already shown that there is only an imperfect correlation between processing speed and human intelligence. People who process stimuli the fastest don’t necessarily score the highest on IQ tests.
If processing speed were what intelligence is all about, cats would be the smartest things on the planet. Ever seen a cat react? No, the highest expressions of human intelligence are exhibited in human beings of exceptional creativity, insight and motivation. And as David Shenk points out in The Genius in All of Us, they are typically highly motivated and extremely hard-working. Genius is big-picture thinking, and often operates through flashes of inspiration and insight honed by years of relentless, passionate dedication to a subject matter.
Secondly, and most crucially, I believe that the localised model of a machine-brain existing in linear space-time is incorrect. If mind is nonlocal, contains field properties, and cannot be reduced to neural computation, Martin’s and the AI theorists’ thesis disintegrates. How will we create machines that have transpersonal connectivity to an “intelligent” cosmos?
Ray Kurzweil has a quite specific plan for how to develop mechanical “human-level intelligence”. He writes that we have to:
“…reverse engineer the parallel, chaotic, self-organizing, and fractal methods used in the human brain and apply these methods to modern computational hardware” (Kurzweil 2005, p. 439).
Kurzweil estimates that via this approach, by the year 2025 we will have detailed models and simulations of the processing organs of the brain (Kurzweil 2005, p. 439).
Intelligent design theorist William Dembski criticises AI theorists like Ray Kurzweil for believing that we can create consciousness from the motions of matter and energy. He writes that materialism is predictable, but “reality” is not (quoted in Kurzweil 2005, p. 431). However, Kurzweil rightly points out that quantum systems are probably inherently unpredictable; and even if they are not, the behavior of complex systems is effectively unpredictable “in practice” (Kurzweil 2005, p. 431). Therefore, he argues, it is not unfeasible that consciousness could emerge from unpredictable systems.
Dembski argues that an AI which is built up from the material substrate of the world would be “hollow”. Kurzweil responds:
All of the trends show that we are clearly headed for nonbiological systems that are as complex as their biological counterparts. Such future systems will be no more “hollow” than humans and in many respects will be based on the reverse engineering of human intelligence. We don’t need to go beyond the capabilities of patterns of matter and energy to account for the capabilities of human intelligence (Kurzweil 2005, p. 431).
The argument is circular.
Proponent: Brains are really just biological computers.
Critic: No they aren’t.
Proponent: Yes they are, because they exhibit the following properties which are “mechanical.”

As Kurzweil states, “all the trends” show that we are headed for non-biological intelligence – except, of course, for all the evidence and arguments which indicate that AI is very different from human intelligence, which we can explain away. Again the assumption is that the brain equals the mind, and operates on mechanistic principles. In short, the brain/mind is a computer.
A further question, of course, is whether complexity is sufficient to create consciousness. There are three logical possibilities here. The first is that complexity alone does not produce consciousness. The second is that it produces a human-like consciousness. The third is that it produces a non-human-like intelligence. Various combinations of the second and third are theoretically possible, including replicants (see the movie Blade Runner) and non-human-like AI living alongside humans.
Kurzweil believes that “there are no barriers to our discovering the brain’s principles of operation and successfully modeling and simulating them, from its molecular interactions upward” (Kurzweil 2005, p. 445). But are Kurzweil and other AI optimists overlooking some properties of consciousness? Kurzweil expands his case:
It is the brain’s relatively fixed architecture that is severely limited. Although the brain is able to create new connections and neurotransmitter patterns, it is restricted to chemical signaling more than one million times slower than electronics, to the limited number of interneuronal connections that can fit inside our skulls, and to having no ability to be upgraded, other than through the merger with nonbiological intelligence… (Kurzweil 2005, p. 445).
I believe that Kurzweil is incorrect on a number of fronts here.
As stated, the mind is not located only inside the skull. The extended mind and integrated intelligence not only operate on holistic properties within the brain, but extend beyond it. Further, much of my own experience leads me to conclude that the mind also extends into other dimensions of consciousness, and survives the death of the physical body. These other dimensions are either non-physical, or are ‘physical’ in a way that is quite unlike the human physical form. This claim has enormous implications for human futures. (I cannot expand on this here, but more on that in later posts.)
The question of ‘upgrading’ also needs to be qualified. In a certain sense, there are brain upgrades, for there are changes in consciousness fields as an individual ascends levels of consciousness. These have been well mapped by mystics over many centuries. Ken Wilber and David R. Hawkins (see Power vs. Force) are two such researchers. Hawkins’s model of consciousness evolution is based on personal experience, and correlates well with what has been reported by mystics and first-person consciousness investigators for millennia. Despite the problems with these claims, the data cannot be ignored.
Returning to our robot psychiatrist: given that it will not have the properties of human consciousness in the foreseeable future, I suspect that not too many people will be able to empathise with it at a deep level. The intimate mind-to-mind connection that underpins deep empathy will be absent. If human minds are non-locally connected, and this is a key part of empathy, then genuine empathy may occur only between sentient human beings.
Marcus
