Androids, it seems, have appearance in the bag. But is their intelligence only skin-deep? Peter Spinks finds sharp divisions over the likely future of robotic intellect.

Telling Hiroshi Ishiguro apart from his robotic replica can be tough. The professor's silicon brainchild, cast from plastic and plaster moulds of his face and body and implanted with some of his sweeping black locks, is as realistic as a Madame Tussaud's waxworks model - down to the frown on his broad forehead.

The android, Geminoid HI-1, mimics its master's posture, lip movements and facial gestures - even his fidgety fingers and feet. It blinks, seems to breathe and sounds, for all the world, like its human counterpart. The remote-controlled Geminoid stands in when the busy professor doesn't have time to lecture, and presents a compelling scripted talk. But when a member of the audience asks a searching question, Geminoid smiles blandly at the inquirer and says nothing. And when people get up to leave, the android sits glued to its chair: its legs, designed only for micro-movements, can't stand or walk.

Androids - robots that resemble humans - are increasingly popular exhibits at robotics conferences and trade shows worldwide. Unlike humanoids, which have two arms and two legs but look more like machines than people, androids are appealing because they seem so much like us. Enticed by research suggesting that people relate better to robots the more they resemble humans, roboticists are developing androids that might one day assist in aged care and eventually supersede servile robotic home helps, such as automated floor-crawling vacuum cleaners.

"The appearance of androids is important and we cannot ignore its effect in communication," says Professor Ishiguro, a pioneer in android science, which combines cutting-edge research in robotics with slow but sure advances in cognitive psychology. "My purpose is to understand humans by building androids... The practical use of androids is a kind of by-product."

Until recently, roboticists had not taken androids seriously, regarding them largely as electronic puppets. Most of the technical effort went into humanoids, which might vacuum the house or mow the lawn, and into service or assembly-line robots, which perform specific tasks very well, says Ray Jarvis, director of the Intelligent Robotics Research Centre at Monash University in Melbourne. "Androids like Geminoid and Repliee have been designed the other way around - they look the piece, but are much more limited in what they can do."

Androids cannot, for example, make a cup of tea, do the housework or explore their environment, says Alan Winfield, Professor of Electronic Engineering at the University of the West of England, in Bristol. "They are not autonomous - relying on a power feed or batteries. They can't repair themselves or reproduce. They are not conscious or sentient. They can't reason about themselves or the world. I could go on - in short, current androids can't do very much at all and fall far, far short of the androids we see in the movies."

This sober but realistic view is widely held. Roboticist Phillip McKerrow, from Wollongong University's School of Computer Science and Software Engineering, sums up present android capabilities thus: "They are basically expensive toys with limited dexterity and minimal intelligence."
Put another way, current androids are "a bit like a copy of Microsoft Word on wheels," says Brett Feldon, the chief technology officer at Sydney-based VeCommerce, which provides voice self-service and speaker verification systems.

That androids still reflect a triumph of form over substance is illustrated by another Ishiguro creation, a female android dubbed Repliee Q2, which shares many of Geminoid's drawbacks. Developed by Osaka University's Department of Adaptive Machine Systems in conjunction with a Tokyo-based animatronics maker, Kokoro, Repliee was originally modelled on the professor's daughter when she was four years old. The android has undergone several upgrades and now bears a striking resemblance to a rather fetching young Japanese woman.

Its movements - orchestrated by 42 air-powered actuators, including 13 in its head - are almost as free and smooth as a human's. It appears to breathe and can flutter its eyelids. Omnidirectional cameras recognise gestures, such as a raised hand, and track human facial movements. Delicate physiological sensors embedded in Repliee's pliable silicone skin detect touch and other sensations. Microphones, linked to a voice-recognition system, allow it to hear and respond to human speech.

But herein lies the rub: while the voice is realistic enough, Repliee's speech is little more convincing than that of a voice-activated telebanker. Like Geminoid, Repliee can reply with only a dozen or so words and, as with other voice-recognition devices, performs poorly in noisy places.

The inability of androids to communicate intelligently with humans is their biggest bugbear. They may speak in limited ways on specific topics, but they cannot converse widely, and certainly not on abstract subjects such as philosophy. In short, they are a long way from passing the legendary Turing test, described in 1950 by the British mathematician Alan Turing. It requires a human judge to converse separately with a person and a computer - without knowing which is which. If the judge cannot tell the two apart, the machine passes the test.

"Today's robots wouldn't pass the Turing test because they communicate by matching patterns of speech," explains Professor Robert Dale, director of Macquarie University's Centre for Language Technology, in Sydney. All is scripted - there is no reasoning going on, and certainly no understanding, he says.

This much was evident at a recent international robotics conference, ICRA 07, in Rome, where mobile robots mingled with delegates and reacted in simple ways - but showed no understanding. "They were programmed to do something in response to a sensor input," says Associate Professor McKerrow, who attended the conference. "One would say 'ouch' when someone kicked its bumper. Another would say a sentence when it detected a face... I saw a four-year-old boy trying to talk to this robot and getting very frustrated. He was saying: 'Remember me, you talked to me a while ago.'"
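What Professor Dale calls "matching patterns of speech" can be made concrete with a toy sketch. The script below is invented for illustration - it is not the software of Geminoid, Repliee or any robot mentioned here - but it captures the stimulus-response style of today's conversational machines: surface patterns mapped to canned replies, with no memory and no model of meaning.

    # Illustrative toy only: scripted pattern-matching "conversation" with no
    # memory, reasoning or understanding - just surface patterns and canned replies.
    import re

    # Hypothetical script: each entry pairs a pattern with a fixed response.
    SCRIPT = [
        (re.compile(r"\b(hello|hi)\b", re.I), "Hello. It is nice to meet you."),
        (re.compile(r"\byour name\b", re.I), "My name is Android."),
        (re.compile(r"\bweather\b", re.I), "I am told it is sunny today."),
    ]
    FALLBACK = "I am sorry, I did not understand."

    def reply(utterance: str) -> str:
        """Return the first scripted reply whose pattern matches the input."""
        for pattern, canned_reply in SCRIPT:
            if pattern.search(utterance):
                return canned_reply
        return FALLBACK

    print(reply("Hi there!"))                       # a pattern matches: canned reply
    print(reply("Remember me? We spoke earlier."))  # nothing matches: fallback, no memory

Because every reply is keyed to a surface pattern in the latest utterance, a system like this has no way to answer the boy's "Remember me" - which is exactly why such machines fall so far short of the Turing test.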
Despite androids' limited communication skills, their physical make-up and prowess - from skin to limbs, muscles to motion - are progressing in leaps and bounds. At present, android skin, like Repliee's, comprises silicone, generally embedded with touch sensors. Limbs are often made of aluminium; in future, they will more than likely consist of carbon fibre, in the way of squash and tennis racquets, suggests Professor Jarvis.

Artificial muscles, another key area of research, presently tend to consist of dozens, sometimes hundreds, of electric motors with torque feedback, enabling their joints to seem compliant. Some robots are connected to compressors that power "air muscles" - rubber tubes that contract when high-pressure air is blown into them. When the air is released, the tube relaxes and lengthens again.

Trials are under way into novel alternative materials, including Nitinol wire, a strong but lightweight shape-memory alloy of nickel and titanium, and electro-elastic polymers, which stretch and contract. (Auckland University successfully demonstrated the latter at last year's Australasian Robotics Conference.)

Particularly encouraging is research into artificial muscles comprising sheets of carbon nanotubes - big cylindrical molecules of pure carbon with unusual electrical and mechanical properties. When a voltage is applied gradually, ions inside the carbon move to one side, bending the nanotube sheets. The speed and extent to which the sheets bend depend on how fast and by how much the power is increased. When the power is switched off, the sheets unbend and return to their original shape. This allows the artificial muscles to work like humans' opposing muscle groups.

Some of these materials might be used for android fingers in the future, predicts Professor Jarvis, who favours Nitinol, which bends or stretches when heated by an electric current. MIT researchers in the United States, meanwhile, are developing skin that can sense - and stop - something slipping through an android's fingers.

The processors and operating systems powering androids vary widely. Some rely on conventional, personal computer-style Pentium processors of one gigahertz and upwards, although a few state-of-the-art models have customised processors of the kind found in calculators and specialised appliances. Most run on real-time operating systems such as RT-Linux. A few androids communicate wirelessly through Bluetooth with a central computer.

Many work on mains power, but a few are autonomous. Britain's Bristol Robotics Laboratory, for example, runs some of its prototype robots on microbial fuel cells, which convert chemical energy into electricity. Although the devices are not androids, and move very slowly, they extract power from biomass - such as rotting fruit and other foods. Not to be outdone by the British, Austria's Humanoid Robotic Laboratory in Linz has built a "Barbot", which buys beer at the bar and drinks it. The plan is to convert the beer into power to run the robot.

How might tomorrow's androids function? In years to come, roboticists expect that advanced systems - incorporating neural networks, genetic algorithms and fuzzy logic - will run on an assortment of very small, very fast processors, each performing a specific task in parallel and communicating over lightning-fast networks. One or more master processors will probably synchronise and co-ordinate the maze of brain and body functions.

Mr Feldon submits that android brains might one day rely on quantum computing power. "There are proposals for quantum computing based on silicon, as well as other physical systems - for example trapped ions, superconductors or molecules," says theoretical physicist Dr Ron Burman, of the University of Western Australia in Perth.

Professor McKerrow expects that what a robot will be built to do "will be determined by three factors: technological capability, task requirements and cost".
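As a rough sketch of that division of labour - many small modules each doing one job in parallel, with a master process co-ordinating them - the fragment below uses ordinary Python threads and a queue as stand-ins for dedicated processors and the fast network linking them. The module names and timings are invented for illustration; no real android is claimed to work this way.

    # Hypothetical sketch: parallel task modules reporting to a master co-ordinator.
    # Threads stand in for small dedicated processors; a queue stands in for the
    # network that links them to the master.
    import queue
    import threading
    import time

    status_queue: queue.Queue = queue.Queue()

    def task_module(name: str, period: float) -> None:
        """Simulate one specialised module (vision, speech, balance) reporting in."""
        for _ in range(3):
            time.sleep(period)                  # pretend to do sensor/actuator work
            status_queue.put((name, "update"))  # report to the master co-ordinator

    modules = [
        threading.Thread(target=task_module, args=("vision", 0.10)),
        threading.Thread(target=task_module, args=("speech", 0.15)),
        threading.Thread(target=task_module, args=("balance", 0.05)),
    ]
    for m in modules:
        m.start()

    # The "master processor": merge updates and decide what the robot does next.
    for _ in range(len(modules) * 3):
        module_name, event = status_queue.get()
        print(f"master: co-ordinating after {event} from {module_name}")

    for m in modules:
        m.join()

A real controller would run such modules under a real-time operating system like the RT-Linux mentioned above, with hard timing guarantees that a sketch like this cannot offer.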
Aside from such practical considerations, might androids, in the dim and distant future, become inseparable from human beings? Are conscious androids feasible? The December 2006 issue of the journal Cognitive Processing reported that French researchers, led by Dr Alain Cardon, had proposed to develop software producing a level of artificial consciousness. Is this pie in the sky? Or could silicon some day supplant carbon-based life forms, such as us?

In part, this depends on whether one takes the view of the American philosopher Daniel Dennett, who believes that humans are immensely complex and sophisticated computational machines. For him, brute-force computing power might eventually mimic the human mind.

"Dennett says some eminently sensible things about consciousness and some silly things," says internationally renowned physicist Paul Davies, now based at Arizona State University. "Regarding consciousness as nothing but a vast amount of digital information processing seems to me to be a completely unjustified claim."

Professor Davies feels that "the fixation with the digital computer as a model of human consciousness", as he puts it, "is holding up our understanding of the subject". The comparison, he says, "is akin to studying cartoon characters to obtain insights into biology. As the philosopher John Searle asks of consciousness: 'How does the brain do it?' The truth is, we haven't a clue. But however it does it, I am sure it involves more than merely flipping lots of informational bits very rapidly, which is what proponents of the digital computer route to artificial intelligence suggest."

Mr Feldon agrees: "In my view, we're as far away from artificial intelligence now as we thought we were in the '60s."

Professor Catherine Pelachaud, a computer scientist at France's University of Paris, is even more sceptical. "Humans are still more intelligent - they have feeling, they have consciousness. Robots are far from fulfilling these qualities. I do not believe they will have them one day. Moreover, I am not sure this would be a good idea," she says.

A promising area of research, already under way at some labs, involves equipping robots with mood-detection software and giving them rudimentary forms of social and emotional intelligence, which might help them match a person's emotional state. Endowing them with emotions may be essential if androids are ever to become sentient, artificially intelligent machines (in which case they may no longer be classifiable as machines). As Mr Feldon reminds us: "The neuroscientist Antonio Damasio argues that, in humans, reason and emotion are inextricably linked. Maybe that's going to be true for artificial intelligence, too."

Research is also intensifying into ways to allow robots to learn for themselves - something some believe will be necessary to pass the Turing test. All of this leads to an intriguing question. Might an android that has acquired the ability to learn for itself, and been seeded with basic emotions, one day surprise its builders by taking on unexpected qualities and behaviours - such as some human virtues and vices?

"It is certainly possible for a complex system to exhibit emergent behaviour - that is, to display new and unexpected properties, which appear abruptly above a certain threshold of complexity," says Professor Davies. "What is less obvious is that such behaviour would be remotely human-like. Human nature is not a result of the spontaneous emergence of novelty, but rather the product of a long period of evolutionary selection and adaptation, so it is fashioned to a great extent by the environment."
Given long enough, however, it is conceivable that androids might one day be made to think. If that happened - and it is admittedly a big if - they could perhaps evolve in ways that today's roboticists haven't imagined, even in their wildest droid dreams.

BREAKING FREE

Isaac Asimov, the Russian-born writer and biochemist, proposed three rather rigid robotic rules, or laws. The first states that robots should not harm humans or allow them to be harmed; the second, that robots should always obey humans (unless that infringes law one); and the third, that robots should always protect themselves (unless that infringes laws one or two).

Shuji Hashimoto, who directs the humanoid robotics centre at Tokyo's Waseda University, feels that Asimov's laws could inhibit the potential of androids for self-development. A robot's intelligence, he believes, should grow as it ages, learning through trial and error - in the way that a child matures.

In September 2006, New Scientist reported Dr Hashimoto as saying that robots, at present, were "merely play-acting and faking feelings... they are not machines with a heart; they just look like they have a heart". As long as robots obey Asimov's laws, "we will never have a machine that is a true partner for a human", he told the magazine. For him, "humans should not stand at the centre of everything. We need to establish a new relationship between human and machine."
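The strict precedence in Asimov's laws - each rule yielding to the ones before it - can be shown in a purely illustrative sketch. The predicates below are invented placeholders; deciding whether an action really "harms a human" is precisely the kind of judgement no current android can make, and no real robot is claimed to encode the laws this way.

    # Illustrative toy only: the precedence ordering of Asimov's three laws,
    # with invented placeholder flags standing in for real-world judgements.
    def permitted(action: dict) -> bool:
        # First Law: never injure a human or, through inaction, allow harm.
        if action.get("harms_human") or action.get("allows_human_harm"):
            return False
        # Second Law: obey human orders, unless that conflicts with the First Law.
        if action.get("disobeys_order"):
            return False
        # Third Law: protect its own existence, unless that conflicts with
        # the First or Second Law.
        if action.get("endangers_self") and not action.get("required_by_higher_law"):
            return False
        return True

    print(permitted({"disobeys_order": False}))                       # True: nothing forbids it
    print(permitted({"harms_human": True, "disobeys_order": False}))  # False: the First Law always wins

Dr Hashimoto's objection is visible even in this toy form: every branch puts human safety and human orders ahead of anything the robot might learn or choose for itself.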