Wednesday, August 22, 2007

The World's First Mode-locked Silicon Evanescent Laser Built

Source:

Science Daily — Researchers at UC Santa Barbara have announced they have built the world's first mode-locked silicon evanescent laser, a significant step toward combining lasers and other key optical components with the existing electronic capabilities in silicon. The research provides a way to integrate optical and electronic functions on a single chip and enables new types of integrated circuits. It introduces a more practical technology with lower cost, lower power consumption and more compact devices.
Mode-locked evanescent lasers can deliver stable short pulses of laser light that are useful for many potential optical applications, including high-speed data transmission, multiple wavelength generation, remote sensing (LIDAR) and highly accurate optical clocks.
Computer technology now depends mainly on silicon electronics for data transmission. If silicon can be made to emit light and exhibit other useful optical properties, photonic devices can be integrated directly on silicon. The problem? It has been extremely difficult, nearly impossible, to create a laser in silicon.
Less than one year ago, a research team at UCSB and Intel, led by John Bowers, a professor of electrical and computer engineering, created laser light from electrical current on silicon by placing a layer of InP above the silicon. In this new study, Bowers, Brian Koch, a doctoral student, and others have used this platform to demonstrate electrically-pumped lasers emitting 40 billion pulses of light per second. This is the first ever achievement of such a rate in silicon and one that matches the rates produced by other mediums in standard use today. These short pulses are composed of many evenly spaced colors of laser light, which could be separated and each used to transmit different high-speed information, replacing the need for hundreds of lasers with just one.
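The "evenly spaced colors" are the comb lines of the mode-locked laser, separated in frequency by the 40 GHz repetition rate. As a rough numerical sketch (the 1550 nm centre wavelength is an assumed telecom-band value, not a figure from the article), the spacing between adjacent wavelength channels comes out to roughly a third of a nanometre:

```python
# Illustrative sketch: comb lines of a mode-locked laser pulsing at 40 GHz.
# The 1550 nm centre wavelength is an assumed telecom-band value, not a
# number taken from the article.
C = 299_792_458.0          # speed of light, m/s
f_rep = 40e9               # repetition rate, Hz (40 billion pulses per second)
center_wl = 1550e-9        # assumed centre wavelength, m
f0 = C / center_wl         # centre optical frequency, Hz

# Five comb lines on either side of centre, each a potential data channel.
lines = [f0 + n * f_rep for n in range(-5, 6)]
wavelengths_nm = [C / f * 1e9 for f in lines]
spacing_nm = wavelengths_nm[0] - wavelengths_nm[1]
print(f"channel spacing per 40 GHz step: {spacing_nm:.3f} nm")
```

Each of those lines could in principle carry an independent data stream, which is why a single mode-locked source can stand in for many separate lasers.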
Creating optical components in silicon will lead to optoelectronic devices that can increase the amount and speed of data transmission in computer chips while using existing silicon technology. Employing existing silicon technology would represent a potentially less expensive and more feasible way to mass-produce future-generation devices that would use both electrons and photons to process information, rather than just electrons as has been the case in the past.
Reference: "Mode-Locked Silicon Evanescent Lasers," Optics Express, September 3, 2007, Vol. 15, Issue 18.
This research builds upon the development of the first hybrid silicon laser, announced by UCSB and Intel a year ago, enabling new applications for silicon-based optics.
Note: This story has been adapted from a news release issued by University of California - Santa Barbara.

Fausto Intilla
www.oloscience.com

Tuesday, August 21, 2007

Rocket-powered Mechanical Arm Could Revolutionize Prosthetics


Science Daily — Combine a mechanical arm with a miniature rocket motor: The result is a prosthetic device that is the closest thing yet to a bionic arm.
A prototype of this radical design has been successfully developed and tested by a team of mechanical engineers at Vanderbilt University as part of a $30 million federal program to develop advanced prosthetic devices.
"Our design does not have superhuman strength or capability, but it is closer in terms of function and power to a human arm than any previous prosthetic device that is self-powered and weighs about the same as a natural arm," says Michael Goldfarb, the professor of mechanical engineering who is leading the effort.
The prototype can lift (curl) about 20 to 25 pounds — three to four times more than current commercial arms — and can do so three to four times faster. "That means it has about 10 times as much power as other arms despite the fact that the design hasn't been optimized yet for strength or power," Goldfarb says.
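The arithmetic behind the power claim is simply that mechanical power is force times speed, so the article's rough "three to four times" factors multiply out to about a factor of ten:

```python
# Back-of-the-envelope check of Goldfarb's "10 times as much power" claim.
# Mechanical power = force x speed, so the two ratios multiply.
force_ratio = 3.5   # lifts roughly 3-4x the weight of current commercial arms
speed_ratio = 3.0   # moves roughly 3-4x faster
power_ratio = force_ratio * speed_ratio
print(power_ratio)  # about 10
```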
The mechanical arm also functions more naturally than previous models. Conventional prosthetic arms have only two joints, the elbow and the claw. By comparison, the prototype's wrist twists and bends, and its fingers and thumb open and close independently.
The Vanderbilt arm is the most unconventional of three prosthetic arms under development by a Defense Advanced Research Projects Agency (DARPA) program. The other two are being designed by researchers at the Applied Physics Laboratory at Johns Hopkins University in Baltimore, who head the program. Those arms are powered by batteries and electric motors. The program is also supporting teams of neuroscientists at the University of Utah, the California Institute of Technology and the Rehabilitation Institute of Chicago who are developing advanced methods for controlling the arms by connecting them to nerves in the users' bodies or brains.
"Battery power has been adequate for the current generation of prosthetic arms because their functionality is so limited that people don't use them much," Goldfarb says. "The more functional the prosthesis, the more the person will use it and the more energy it will consume."
Increasing the size of the batteries is the only way to provide additional energy for conventionally powered arms and, at some point, the weight of additional batteries becomes prohibitive.
It was the poor power-to-weight ratio of batteries that drove Goldfarb to look for alternatives in 2000 while he was working on a previous exoskeleton project for DARPA. He decided to miniaturize the monopropellant rocket motor system that is used for maneuvering in orbit by the space shuttle. His adaptation impressed the Johns Hopkins researchers, so they offered him $2.7 million in research funding to apply this approach to the development of a prosthetic arm.
Goldfarb's power source is about the size of a pencil and contains a special catalyst that causes hydrogen peroxide to decompose. As it decomposes, the compound releases hot steam (along with oxygen). The steam is used to open and close a series of valves. The valves are connected to the spring-loaded joints of the prosthesis by belts made of a special monofilament used in appliance handles and aircraft parts. A small sealed canister of hydrogen peroxide that easily fits in the upper arm can provide enough energy to power the device for 18 hours of normal activity.
The first prototype, which took a year to develop, was powered by "cold gas": compressed nitrogen. It allowed the researchers to test the fundamental design and to address the basic problems of control, leakage and noise. The team was happy to discover that they could solve all of the basic problems by designing the valves with the highest precision possible, with clearances of 50 millionths of an inch.
"There are only a handful of machinists who can make valves with this precision. We found one and asked him to make them with the highest precision possible, which is actually higher than he can measure," says Goldfarb. "Normally in projects like this the surprises are unpleasant, but this was a pleasant one. The valves didn't leak, click or hiss!"
After getting the arm working with cold gas, the engineers tore it down and rebuilt it to operate on "hot gas" — steam that is heated to 450 degrees Fahrenheit by the hydrogen peroxide reaction.
One of their immediate concerns was protecting the wearer and others in close proximity from the heat generated by the device. They covered the hottest part, the catalyst pack, with a millimeter-thick coating of a special insulating plastic that reduced the surface temperature enough so it was safe to touch. The hot steam exhaust was also a problem, which they decided to handle in as natural a fashion as possible: by venting it through a porous cover, where it condenses and turns into water droplets. "The amount of water involved is about the same as a person would normally sweat from their arm on a warm day," Goldfarb says.
To allow for thermal expansion, the engineers replaced the arm's nine valves with a set machined to a slightly lower tolerance, approximately 100 millionths of an inch. But when they began operating the rebuilt arm, they found that it hissed and leaked. At first, they thought that the arm had only a single leak, and spent several weeks trying to track it down. Finally, they realized that the noise and leakage were coming from all the valves. Replacing the high-precision valve set solved the problem. "We were astonished by the difference between 50 millionths and 100 millionths: it made all the difference in the world," says Goldfarb.
Their biggest problem operating with hot gas turned out to be finding belt material that was strong enough and could withstand the high temperatures involved. They tried silk surgical sutures, but found that silk wasn't strong enough. They tried nylon monofilament, which is stronger than steel, but it couldn't take the heat. Finally, after a long process of trial and error, they found a material that works: the engineering thermoplastic polyether ether ketone (PEEK).
The engineers solved these and a number of other smaller problems and got the second prototype working properly by the end of June.
In the fall, DARPA's "Revolutionizing Prosthetics 2009" program will move to its second stage. Even though his team has met all its research milestones and has produced a working prototype, Goldfarb is not certain that it will be funded for the new stage. "DARPA has set a goal of developing a commercially available arm in two years. Because of our novel power source, the process of proving that our design is safe and getting regulatory approval for its use will probably take longer than that," he says.
If DARPA decides it cannot continue supporting the arm's development for this reason, Goldfarb is confident that he can get alternative funding. "We have made so much progress and gotten such positive feedback from the research community that I'm certain we'll be able to keep going," he says.
Note: This story has been adapted from a news release issued by Vanderbilt University.

Fausto Intilla
www.oloscience.com

New Method For Mass Production Of Nanogap Electrodes Developed

Source:
Science Daily — Researchers at the University of Pennsylvania have developed a reliable, reproducible method for parallel fabrication of multiple nanogap electrodes, a development crucial to the creation of mass-produced nanoscale electronics.
Charlie Johnson, associate professor in the Department of Physics and Astronomy and the Department of Materials Science and Engineering at Penn, and colleagues created the self-balancing single-step technique using feedback controlled electromigration, or FCE. By using a novel arrangement of nanoscale shorts they showed that a balanced self-correcting process occurs that enables the simultaneous electromigration of sub-5 nm sized nanogaps. The nanogaps are controllably formed by carefully applying an electric current which pushes the atoms of the metallic wire through the process of electromigration.
In the study, the researchers described the simultaneous self-balancing of as many as 16 nanogaps using thin sheets of gold and FCE methodology originally developed at Penn. Using electron-beam lithography, Penn researchers constructed arrays of thin gold leads connected by narrow constrictions that were less than 100 nm in width. Introducing a voltage forced electrons to flow through these narrow constrictions in the gold, meeting with greater resistance as each constriction narrowed in response to electromigration.
The narrower a constriction became, the more electrons were diverted to the other, wider constrictions, taking the path of least resistance. This balanced interplay ensured that electromigration occurred simultaneously across the constrictions. After a few minutes, the applied current narrowed the constrictions until they broke open, forming gaps of roughly one nanometer in size with atomic-scale uniformity. By monitoring the electric-current feedback, researchers could adjust the size of the nanogaps as well.
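The balancing mechanism described in the last two paragraphs can be caricatured in a few lines of simulation: model the constrictions as parallel conductors whose conductance scales with width, and let the erosion rate grow as the square of the current. The quadratic erosion law is an assumption of this toy model (a crude stand-in for Joule heating), not the Penn group's actual physics; its effect is that wider constrictions thin faster, so the widths converge as they shrink:

```python
# Toy model of feedback-controlled electromigration across parallel
# constrictions (an illustrative sketch, not the Penn group's model).
# Assumptions: conductance of each constriction scales with its width,
# and the electromigration rate scales with the square of the current
# through it. Wider constrictions then thin faster, so the widths
# self-balance on their way down to the ~1 nm gap threshold.

def narrow(widths, rate=1e-5, steps=200_000, stop_at=1.0):
    w = list(widths)
    for _ in range(steps):
        if min(w) <= stop_at:          # a ~1 nm gap is about to open
            break
        # current through constriction i is proportional to its width wi,
        # and erosion per step is proportional to that current squared
        w = [wi - rate * wi**2 for wi in w]
    return w

def rel_spread(w):
    return (max(w) - min(w)) / (sum(w) / len(w))

start = [100.0, 80.0, 60.0]            # initial constriction widths, nm
end = narrow(start)
print(rel_spread(start), "->", rel_spread(end))  # spread shrinks
```

Starting widths spread over nearly a factor of two end up within about one percent of each other by the time a gap is ready to open, which is the self-correcting behaviour the researchers exploit.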
Nanotechnology shows promise for revolutionizing materials and electronics by reducing the size and increasing the functionality of new composite materials; however, creating these materials is time consuming and costly, and it requires precise control at the atomic level, a scale that is difficult or impossible to achieve with current technology.
During the last several years there has been progress towards developing single nanometer-sized gaps and nanodevices. Yet their extremely low reproducibility has hindered any real chance of their use on the industrial scale, which is crucial to the development of the complex circuits that would be required to build, for example, a computer out of nanoelectronics.
"Reproducibility is one of the major issues facing nanotechnology, and it's required us to depart from the standard ways of achieving this in micro-electronics processing," said Douglas Strachan of the Department of Materials Science and Engineering and the Department of Physics and Astronomy at Penn. "When you first hear of opening up a wire with a current, you usually think of a fuse. To think that this sort of technique could actually lead to atomically-precise nanoelectronics is sort of mind blowing."
Danvers Johnston of the Department of Physics and Astronomy said, "Since it is impossible to mold nanoscale-size objects with any other lab tools, we direct the electrons to get them to do the work for us."
The research was performed by Johnson, Strachan, and Johnston and the findings appear online in the journal Nano Letters.
The research was supported by the National Science Foundation.
Note: This story has been adapted from a news release issued by University of Pennsylvania.

Fausto Intilla
www.oloscience.com

Friday, August 17, 2007

Nanoscale Blasting Adjusts Resistance In Magnetic Sensors


Source:

Science Daily — A new process for adjusting the resistance of semiconductor devices by carpeting a small area of the device with tiny pits, like a yard dug up by demented terriers, may be the key to a new class of magnetic sensors, enabling new, ultra-dense data storage devices. The technique demonstrated by researchers at the National Institute of Standards and Technology (NIST)* allows engineers to tailor the electrical resistance of individual layers in a device without changing any other part of the processing or design.
The tiny magnetic sensors in modern disk drives are a sandwich of two magnetic layers separated by a thin buffer layer. The layer closest to the disk surface is designed to switch its magnetic polarity quickly in response to the direction of the magnetic "bit" recorded on the disk under it. The sensor works by measuring the electrical resistance across the magnetic layers, which changes depending on whether the two layers have matching polarities.

As manufacturers strive to make disk storage devices smaller and more densely packed with data, the sensors need to shrink as well, but current designs are starting to hit the wall. To meet the size constraints, prototype sensors measure resistance perpendicular to the thin layers, but depending on the buffer material in the sensor, two different types of sensors can be made. Giant magneto-resistance (GMR) sensors use a low-resistance metal buffer layer and are fast, but plagued by very low, difficult-to-detect signals. On the other hand, magnetic tunnel junction (MTJ) sensors use a relatively high-resistance insulating buffer that delivers a strong signal but has a slower response time, too slow to keep up with a very high-speed, high-capacity drive.

What's needed, says NIST physicist Josh Pomeroy, is a compromise. "Our approach is to combine these at the nanometer scale. We start out with a magnetic tunnel junction--an insulating buffer--and then, by using highly charged ions, sort of blow out little craters in the buffer layer so that when we grow the rest of the sensor on top, these craters will act like little GMR sensors, while the rest will act like an MTJ sensor." The combined signal of the two effects, the researchers argue, should be superior to either alone.

The NIST team has demonstrated the first step--the controlled pockmarking of an insulating layer in a multi-layer structure to adjust its total resistance.
The team uses small numbers of highly charged xenon ions, each of which carries enormous potential energy and can blast out a surface pit without damaging the substrate. With each ion carrying more than 50 thousand electron volts of potential energy, only one impact is needed to create a pit; multiple hits in the same location are not necessary. Controlling the number of ions provides fine control over the number of pits etched, and hence the resistance of the layer--currently demonstrated over a range of three orders of magnitude. NIST researchers now are working to incorporate these modified layers into working magnetic sensors.

The new technique alters only a single step in the fabrication process--an important consideration for future scale-up--and can be applied to any device where it's desirable to fine-tune the resistance of individual layers. NIST has a provisional patent on the work, number 60,905,125.

* J.M. Pomeroy, H. Grube, A.C. Perrella and J.D. Gillaspy. Selectable resistance-area product by dilute highly charged ion irradiation. Appl. Phys. Lett. 91, 073506 (2007).
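The resistance tuning works because the intact insulating barrier and the ion-made pits conduct in parallel, so their conductances add and the pit count dials the total resistance. A sketch with made-up conductance values (illustrative assumptions, not NIST's measured numbers):

```python
# Sketch of why pit count tunes the layer's resistance: the intact
# insulating barrier and the ion-blasted pits conduct in parallel, so
# their conductances add. Both conductance values below are illustrative
# assumptions, not NIST's measurements.

G_BARRIER = 1e-6   # conductance of the unpitted tunnel barrier, siemens (assumed)
G_PIT = 1e-5       # extra conductance contributed by one pit, siemens (assumed)

def layer_resistance(n_pits):
    return 1.0 / (G_BARRIER + n_pits * G_PIT)

r0 = layer_resistance(0)
r100 = layer_resistance(100)
print(r0 / r100)   # about 1000: three orders of magnitude, set by ion dose
```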
Note: This story has been adapted from a news release issued by National Institute of Standards and Technology.

Fausto Intilla

Thursday, August 16, 2007

Hiroshi Ishiguro's Androids


Source:
Androids, it seems, have appearance in the bag. But is their intelligence only skin-deep? Peter Spinks finds sharp divisions over the likely future of robotic intellect.

Telling Hiroshi Ishiguro apart from his robotic replica can be tough. The professor's silicone brainchild, cast from plastic and plaster moulds of his face and body and implanted with some of his sweeping black locks, is as realistic as a Madame Tussaud's waxworks model - down to the frown on his broad forehead. The android, Geminoid HI-1, mimics its master's posture, lip movements and facial gestures - even his fidgety fingers and feet. It blinks, seems to breathe and sounds, for all the world, like its human counterpart.

Remote-controlled Geminoid stands in when the busy professor doesn't have time to lecture and presents a compelling scripted talk. But when a member of the audience asks a searching question, Geminoid smiles blandly at the inquirer and says nothing. And when people get up to leave, the android sits glued to the chair: its legs, designed only for micro-movements, can't stand or walk.

Androids - robots that resemble humans - are increasingly popular exhibits at robotics conferences and trade shows worldwide. Unlike humanoids, which have two arms and two legs but look more like machines than people, androids are appealing because they seem so much like us. Enticed by research suggesting that people relate better to robots the more they resemble humans, roboticists are developing androids that one day might assist in aged care and eventually supersede servile robotic home helps, such as automated floor-crawling vacuum cleaners.

"The appearance of androids is important and we cannot ignore its effect in communication," says Professor Ishiguro, a pioneer in android science, which combines cutting-edge research in robotics with slow but sure advances in cognitive psychology. "My purpose is to understand humans by building androids...
The practical use of androids is a kind of by-product." Until recently, roboticists had not taken them seriously, regarding androids largely as electronic puppets. Most of the technical effort went into humanoids, which might vacuum the house or mow the lawn, and service or assembly-line robots, which perform specific tasks very well, says Ray Jarvis, director of the Intelligent Robotics Research Centre at Monash University in Melbourne. "Androids like Geminoid and Repliee have been designed the other way around - they look the piece, but are much more limited in what they can do."

Androids cannot, for example, make a cup of tea, do the housework or explore their environment, says Alan Winfield, Professor of Electronic Engineering at the University of the West of England, in Bristol. "They are not autonomous - relying on a power feed or batteries. They can't repair themselves or reproduce. They are not conscious or sentient. They can't reason about themselves or the world. I could go on - in short, current androids can't do very much at all and fall far, far short of the androids we see in the movies."

This sober but realistic view is widely held. Roboticist Phillip McKerrow, from Wollongong University's School of Computer Science and Software Engineering, sums up present android capabilities thus: "They are basically expensive toys with limited dexterity and minimal intelligence." Put another way, current androids are "a bit like a copy of Microsoft Word on wheels," says Brett Feldon, the chief technology officer at Sydney-based VeCommerce, which provides voice self-service and speaker verification systems.

That androids still reflect a triumph of form over substance is illustrated by another Ishiguro creation, a female android dubbed Repliee Q2, which shares many of Geminoid's drawbacks.
Developed by Osaka University's Department of Adaptive Machine Systems in conjunction with a Tokyo-based animatronics maker, Kokoro, Repliee was originally modelled on the professor's daughter when she was four years old. The android has undergone several upgrades and now bears a striking resemblance to a rather fetching young Japanese woman. Its movements - orchestrated by a series of 42 air-powered actuators, including 13 in its head - are almost as free and smooth as a human's. It appears to breathe and can flutter its eyelids. Omnidirectional cameras recognise gestures, such as a raised hand, and track human facial movements. Delicate physiological sensors embedded in Repliee's pliable silicone skin detect touch and other sensations. Microphones, linked to a voice-recognition system, allow it to hear and respond to human speech.

But here lies the rub: while the voice is realistic enough, its speech is little more convincing than that of a voice-activated telebanker. Like Geminoid, Repliee can reply with only a dozen or so words and, as with other voice-recognition devices, performs poorly in noisy places. The inability of androids to communicate intelligently with humans is their biggest bugbear. They may speak in limited ways on specific topics, but cannot converse widely and certainly not on abstract subjects such as philosophy.

In short, they're a long way from passing the legendary Turing test, described in 1950 by British mathematician Alan Turing. It requires a human judge to converse separately with a person and a computer - not knowing which is which. If the judge cannot tell the two apart, the machine passes the test. "Today's robots wouldn't pass the Turing test because they communicate by matching patterns of speech," explains Professor Robert Dale, director of Macquarie University's Centre for Language Technology, in Sydney. All is scripted - there's no reasoning going on, and certainly no understanding, he says.
This much was evident at a recent international robotics conference, ICRA 07, in Rome, where mobile robots mingled with delegates and reacted in simple ways - but showed no understanding. "They were programmed to do something in response to a sensor input," says Associate Professor McKerrow, who attended the conference. "One would say 'ouch' when someone kicked its bumper. Another would say a sentence when it detected a face . . . I saw a four-year-old boy trying to talk to this robot and getting very frustrated. He was saying: 'Remember me, you talked to me a while ago.'"

Despite androids' limited communication skills, their physical make-up and prowess - from skin to limbs, muscles to motion - are progressing in leaps and bounds. At present, android skin, like Repliee's, comprises silicone, generally embedded with touch sensors. Their limbs are often made of aluminium. In future, limbs will more than likely consist of carbon fibre, in the way of squash and tennis racquets, suggests Professor Jarvis.

Artificial muscles, another key area of research, presently tend to consist of dozens, sometimes hundreds, of electric motors with torque feedback, enabling their joints to seem compliant. Some robots are connected to compressors that power "air muscles" - rubber tubes that contract when high-pressure air is blown into them. When the air is released, the tube relaxes and lengthens again. Trials are under way into alternative novel materials, including Nitinol wire, a strong but lightweight shape-memory alloy of nickel and titanium, and electro-elastic polymers, which stretch and contract. (Auckland University successfully demonstrated the latter at last year's Australasian Robotics Conference.) Particularly encouraging is research into artificial muscles comprising sheets of carbon nanotubes - big cylindrical molecules of pure carbon with unusual electrical and mechanical properties.
When an electrical voltage is applied gradually, ions inside the carbon move to one side, bending the nanotube sheets. The speed and extent to which the sheets bend depend on how fast and by how much the power is increased. When the power is switched off, the sheets unbend and return to their original shape. This allows the artificial muscles to work like humans' opposing muscle groups. Some of these materials might be used for android fingers in the future, predicts Professor Jarvis, who favours Nitinol, which bends or stretches when heated by an electric current. MIT researchers in the United States, on the other hand, are developing skin that senses, and can stop, something slipping through an android's fingers.

Processors and operating systems powering androids vary widely. Some rely on conventional, personal computer-style Pentium processors of one gigahertz and upwards, although a few state-of-the-art models have customised processors of the kind found in calculators and specialised appliances. Most run on real-time operating systems such as RT-Linux. A few androids communicate wirelessly through Bluetooth with a central computer. Many work on mains power. But a few are autonomous.

Britain's Bristol Robotics Laboratory, for example, runs some of its prototype robots on microbial fuel cells, which convert chemical energy into electricity. Although the devices are not androids, and move very slowly, they extract power from biomass - such as rotting fruit and other foods. Not to be outdone by the British, Austria's Humanoid Robotic Laboratory in Linz has built a "Barbot", which buys beer at the bar and drinks it. The plan is to convert the beer into power to run the robot.

How might tomorrow's androids function?
In years to come, roboticists expect that advanced systems - incorporating neural networks, genetic algorithms and fuzzy logic - will run on an assortment of very small, very fast processors, each performing a specific task in parallel and communicating simultaneously over lightning-fast networks. One or more master processors will probably synchronise and co-ordinate the maze of brain and body functions. Mr Feldon submits that android brains might one day rely on quantum computing power. "There are proposals for quantum computing based on silicon, as well as other physical systems - for example trapped ions, superconductors or molecules," says theoretical physicist Dr Ron Burman, of the University of Western Australia in Perth. Professor McKerrow expects that what a robot will be built to do "will be determined by three factors: technological capability, task requirements and cost".

Aside from such practical considerations, might androids, in the dim distant future, become inseparable from human beings? Are conscious androids feasible? The December 2006 issue of the journal Cognitive Processing reported that French researchers, led by Dr Alain Cardon, had proposed to develop software producing a level of artificial consciousness. Is this pie in the sky? Or could silicon some day supplant carbon-based life forms, such as us?

In part, this depends on whether one takes the view of American philosopher Daniel Dennett, who believes that humans are immensely complex and sophisticated computational machines. For him, brute force computing power might eventually mimic the human mind. "Dennett says some eminently sensible things about consciousness and some silly things," says internationally renowned physicist Paul Davies, now based at Arizona State University. "Regarding consciousness as nothing but a vast amount of digital information processing seems to me to be a completely unjustified claim."
Professor Davies feels that "the fixation with the digital computer as a model of human consciousness," as he puts it, "is holding up our understanding of the subject". The comparison, he says, "is akin to studying cartoon characters to obtain insights into biology. As the philosopher John Searle asks of consciousness: 'How does the brain do it?' The truth is, we haven't a clue. But however it does it, I am sure it involves more than merely flipping lots of informational bits very rapidly, which is what proponents of the digital computer route to artificial intelligence suggest."

Mr Feldon agrees: "In my view, we're as far away from artificial intelligence now as we thought we were in the '60s." Professor Catherine Pelachaud, a computer scientist at France's University of Paris, is even more sceptical. "Humans are still more intelligent - they have feeling, they have consciousness. Robots are far from fulfilling these qualities. I do not believe they will have them one day. Moreover, I am not sure this would be a good idea," she says.

A promising area of research, already under way at some labs, involves equipping robots with mood-detection software and giving them rudimentary forms of social and emotional intelligence. This might help them match a person's emotional state. Endowing them with emotions may be essential if androids are ever to become sentient, artificially intelligent machines (in which case they may no longer be classifiable as machines). As Mr Feldon reminds us: "The neuroscientist Antonio Damasio argues that, in humans, reason and emotion are inextricably linked. Maybe that's going to be true for artificial intelligence, too."

Research is also intensifying into ways to allow robots to learn for themselves. Some believe this will be necessary to pass the Turing test. All of this leads to an intriguing question.
Might an android, which has acquired the ability to learn for itself, and been seeded with basic emotions, one day surprise its builders by taking on unexpected qualities and behaviours - such as some human virtues and vices? "It is certainly possible for a complex system to exhibit emergent behaviour - that is, to display new and unexpected properties, which appear abruptly above a certain threshold of complexity," says Professor Davies. "What is less obvious is that such behaviour would be remotely human-like. Human nature is not a result of the spontaneous emergence of novelty, but rather the product of a long period of evolutionary selection and adaptation, so it is fashioned to a great extent by the environment." Given long enough, however, it is conceivable that androids might one day be made to think. If that happened, and it is admittedly a big if, they could perhaps evolve in ways that today's roboticists hadn't imagined - even in their wildest droid dreams. BREAKING FREE Isaac Asimov, the Russian-born writer and biochemist, proposed three rather rigid robotic rules, or laws. The first states that robots should not harm humans or allow them to be harmed; the second that robots should always obey humans (unless that infringes law one) and the third law says robots should always protect themselves (unless that infringes laws one or two). Shuji Hashimoto, who directs the humanoid robotics centre at Tokyo's Waseda University, feels that Asimov's laws could inhibit the potential of androids for self-development. A robot's intelligence, he believes, should grow as it ages, learning through trial and error - in the way that a child matures. In September 2006, New Scientist reported Dr Hashimoto as saying that robots, at present, were "merely play-acting and faking feelings . . . they are not machines with a heart; they just look like they have a heart". 
As long as robots obey Asimov's laws, "we will never have a machine that is a true partner for a human", he told the magazine. For him, "humans should not stand at the centre of everything. We need to establish a new relationship between human and machine."
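Asimov's three laws form a strict priority ordering: avoiding harm to humans outranks obedience, which outranks self-preservation. As a minimal sketch, that ordering can be modelled as a lexicographic comparison - everything here (the `Action` class, its fields, the scenario) is an illustrative toy, not anything from the article:

```python
# Toy model of Asimov's three laws as a lexicographic priority:
# harming a human is worse than disobeying an order, which is worse
# than endangering the robot itself. All names here are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    harms_human: bool      # would this action injure a human?
    disobeys_order: bool   # does it contradict a human order?
    endangers_self: bool   # does it put the robot at risk?

def choose(actions):
    """Pick the action that best satisfies the laws, in priority order.

    Tuples of booleans compare lexicographically (False < True), so the
    First Law dominates the Second, which dominates the Third.
    """
    return min(actions, key=lambda a: (a.harms_human,
                                       a.disobeys_order,
                                       a.endangers_self))

# A robot ordered into a dangerous situation: obeying endangers itself,
# refusing disobeys the order. The Second Law outranks the Third, so the
# robot obeys.
obey = Action("enter burning room", False, False, True)
refuse = Action("stay outside", False, True, False)
best = choose([obey, refuse])
```

The point Hashimoto makes above is visible even in this toy: the priority order is fixed at design time, leaving no room for the robot to renegotiate it as it learns.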

Fausto Intilla

Robot walks on water

Source:

Researchers Yun Seong Song, a PhD student in mechanical engineering, and Metin Sitti, assistant professor in mechanical engineering, both from Carnegie Mellon University, have recently built a robot that mimics the water strider's natural abilities. This first water-striding robot, whose appearance and design closely resemble its insect counterpart, never breaks the surface tension of the water and is highly maneuverable.
Song and Sitti's small robot differs from other floating robots in that its small mass and long legs let it exploit the surface tension force to stay afloat; macroscale bodies, in contrast, must rely on buoyancy, which depends on their large volumes. The researchers predict that such a robot might be used for environmental monitoring via wireless communication, as well as for educational and entertainment purposes.

"Water strider robots - we call them STRIDEs (Surface Tension Robotic Insect Dynamic Explorer) - can walk on water only 3-4 mm deep (shallow water)," Sitti explained to PhysOrg.com. "Their power efficiency and agility (speed and maneuverability) are much superior for relatively small water vehicles, since the STRIDE legs have much less drag than any buoyancy-based robot." As Sitti explained, this advantage disappears once a STRIDE approaches metre scale, since surface tension force does not scale favorably to large sizes.

From models and calculations, the researchers found that an optimal robot would have hydrophobic, Teflon-coated wire legs, each 5 cm long. Twelve of these legs attached to the 1-gram body of the robot could support a payload of up to 9.3 grams in their experiments. The key to avoiding breaking the water surface is maintaining a water-air interface that is more horizontal than vertical.

For locomotion, the water strider insect sculls with specialized legs, and the robot functions the same way. Three piezoelectric actuators, attached to the legs in a T shape, create both vertical and horizontal motion, producing the elliptical sculling stroke required to move. Because the piezoelectric actuators provide only a small deflection, their strokes had to be amplified: the researchers drove the actuators at a resonant frequency whose vibration mode favors the sculling motion.
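The payload figures quoted above can be sanity-checked with a rough back-of-envelope estimate. Assuming a thin hydrophobic leg can be supported by surface tension acting along both sides of its contact line (F ≈ 2·σ·L per leg) - an assumption of this sketch, not a number from the paper - the leg count and length in the article give a supportable mass of the same order as the reported ~10 g (1 g body plus 9.3 g payload):

```python
# Back-of-envelope estimate of the lift available to twelve 5-cm
# hydrophobic legs resting on water, assuming F_max ≈ 2 * sigma * L per
# leg (surface tension pulling up along both sides of the contact line).
SIGMA = 0.072   # surface tension of water at room temperature, N/m
G = 9.81        # gravitational acceleration, m/s^2

n_legs = 12
leg_length = 0.05                        # 5 cm per leg, in metres

total_contact = n_legs * leg_length      # 0.6 m of leg on the surface
max_force = 2 * SIGMA * total_contact    # maximum vertical force, N
max_mass_g = max_force / G * 1000        # supportable mass, grams

print(f"max supportable mass = {max_mass_g:.1f} g")
```

This gives roughly 8.8 g, close to the experimental figure; the real capacity also depends on the contact angle and the shape of the meniscus, which this estimate ignores.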
While a water strider insect can move at speeds of up to 1.5 m/s, this first robot achieved a forward speed of 3 cm/s, and could also turn, rotate and move backwards. Currently, rough water may present a hazard to the robot striders; however, Sitti hoped that improving the lift capability and water sealing of the current STRIDE would let the robot withstand waves and storms.

Song and Sitti are working on other improvements for future prototypes. For example, the T-shaped actuator mechanism accounts for more than half the weight of the robot, which may be limiting the robot's speed. In a more recent prototype, the researchers designed a STRIDE that used two battery-powered micromotors to walk on water; although this set-up had an increased mass of 6 grams, the robot also achieved an increased speed of 8.7 cm/s. The group plans to continue experimenting with different options.

"STRIDE is 10-15 times slower than the insect, since the current prototype is almost 10 times larger than the insect," Sitti explained. "Therefore, we would miniaturize the STRIDE more. Moreover, we are integrating wireless communication, sensors, and teleoperated and autonomous control capability to the new STRIDE prototypes. Thus, we could deploy tens or hundreds of these robots to the water surface for environment monitoring."

Citation: Song, Yun Seong, and Sitti, Metin. "Surface-Tension-Driven Biologically Inspired Water Strider Robots: Theory and Experiments." IEEE Transactions on Robotics, Vol. 23, No. 3, June 2007.

Copyright 2007 PhysOrg.com. All rights reserved.
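The elliptical sculling stroke described above comes from combining two sinusoidal drive signals roughly 90 degrees out of phase - one horizontal, one vertical, as the T-shaped actuator arrangement provides. A minimal sketch of that kinematics, with illustrative amplitudes and frequency (not values from the paper):

```python
# Sketch of an elliptical sculling stroke: a horizontal and a vertical
# sinusoid, 90 degrees out of phase, trace an ellipse at the leg tip.
# Amplitudes and drive frequency below are illustrative assumptions.
import math

def leg_tip(t, f=50.0, ax=2e-3, ay=1e-3):
    """Leg-tip position (x, y) in metres at time t, drive frequency f Hz."""
    phase = 2 * math.pi * f * t
    x = ax * math.cos(phase)   # horizontal (sculling) component
    y = ay * math.sin(phase)   # vertical component, 90 deg out of phase
    return x, y

# Over one drive period the tip traces a closed ellipse:
# (x/ax)^2 + (y/ay)^2 == 1 at every instant.
x, y = leg_tip(0.0123)
```

Driving the actuators at resonance, as the researchers did, amplifies the small piezoelectric deflections into strokes large enough for this ellipse to propel the robot.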

Fausto Intilla
www.oloscience.com

Engineers Announce Plastic, Air- And Light-driven Device More Precise Than Human Hand


Source:

Science Daily — Engineers at the Johns Hopkins Urology Robotics Lab report the invention of a motor, built without metal or electricity, that can safely power remote-controlled robotic medical devices used for cancer biopsies and therapies guided by magnetic resonance imaging (MRI). The motor can be controlled by computer so precisely that its movements are steadier and more accurate than a human hand's.
"Lots of biopsies on organs such as the prostate are currently performed blind because the tumors are typically invisible to the imaging tools commonly used," says Dan Stoianovici, Ph.D., an associate professor of urology at Johns Hopkins and director of the robotics lab. "Our new MRI-safe motor and robot can target the tumors. This should increase accuracy in locating and collecting tissue samples, reduce diagnostic errors and also improve therapy."

A description of the new motor, made entirely of plastics, ceramics and rubber, and driven by light and air, was published in the February issue of the IEEE/ASME Transactions on Mechatronics. The challenge for his engineering team was MRI's reliance on a strong magnetic field: metals are unsafe near the scanner's magnet, and electric currents distort MR images, says Stoianovici. The team used six of the motors to power the first-ever MRI-compatible robot to access the prostate gland; the robot currently is undergoing preclinical testing. "Prostate cancer is tricky because it only can be seen under MRI, and in early stages it can be quite small and easy to miss," says Stoianovici.

The new Johns Hopkins motor, dubbed PneuStep, consists of three pistons connected to a series of gears. The gears are turned by air flow, which is in turn controlled by a computer located in a room adjacent to the MRI machine. "We're able to achieve precise and smooth motion of the motor as fine as 50 micrometers, finer than a human hair," says Stoianovici.

The robot sits alongside the patient in the MRI scanner and is controlled remotely by observing the MR images. The motor is rigged with fiber optics, which feed information back to the computer in real time, allowing for both guidance and readjustment. "The robot moves slowly but precisely, and our experiments show that the needle always comes within a millimeter of the target," says Stoianovici.
This type of precision control will allow physicians to use instruments in ways that currently are not possible, he says. "This remarkable robot has a lot of promise - the wave of the future is image-guided surgery to better target, diagnose and treat cancers with minimally invasive techniques," says Li-Ming Su, M.D., an associate professor of urology and director of laparoscopic and robotic urologic surgery at the Brady Urological Institute at Hopkins.

The research was funded by the National Institutes of Health, the Prostate Cancer Foundation, and a grant from the Johns Hopkins Medicine Alliance for Science and Technology Development Industry Committee. Current experiments with the robot are supported by the Patrick C. Walsh Foundation. Authors on the paper are Stoianovici, Alexandru Patriciu, Doru Petrisor, Dumitru Mazilu, and Louis Kavoussi, all of Hopkins.
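Because a stepper-style motor like the one described above moves in fixed increments (50 micrometres per step, per the article), positioning reduces to counting whole steps toward a target. The sketch below is a hypothetical illustration of that arithmetic - the function and its parameters are assumptions, not part of the PneuStep control software:

```python
# Illustrative step arithmetic for a motor that moves in fixed
# 50-micrometre increments, as the article quotes for PneuStep.
# The controller issues whole steps, so positioning error is bounded
# by half a step.
STEP_M = 50e-6   # metres travelled per step

def steps_to_target(current_m, target_m, step=STEP_M):
    """Signed number of whole steps that brings us closest to target."""
    return round((target_m - current_m) / step)

# Example: advancing a needle 10 mm toward a lesion.
n = steps_to_target(0.0, 0.010)
residual = 0.010 - n * STEP_M   # remaining error after stepping
```

In the real system the fiber-optic feedback described above closes this loop, letting the computer re-issue steps until the needle is within a millimetre of the target.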
Note: This story has been adapted from a news release issued by Johns Hopkins Medical Institutions.

Fausto Intilla