Sunday, April 27, 2008

Next Step In Robot Development Is Child's Play


Source:
ScienceDaily (Apr. 26, 2008) — Teaching robots to understand enough about the real world to allow them to act independently has proved much more difficult than first thought.
The team behind the iCub robot believes it, like children, will learn best from its own experiences.
The technologies developed on the iCub platform – such as grasping, locomotion, interaction, and even language-action association – are of great relevance to further advances in the field of industrial service robotics.
The EU-funded RobotCub project, which designed the iCub, will send one each to six European research labs. Each of the labs proposed winning projects to help train the robots to learn about their surroundings – just as a child would.
The six projects include one from Imperial College London that will explore how ‘mirror neurons’ found in the human brain can be translated into a digital application. ‘Mirror neurons’, discovered in the early 1990s, trigger memories of previous experiences when humans are trying to understand the physical actions of others. A separate team at UPF Barcelona will also work on iCub’s ‘cognitive architecture’.
At the same time, a team headquartered at UPMC in Paris will explore the dynamics needed to achieve full body control for iCub. Meanwhile, researchers at TUM Munich will work on the development of iCub’s manipulation skills. A project team from the University of Lyons will explore internal simulation techniques – something our brains do when planning actions or trying to understand the actions of others.
Over in Turkey, a team based at METU in Ankara will focus almost exclusively on language acquisition and the iCub’s ability to link objects with verbal utterances.
“The six winners had to show they could really use and maintain the robot, and secondly the project had to exploit the capabilities of the robot,” says Giorgio Metta. “Looking at the proposals from the winners, it was clear that if we gave them a robot we would get something in return.”
The iCub robots are about the size of three-year-old children, with highly dexterous hands and fully articulated heads and eyes. They have hearing and touch capabilities and are designed to be able to crawl on all fours and to sit up.
Humans develop their abilities to understand and interact with the world around them through their experiences. As small children, we learn by doing and we understand the actions of others by comparing their actions to our previous experience.
The developers of iCub want to build up their robots’ cognitive capabilities by mimicking that process. Researchers from the EU-funded RobotCub project designed the iCub’s hardware and software as a modular system. The design increases the efficiency of the robot and allows researchers to update individual components more easily. It also allows large numbers of researchers to work independently on separate aspects of the robot.
iCub’s software code and technical drawings are free to anyone who wishes to download and use them.
“We really like the idea of being open as it is a way to build a community of many people working towards a common objective,” says Giorgio Metta, one of the developers of iCub. “We need a critical mass working on these types of problems. If you get 50 researchers, they can really layer knowledge and build a more complex system. Joining forces really makes economic sense for the European Commission that is funding these projects and it makes scientific sense.”
Built-in learning skills
While the iCub’s hardware and mechanical parts are not expected to change much over the next 18 months, researchers expect to develop the software further. To enable iCub to learn by doing, the RobotCub research team is trying to pre-fit it with certain innate skills.
These include the ability to track objects visually or by sound – with some element of prediction of where the tracked object will move next. iCub should also be able to navigate based on landmarks and a sense of its own position.
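As a rough illustration of what such predictive tracking can look like in software, the sketch below implements a simple alpha-beta filter under a constant-velocity assumption; the class name, gains and measurements are illustrative and are not taken from the RobotCub code.

```python
# Minimal sketch of predictive object tracking with an alpha-beta filter.
# Assumes a constant-velocity model; names and gains are illustrative only
# and are not taken from the RobotCub/iCub software.

class AlphaBetaTracker:
    def __init__(self, alpha=0.85, beta=0.005, dt=0.033):
        self.alpha, self.beta, self.dt = alpha, beta, dt
        self.pos = 0.0   # estimated position (e.g. image x-coordinate)
        self.vel = 0.0   # estimated velocity

    def update(self, measured_pos):
        # Predict where the object should be now...
        predicted = self.pos + self.vel * self.dt
        # ...then correct the prediction with the new measurement.
        residual = measured_pos - predicted
        self.pos = predicted + self.alpha * residual
        self.vel = self.vel + (self.beta / self.dt) * residual
        return self.pos

    def predict_ahead(self, seconds):
        # Where the tracked object is expected to be after `seconds`.
        return self.pos + self.vel * seconds


if __name__ == "__main__":
    tracker = AlphaBetaTracker()
    for measurement in [10.0, 12.1, 14.0, 15.9, 18.2]:
        tracker.update(measurement)
    print("expected position in 0.5 s:", tracker.predict_ahead(0.5))
```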
But the first and key skill iCub needs for learning by doing is the ability to reach towards a fixed point. By October this year, the developers plan to have the robot able to analyse the information it receives through its vision and touch ‘senses’. The robot will then be able to use this information to perform at least some crude grasping behaviour – reaching outwards and closing its fingers around an object.
“Grasping is the first step in developing cognition as it is required to learn how to use tools and to understand that if you interact with an object it has consequences,” says Giorgio Metta. “From there the robot can develop more complex behaviours as it learns that particular objects are best manipulated in certain ways.”
Once the six robots for the research projects have been assembled, the developers plan to build more iCubs, bringing the number in use around Europe to between 15 and 20.
Adapted from materials provided by ICT Results.
Fausto Intilla - www.oloscience.com

New Prosthetic Hand Has Grip Function Almost Like A Natural Hand: Each Finger Moves Separately


Source:
ScienceDaily (Apr. 25, 2008) — It can hold a credit card, use a keyboard with the index finger, and lift a bag weighing up to 20 kg – the world’s first commercially available prosthetic hand that can move each finger separately and has an astounding range of grip configurations. For the first time worldwide, a patient at the Orthopedic University Hospital in Heidelberg has tested the “i-LIMB” hand in comparison with another innovative prosthesis, the so-called “Fluidhand”. Eighteen-year-old Sören Wolf, who was born with only one hand, is enthusiastic about its capabilities.
The new prosthetic hand developed and distributed by the Scottish company “Touch Bionics” certainly has advantages over previous models. For example, a comparable standard product from another manufacturer allows only a pinch grip using thumb, index, and middle finger, and not a grip using all five fingers. This does not allow a full-wrap grip of an object.
Myoelectric signals from the stump of the arm control the prosthesis
Complex electronics and five motors contained in the fingers enable every digit of the i-LIMB to be powered individually. A passive positioning of the thumb enables various grip configurations to be activated. The myoelectric signals from the stump control the prosthetic hand; muscle signals are picked up by electrodes on the skin and transferred to the control electronics in the prosthetic hand. Batteries provide the necessary power.
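As a rough illustration of how this kind of myoelectric control can work in principle, the sketch below rectifies and smooths a simulated muscle signal and maps the result to an open/close command; the thresholds, sample values and function names are hypothetical and are not taken from the i-LIMB electronics.

```python
# Illustrative sketch of myoelectric control: rectify and smooth an EMG
# signal, then map its envelope to a grip command. All numbers are
# hypothetical; the real i-LIMB control electronics are not public.

def emg_envelope(samples, window=5):
    """Full-wave rectify the signal and apply a moving-average window."""
    rectified = [abs(s) for s in samples]
    envelope = []
    for i in range(len(rectified)):
        start = max(0, i - window + 1)
        envelope.append(sum(rectified[start:i + 1]) / (i + 1 - start))
    return envelope

def grip_command(envelope_value, close_threshold=0.6, open_threshold=0.2):
    """Map the smoothed muscle activity to a simple hand command."""
    if envelope_value > close_threshold:
        return "close"
    if envelope_value < open_threshold:
        return "open"
    return "hold"

if __name__ == "__main__":
    simulated_emg = [0.05, -0.1, 0.4, -0.7, 0.9, -0.8, 0.3, -0.1]
    for value in emg_envelope(simulated_emg):
        print(grip_command(value))
```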
The “Fluidhand” from Karlsruhe, thus far developed only as a prototype that is also being tested in the Orthopedic University Hospital in Heidelberg, is based on a somewhat different principle. Unlike its predecessors, the new hand can close around objects, even those with irregular surfaces. A large contact surface and soft, passive form elements greatly reduce the gripping power required to hold onto such an object. The hand also feels softer, more elastic, and more natural than conventional hard prosthetic devices.
"Fluidhand" prosthetic device offers better finishing and better grip function
The flexible drives are located directly in the movable finger joints and operate on the biological principle of the spider leg – to flex the joints, elastic chambers are pumped up by miniature hydraulics. In this way, index finger, middle finger and thumb can be moved independently. The prosthetic hand gives the stump feedback, enabling the amputee to sense the strength of the grip.
Thus far, Sören has been the only patient in Heidelberg who has tested both models. “This experience is very important for us,” says Simon Steffen, Director of the Department of Upper Extremities at the Orthopedic University Hospital in Heidelberg. The two new models were the best of those tested, with a slight advantage for Fluidhand because of its better finishing, the programmed grip configurations, power feedback, and the more easily adjustable controls. However, this prosthetic device is not yet in serial production. “First the developers have to find a company to produce it,” says Alfons Fuchs, Director of Orthopedics Engineering at the Orthopedic University Hospital in Heidelberg, as the costs of manufacturing it are comparatively high. Even so, it is possible to produce individual models. Thus far, only one patient in the world has received a Fluidhand for everyday use. A second patient will soon be fitted with this innovative prosthesis in Heidelberg.
Adapted from materials provided by University Hospital Heidelberg.
Fausto Intilla - www.oloscience.com

Thursday, April 24, 2008

INTERVIEW WITH DAVID HANSON






[As CEO of Hanson Robotics, Inc., David Hanson creates robot faces that have been dubbed "among the most advanced in the world" by the BBC and inspired Science to label Hanson "head of his class" in social robotics. After receiving a BFA from the Rhode Island School of Design and dabbling in AI programming at Brown, Hanson worked at Walt Disney Imagineering, leading development of an autonomous walking robot and electro-active polymer (EAP) actuators. He later went on to work toward a PhD at the University of Texas at Dallas, developing social robots that carry on naturalistic conversations using face-tracking AI, speech recognition, and realistic expressions based on Hanson's patent-pending polymer materials.]

UBIQUITY: Let's start by discussing some of the work you and Mihai Nadin have been doing. [Readers who missed our interview with Professor Mihai Nadin may want to read it in the Ubiquity archives.] What's the focus of your research?
HANSON: We're looking at how robots can be used to increase the anticipation skills of the elderly. The idea is to create small characters that engage people in a very personal way, since people respond naturally to conversational characters -- be it ones they encounter in a work of literature, in a screen animation, or interactively in the form of a person.
UBIQUITY: Give us a little lesson. Your specialty now is social robots. What other kinds of robotics are there?
HANSON: I built a small walking robot back in college, at RISD while taking classes at Brown. I was taking a couple of computer science special-topics classes, and I was looking at using robotics and AI to control themed environments and at things such as how to induce playful states in people's minds. So my focus was on the concept of what I call robotic party architectures, where the idea is to create a space that's really playful and surprising and then invite people to it under the premise that it's a party. But then it's more like a sort of psychoactive themed environment that takes you on a voyage of sorts. And I regard that kind of voyage as a recurring effect of art. Good art often seems to transport you to someplace new or to a new state of mind, to expose you to ideas that arose during the making of the art, or to inspire new ideas in the viewer. So the viewer winds up having these playful changes of mind.
UBIQUITY: So what would your definition of social robotics be and how does it differ from other kinds of robotics?
HANSON: Social robotics comprises robots meant to engage people socially. The idea is that you have a set of normal mechanisms for engaging with other people socially; we've evolved that way. And if you can create an artificial entity that utilizes those neural mechanisms, then it becomes very natural for people to interface with a social robot. Those neural mechanisms function to extend our individual intelligence into a social intelligence; in other words, we're smarter when we organize ourselves into social groups, like families and friendships and corporations and governments and schools. These are all social institutions built on our natural tendency to socialize through these very primal neural mechanisms. The idea is that by tapping into these tendencies, we become smarter when we interface with our machines.
UBIQUITY: How and why does that happen?
HANSON: Our machines become smarter when they interface with us because in order for them to use our natural social interfacing skills, they have to become smarter. They act socially smarter. They begin to mirror our neural mechanisms, not necessarily exactly, but in some kind of functional way. It's not that wet neurons are making the robot smile at the appropriate moment, but if it smiles at the appropriate moment then it's starting to approximate our social mechanisms.
UBIQUITY: You've done some amazing work, and we need to send people to your Web site. What would be the thing that you would start showing people from your Web site? What should they look at first?
HANSON: I would encourage them to look first at the Hanson Robotics site, at the Einstein video, to see how good the hardware is, how lightweight and low-power it is. And then, after the Einstein robot, the video of the Philip K. Dick robot, which shows the conversational capabilities of our robots, the ability of the robots to capture and hold people's attention in a conversation. There are so many uses for social robots. One is entertainment. I mean, we're fascinated by, and gravitate towards, the human face and human-like characters. The Philip K. Dick robot is a machine that literally holds a conversation with you, looks you in the eye and has an open-ended conversation, where you can talk about just about anything.
UBIQUITY: How do you do that? What's the trick?
HANSON: Well, there are a bunch of tricks. One of the weird things about a robot is that you can't really reduce it to one thing, because it's a lot of things coming together. It's an integrated system, so you have to have speech recognition, some natural language processing, some speech synthesis. You have to have computer vision that can see human faces, and you have to have a motion control system sophisticated enough that you can direct the robot's eyes to the coordinate position where you've detected a face. And you have to remember where the last face you were looking at was; you have to be able to look back to where the face was. Then you start to hit on the rudiments of a world model, a three-dimensional world model that's populated with some kind of representation of objects, people, and places.
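[Editor's note: a minimal sketch of the "look at the face, remember where it was" behaviour Hanson describes, assuming a hypothetical face detector that returns image coordinates; detect_face() and move_eyes_to() stand in for whatever vision and motor APIs a real robot would expose.]

```python
# Sketch of a gaze-control loop: look at a detected face, and if the face
# is lost, look back to where it was last seen. detect_face() and
# move_eyes_to() are hypothetical placeholders for a robot's real
# vision and motor APIs.

import random
import time

def detect_face():
    """Pretend face detector: returns (x, y) image coordinates or None."""
    if random.random() > 0.3:
        return (random.uniform(0, 640), random.uniform(0, 480))
    return None

def move_eyes_to(x, y):
    print(f"aiming eyes at ({x:.0f}, {y:.0f})")

def gaze_loop(iterations=10):
    last_seen = None                  # rudimentary world model: one remembered face position
    for _ in range(iterations):
        face = detect_face()
        if face is not None:
            last_seen = face          # update the memory of where the face was
            move_eyes_to(*face)
        elif last_seen is not None:
            move_eyes_to(*last_seen)  # face lost: look back where it was
        time.sleep(0.05)

if __name__ == "__main__":
    gaze_loop()
```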
UBIQUITY: How do you approach the problem?
HANSON: We're starting to sketch the requirements for the base conversational systems. Earlier we did this with less deep conversation, with a robot called Hertz and a robot named Eva, and then finally we got to the Philip K. Dick robot. The main difference is the depth of his conversational database, and to achieve that we did two things. First, we scanned a little over 10,000 pages of the writing of Philip K. Dick, and then we developed a statistical search database similar to what you might have when you go to a search engine like Google and type in a query. But ours is tied into natural language processing, so it has a special way of interpreting your question: parsing it into a query, semantically searching the database, and then assembling the search results into a sentence.
UBIQUITY: Did you use off-the-shelf software for that or did you have to develop it all yourselves?
HANSON: We're using mostly off-the-shelf software. The speech recognition was provided by Multi Modal. The search software was provided by a company out in Colorado, using a technique called latent semantic analysis. The rules engine we were using is Jess, which has an open source version, but it's based on Java. The natural language processing system was developed by Andrew Olney, a Ph.D. student at the University of Memphis, who actually swept through our previous code and essentially rewrote all the language stuff from the ground up.
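[Editor's note: for readers curious about what a latent-semantic-analysis search looks like in practice, here is a small sketch using scikit-learn (TF-IDF followed by truncated SVD) on a toy corpus; it illustrates the general technique only and is not the pipeline used for the Philip K. Dick robot.]

```python
# Small illustration of latent semantic analysis (LSA) retrieval:
# TF-IDF vectors reduced with truncated SVD, queried by cosine similarity.
# This shows the general technique, not the actual PKD robot pipeline.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Reality is that which, when you stop believing in it, does not go away.",
    "The robot looked up and held the visitor's gaze during the conversation.",
    "Androids dream, perhaps, of something like electric sheep.",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(documents)

# Project into a low-dimensional "semantic" space (2 components only
# because this toy corpus is tiny).
svd = TruncatedSVD(n_components=2, random_state=0)
lsa_docs = svd.fit_transform(tfidf)

def search(query, top_k=1):
    """Return the top_k documents most similar to the query in LSA space."""
    query_vec = svd.transform(vectorizer.transform([query]))
    scores = cosine_similarity(query_vec, lsa_docs)[0]
    ranked = sorted(enumerate(scores), key=lambda p: p[1], reverse=True)
    return [documents[i] for i, _ in ranked[:top_k]]

if __name__ == "__main__":
    print(search("what do androids dream about?"))
```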
UBIQUITY: Where do you see your research going?
HANSON: I'm interested in making these robots easily custom-designed and mass-producible -- in other words, easily designed using low-cost hardware, so that very inexpensive facial expressions can go with inexpensive walking robot bodies, as well as easily customized software. Therefore, we will be improving the software, improving the quality and rate of the speech recognition. The ability to design a custom personality and animation for the robots, and to tweak and tune those things, needs to get better. I see these as practical tools for bringing social robots into our lives, be they human-like or cartoon-like. These tools will be useful for artificial intelligence development. In an essay a couple of years ago, AI pioneer Marvin Minsky lamented the fact that the graduate students at the AI lab at MIT had spent most of their time soldering instead of developing artificial intelligence.
UBIQUITY: Did he propose anything to fix that problem?
HANSON: His solution was to turn to simulation, but the problem with simulation as far as characters are concerned -- or as far as social agents are concerned -- is that you lack that sense of "presence," that sense of immediacy that you gain when you have a 3D robot. Also, a virtual world doesn't have all the physics of a real world, so you're not simulating the noisiness of the environment. Of course, we say "noisiness," but it's really information -- just not information that we necessarily understand. If we lay out these tools for software and for hardware, for rapid design and low-cost deployment, then we get rid of those obstacles to AI development. If we can provide software tools that are easily extensible, modular, and replaceable, but which provide a foundation for AI character development, then the research community can focus on other problems.
UBIQUITY: In what way?
HANSON: The research community can turn into an extended development network so that the parts and pieces play well together. The human-like interfaces and the interactive software systems can operate with an animated agent. This way, researchers can focus on hard problems in a subdomain, like speech recognition or general intelligence, but investigate how these components operate in the real world, in face-to-face conversations with people, while walking around the world. Solving the hard problems of AI may not always require a virtual agent or a physical robot body, but I see the physical robot bodies being very useful in research, and certainly useful in the marketplace. For example, I see animated robotic characters -- think Disney characters or Universal Studios characters -- that live in the house and actually walk around. They look up at you, they lock eyes with you, they converse with you, they can teach your kids, they can provide cognitive stimulation for the elderly, they can wirelessly download the latest news and gossip. They can be an extension of the world of entertainment, and they can also be a natural language search tool, so that if you have a question you ask the robot. The robot sends the question off to a search API, turning that API into a natural language interface. (That hasn't been done yet, but it can be done.) And then the robot returns the answer to your query in natural language.
UBIQUITY: How good is this natural language?
HANSON: Well, that's one area that requires more development. For precise uses of the robot -- for example, if you wanted to know the capital of a country -- you would need the speech recognition to be spot on. Unfortunately, recognition with a large vocabulary is only about 50 to 80% accurate. Even at the good end of that range it's misunderstanding 20% of the words, and the whole meaning of a sentence can be lost if one in five words is misunderstood. So speech recognition is one stumbling block right now, and another stumbling block is coming up with a good interaction design that works around some of the problems with speech recognition. Even humans sometimes don't understand each other. We have little dialogue techniques for asking what the other one meant or said. If you don't understand something, then you pipe up and say something, so you can begin that disambiguation routine.
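[Editor's note: the disambiguation routine Hanson mentions is often implemented as a confidence-based clarification policy; the toy sketch below, with invented thresholds and recognition results, shows the idea.]

```python
# Toy sketch of a clarification strategy for unreliable speech recognition:
# act on high-confidence hypotheses, confirm medium ones, and ask the user
# to repeat low-confidence ones. Thresholds and the example recognition
# results are invented for illustration.

def handle_utterance(hypothesis, confidence,
                     act_threshold=0.8, confirm_threshold=0.5):
    if confidence >= act_threshold:
        return f"ACT: {hypothesis}"
    if confidence >= confirm_threshold:
        return f'CONFIRM: Did you say "{hypothesis}"?'
    return "REPAIR: Sorry, could you say that again?"

if __name__ == "__main__":
    recognized = [("what is the capital of France", 0.91),
                  ("what is the capital of fraps", 0.62),
                  ("fizz the cap it tall of ants", 0.31)]
    for text, conf in recognized:
        print(handle_utterance(text, conf))
```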
UBIQUITY: On the question of interaction, can the robots have social interaction with each other?
HANSON: Yes, absolutely. The artist Ken Feingold created a work with two chatterbots side by side in a box for the Whitney Biennial in 2002. Two talking heads just spit responses back and forth to one another. Of course, chatterbots are very shallow natural language interaction devices: all their responses are pre-programmed, and they don't do any semantic evaluation. It's usually a one-for-one type of response, so it's predictable and shallow; however, they are still very interesting. The conversations you can have with a chatterbot can be very stimulating. Feingold built two identical robots and then fed the text from one chatterbot into the other, so they were in an infinite loop of chatting with one another. And the results are interesting. But as we build social interactive robots with deeper minds, deeper natural language capabilities, and more open-ended capabilities, the interaction between the robots or artificial minds will get richer and more interesting. I believe that human intelligence is substantially social intelligence: we work well socially and we solve problems together that we can't solve alone, and I believe that robots and artificial minds will work in concert in a similar way. If we're emulating our own social intelligence, then the machines will get smarter in kind. They'll be smarter as a community of robots.
UBIQUITY: How many people and how many person-hours do you think were devoted to -- just to pick one project -- let's say the Philip K. Dick project? What went into that?
HANSON: We put in probably 200-300 hours on software development in Dallas, but the foundation of that software development, and the work on the robots prior to the Philip K. Dick robot, took thousands of hours. So you've got all that heritage development work. But then from the point where we hit Go on the PKD robot, when I started sculpting it, I personally put in probably about 800 hours. We had a CAD designer working on it who put in maybe 120 hours; he was taking all the parts and pieces that had been made by hand in previous years and turning them into CAD models that could be produced on a prototyping machine. We had about four other people, and I think each of them probably put in 200-300 hours. Andrew Olney, as I understand it, put in a couple of hundred hours on it.
UBIQUITY: Let's turn from software to hardware -- specifically, skinware. Is there something called skinware?
HANSON: Sure, sure. Well, that would be part of the hardware, but yes, absolutely.
UBIQUITY: It's really impressive how lifelike you've been able to make the skin of the robots.
HANSON: Oh, thank you, yes. The facial-expression robots like the ones you might see in Japan, or in animatronics for theme parks and motion pictures, used the best material available, which was rubber -- an elastic polymer, really. "Rubber" suggests latex rubber, so usually they say elastomer, or elastic polymer, and the physics of solid elastomers doesn't behave like the physics of facial soft tissue. Facial soft tissue is a cellular elastic material mostly filled with liquid, probably 85 percent water. A solid elastomer winds up taking considerably more energy to compress than to elongate or stretch. So the key in my mind was to achieve some kind of cellular grid or matrix in the material, but conventional sponge materials have stress concentration points, which just means that where the cell walls meet, the material gets thicker and stretches less.
UBIQUITY: What's the consequence of that?
HANSON: The most important consequence has to do with wrinkling. Wherever you have a wrinkle on the face -- say the crease around the corner of the mouth when you smile -- that is an area where the material has to compress. In order for it to fold, it has to compress right along that line, and conventional elastomers just won't do it. It's very difficult to get the kind of wrinkling and creases that are natural; even children's faces have those creases, you know? It's not just older faces that have those hallmarks of human expression. So by making the material a foam, we're able to get those good expressions, and we're able to achieve the expressions with substantially lower energy.
UBIQUITY: Look back on the history of artificial intelligence and social robotics and help us see it as a unified history. You remember Eliza, right? Start from Eliza, and tell us what's happened since then.
HANSON: Well, Eliza was the first chatterbot, and it functioned very, very well in a simple way. But, as I said, the intelligence you can embody in a chatterbot is going to be shallow. There is no world model, no semantic analysis of language. There was also a standing bias against anthropomorphism, inherited from the scientific wariness of anthropomorphic bias: you don't want to project the human mind onto animals or onto anything in nature. So the idea was that when you make robots, you don't want to project your human biases onto these things; you want the intelligence to exist independently of human origins. This model was reflected in the movie 2001, with the HAL character. HAL doesn't have a face; it has only a voice and just this one eye. It's extremely creepy, to be honest. But the development of the HAL character in the novel and the movie was based on consultation with leaders in the field of AI. And the kind of robots that you might see in movies like Westworld or Blade Runner were not taken seriously in the world of robotics.
UBIQUITY: When did this approach change?
HANSON: The idea started changing in the '80s and '90s with the emergence of affective computing. The idea was that computers could benefit from having emotions in two ways: first, by being easier for humans to relate to, because if a computer seems emotional then it's not as cold -- it makes you feel better, it makes you feel like it likes you, and that in turn can help you like it. And the second way is the way Antonio Damasio describes the role of emotions in human cognitive processes in his book "Descartes' Error." Damasio is a neuroscientist who has investigated the role of feelings and emotions in humans -- in particular, humans who have had brain damage to areas related to feelings and emotions. Their performance on IQ tests and all kinds of other tests is fine, but they can't perform well in the world: they screw up their relationships and their jobs, and they can't calculate things like the odds of success. In gambling scenarios in human-subject tests, where they have to make decisions about long-term gains versus short-term rewards, they always choose the short-term rewards. By making computers affective, we allow humans to relate to the computers better and the computers to perform in the world better. That's the premise.
UBIQUITY: Where did we go from that premise?
HANSON: With that model and with the work of Cynthia Breazeal at MIT, you had the birth of social robotics -- the idea that you can give a character a face that looks humanlike in a simplistic way, with frowns, smiles and facial expressions that we can recognize. And then you go to systems where you look at the robot, the robot looks at you, you look away, the robot looks where you're looking, so that you are simulating this feeling of the robot's perception. People feel that the robot is intelligent. You can then use that shared-attention system to begin to teach the robot, so the idea is that you have intelligence that emerges and is trained the way a human infant would be trained. That idea of an emerging intelligent system sparked a fever of research in the mid '90s, and Cynthia Breazeal has continued to have good results. Her latest robot, Leonardo, is proving capable of some pretty neat stuff -- learning names of objects spontaneously, and learning simple physical tasks like flipping switches and moving objects around in three-dimensional space. Where that is going to lead is very encouraging. I mean, if you think of human beings, our intelligence is partially prewired by our physiology, but then substantially programmed by interactions with our parents and teachers. So if you're trying to make an entity that is as smart as a human being, it would make sense for it to emerge from the ground up, but I think that we can also start to go from the top down.
UBIQUITY: Expand on that point.
HANSON: I mean, we have all these intelligent systems that can do all kinds of things we consider intelligent -- they perceive faces, see facial expressions, understand natural language to some extent, do sophisticated searches, perform as expert systems in medical diagnostics, et cetera. And if we patch all those things together, we can wind up with something that is very smart, albeit not as smart as a human being. This is the top-down approach. In the short term such a system may simulate the intelligence of a human well enough to perform tasks in teaching and entertainment, to the point where it can be useful in our daily lives. It is not as smart as a human, but it acts smart. The top-down folks hope that, in time, emulating the apex of intelligent human thought processes will result in truly strong AI. This contrasts starkly with the bottom-up approach, which is to build robots that can't understand speech or do anything so smart in the short term, but are bug-like or baby-like, in the hope that they will evolve high-level intelligence over time -- sort of recapitulating the phylogeny of mind. I think the two approaches will converge at some point in the future, so that you wind up with entire levels of functionality by cobbling together these chunks of solutions, which then continue to evolve and learn over time because of the brilliant contributions of researchers like Cynthia.
[END]
Ubiquity -- Volume 7, Issue 18 (May 9, 2006 - May 15, 2006)
http://www.acm.org/ubiquity

Fausto Intilla - www.oloscience.com

Monday, April 21, 2008

New Robots Can Provide Elder Care For Aging Baby Boomers


Source:
ScienceDaily (Apr. 21, 2008) — Baby boomers are set to retire, and robots are ready to help, providing elder care and improving the quality of life for those in need. Researchers at the University of Massachusetts Amherst have developed a robotic assistant that can dial 911 in case of emergencies, remind clients to take their medication, help with grocery shopping and allow a client to talk to loved ones and health care providers.
Concerned family members can access the unit and visit their elderly parents from any Internet connection, including navigating around the home and looking for Mom or Dad, who may not hear the ringing phone or may be in need of assistance. Doctors can perform virtual house calls, reducing the need for travel.
“For the first time, robots are safe enough and inexpensive enough to do meaningful work in a residential environment,” says computer scientist Rod Grupen, director of UMass Amherst’s Laboratory for Perceptual Robotics, who developed project ASSIST with computer scientists Allen Hanson and Edward Riseman.
The robot, called the uBOT-5, could allow elders to live independently, and provide relief for caregivers, the medical system and community services, which are expected to be severely stressed by the retirement of over 77 million Americans in the next 30 years.
There is no mistaking the uBot-5 for a person, but its design was inspired by human anatomy. An array of sensors acts as the robot’s eyes and ears, allowing it to recognize human activities, such as walking or sitting. It can also recognize an abnormal visual event, such as a fall, and notify a remote medical caregiver. Through an interface, the remote service provider may ask the client to speak, smile or raise both arms, movements that the robot can demonstrate. If the person is unresponsive, the robot can call 911, alert family and apply a digital stethoscope to the patient, conveying information to an emergency medical technician who is en route.
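The escalation logic described above can be summarized in a short sketch; the function and action names below are hypothetical placeholders for illustration, not the actual uBot-5 software.

```python
# Sketch of the escalation policy described in the article: on a suspected
# fall, check whether the person responds to prompts; if not, call 911 and
# alert family. All names are hypothetical placeholders, not the actual
# uBot-5 software.

def escalate(fall_detected: bool, responded: bool) -> list[str]:
    """Return the list of actions the robot should take."""
    if not fall_detected:
        return ["continue monitoring"]
    if responded:
        return ["notify remote caregiver"]
    return ["call 911",
            "alert family",
            "apply digital stethoscope and relay data to EMT en route"]

if __name__ == "__main__":
    # Example: a fall was detected and the person did not respond to the
    # "please speak, smile, or raise both arms" prompt.
    print(escalate(fall_detected=True, responded=False))
```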
The system also tracks what isn’t human. If a delivery person leaves a package in a hallway, the sensor array is trained to notice when a path is blocked, and the robot can move the obstruction out of the way. It can also raise its outstretched arms, carry a load of about 2.2 pounds and has the potential to perform household tasks that require a fair amount of dexterity, including cleaning and grocery shopping.
The uBOT-5 carries a Web cam, a microphone, and a touch-sensitive LCD display that acts as an interface for communication with the outside world. “Grandma can take the robot’s hand, lead it out into the garden and have a virtual visit with a grandchild who is living on the opposite coast,” says Grupen, who notes that isolation can lead to depression in the elderly.
Grupen studied developmental neurology in his quest to create a robot that could do a variety of tasks in different environments. The uBot-5’s arm motors are analogous to the muscles and joints in our own arms, and it can push itself up to a vertical position if it falls over. It has a “spinal cord” and the equivalent of an inner ear to keep it balanced on its Segway-like wheels.
The researchers wanted to create a personal robot that could provide many services, such as a medical alert system, or the means to talk to loved ones, all in one human-like package, according to Grupen. To evaluate the effectiveness of potential technologies, the research team worked with social workers, members of the medical community and family members of those in elder care.
The collaborative effort, dubbed project ASSIST, involved researchers from the Smith College School for Social Work, the Veterans Administration (Connecticut Health Care System, West Haven campus) and elder care community centers in western Massachusetts. Through focus groups, the researchers learned about the preferences of potential users.
Graduate students Patrick Deegan, Emily Horrell, Shichao Ou, Sharaj Sen, Brian Thibodeau, Adam Williams and Dan Xie are also collaborators on project ASSIST.
Adapted from materials provided by University of Massachusetts Amherst.
Fausto Intilla - www.oloscience.com

3-D Images -- Cordless And Any Time


Source:
ScienceDaily (Apr. 21, 2008) — Securing evidence at the scene of a crime, measuring faces for medical applications, taking samples during production – three-dimensional images are in demand everywhere. A handy cordless device now enables such images to be prepared rapidly anywhere.
The car tires have left deep tracks in the muddy forest floor at the scene of the crime. The forensic experts make a plaster cast of the print, so that it can later be compared with the tire profiles of suspects’ cars. There will soon be an easier way of doing it: The police officers will only need to pick up a 3-D sensor, press a button as on a camera, and a few seconds later they will see a three-dimensional image of the tire track on their laptop computer.
The sensor is no larger than a shoebox and weighs only about a kilogram – which means it is easy to handle even on outdoor missions such as in the forest. No cable drums are needed: The sensor radios the data to the computer via WLAN, and draws its power from batteries.
The sensor was developed at the Fraunhofer Institute for Applied Optics and Precision Engineering IOF in Jena. “It consists of two cameras with a projector in the center,” says IOF head of department Dr. Gunther Notni. “The two cameras provide a three-dimensional view, rather like two eyes. The projector casts a pattern of stripes on the objects. The geometry of the measured object can be deduced from the deformation of the stripes.”
This type of stripe projection is already an established method. What is new about the measuring device named ‘Kolibri CORDLESS’ are its measuring speed, size, weight, and cordless operation. For comparison, conventional devices of this type weigh about four or five times as much and are more than twice the size, or roughly 50 centimeters long. “The reason it can be so much smaller is because of the projector, which produces light with light-emitting diodes instead of the usual halogen lamps,” says Notni. This poses an additional challenge, as the LEDs shine in all directions. To ensure that the image is nevertheless bright enough, the light has to be collected with special micro-optics in such a way that it impacts on the lens.
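To see the geometry behind such a two-camera stripe-projection sensor: once the projected stripes let the two camera views be matched point for point, depth follows from standard stereo triangulation. The sketch below uses the textbook pinhole-camera relation; the numbers are invented and have nothing to do with the Kolibri CORDLESS specifications.

```python
# Toy illustration of the triangulation idea behind a two-camera
# stripe-projection sensor: the projected stripes solve the correspondence
# problem, and depth then follows from stereo disparity via the textbook
# pinhole relation  Z = f * B / d.  All numbers are invented; they are not
# the Kolibri CORDLESS specifications.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth (metres) of a point seen with the given pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

if __name__ == "__main__":
    focal_px = 1400.0    # hypothetical focal length in pixels
    baseline_m = 0.10    # hypothetical distance between the two cameras
    for disparity in (35.0, 70.0, 140.0):
        z = depth_from_disparity(focal_px, baseline_m, disparity)
        print(f"disparity {disparity:5.1f} px  ->  depth {z:.2f} m")
```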
There are multiple applications: “Patients who snore often need a breathing mask when they sleep. To ensure that the mask is not too tight, it has to be specially made for each patient. Our system enables the doctor to scan the patient’s face in just a few seconds and have the breathing mask made to match these data,” says the researcher. Notni believes that the most important application is for quality assurance in production processes. The portable device also makes it possible to measure installed components and zones that are difficult to access, such as the position of foot pedals inside a car. The researchers will be presenting their development at the Control trade fair in Stuttgart on April 21 through 25 (Hall 1, Stand 1520).
Adapted from materials provided by Fraunhofer-Gesellschaft.
Fausto Intilla - www.oloscience.com

Friday, April 18, 2008

Graphene Used To Create World's Smallest Transistor


Source:
ScienceDaily (Apr. 18, 2008) — Researchers have used the world's thinnest material to create the world's smallest transistor, one atom thick and ten atoms wide.
Reporting their peer-reviewed findings in the journal Science, Dr Kostya Novoselov and Professor Andre Geim from The School of Physics and Astronomy at The University of Manchester show that graphene can be carved into tiny electronic circuits with individual transistors having a size not much larger than that of a molecule.
The smaller their transistors are, the better they perform, say the Manchester researchers.
In recent decades, manufacturers have crammed more and more components onto integrated circuits. As a result, the number of transistors and the power of these circuits have roughly doubled every two years. This has become known as Moore's Law.
But the speed of cramming is now noticeably decreasing, and further miniaturisation of electronics is to experience its most fundamental challenge in the next 10 to 20 years, according to the semiconductor industry roadmap.
At the heart of the problem is the poor stability of materials if shaped in elements smaller than 10 nanometres* in size. At this spatial scale, all semiconductors -- including silicon -- oxidise, decompose and uncontrollably migrate along surfaces like water droplets on a hot plate.
Four years ago, Geim and his colleagues discovered graphene, the first known one-atom-thick material which can be viewed as a plane of atoms pulled out from graphite. Graphene has rapidly become the hottest topic in physics and materials science.
Now the Manchester team has shown that it is possible to carve out nanometre-scale transistors from a single graphene crystal. Unlike all other known materials, graphene remains highly stable and conductive even when it is cut into devices one nanometre wide.
Graphene transistors start showing advantages and good performance at sizes below 10 nanometres – the miniaturisation limit at which silicon technology is predicted to fail.
"Previously, researchers tried to use large molecules as individual transistors to create a new kind of electronic circuits. It is like a bit of chemistry added to computer engineering", says Novoselov. "Now one can think of designer molecules acting as transistors connected into designer computer architecture on the basis of the same material (graphene), and use the same fabrication approach that is currently used by semiconductor industry".
"It is too early to promise graphene supercomputers," adds Geim. "In our work, we relied on chance when making such small transistors. Unfortunately, no existing technology allows the cutting materials with true nanometre precision. But this is exactly the same challenge that all post-silicon electronics has to face. At least we now have a material that can meet such a challenge."
"Graphene is an exciting new material with unusual properties that are promising for nanoelectronics", comments Bob Westervelt, professor at Harvard University. "The future should be very interesting".
*One nanometre is one-millionth of a millimetre and a single human hair is around 100,000 nanometres in width.
A paper entitled "Chaotic Dirac Billiard in Graphene Quantum Dots" is published in the April 17 issue of Science. It is accompanied by a Perspective article entitled "Graphene Nanoelectronics" by Westervelt.
Adapted from materials provided by University of Manchester.
Fausto Intilla - www.oloscience.com

Thursday, April 17, 2008

Micro Sensor And Micro Fridge Make Cool Pair


Source:
ScienceDaily (Apr. 17, 2008) — Researchers at the National Institute of Standards and Technology (NIST) have combined two tiny but powerful NIST inventions on a single microchip, a cryogenic sensor and a microrefrigerator. The combination offers the possibility of cheaper, simpler and faster precision analysis of materials such as semiconductors and stardust.
As described in an upcoming issue of Applied Physics Letters,* the NIST team combined a transition-edge sensor (TES), a superconducting thin film that identifies X-ray signatures far more precisely than any other device, with a solid-state refrigerator based on a sandwich of a normal metal, an insulator and a superconductor. The combo chip, a square about a quarter inch on a side, achieved the first cooling of a fully functional detector (or any useful device) with a microrefrigerator.
The paper also reports the greatest temperature reduction in a separate object by microrefrigerators: a temperature drop of 110 millikelvins (mK), or about a tenth of a degree Celsius.
TES sensors are most sensitive at about 100 mK (a tenth of a degree Celsius above absolute zero). However, these ultralow temperatures are usually reached only by bulky, complex refrigerators. Because the NIST chip can provide some of its own cooling, it can be combined easily with a much simpler refrigerator that starts at room temperature and cools down to about 300 mK, says lead scientist Joel Ullom. In this setup, the chip would provide the second stage of cooling, from 300 mK down to the operating temperature of 100 mK.
One promising application is cheaper, simpler semiconductor defect analysis using X-rays. A small company is already commercializing an earlier version of TES technology for this purpose. In another application, astronomical telescopes are increasingly using TES arrays to take pictures of the early universe at millimeter wavelengths. Use of the NIST chips would lower the temperature and increase the speed at which these images could be made, Ullom says.
The work was supported in part by the National Aeronautics and Space Administration.
* N.A. Miller, G.C. O'Neil, J.A. Beall, G.C. Hilton, K.D. Irwin, D.R. Schmidt, L.R. Vale and J.N. Ullom. High resolution X-ray transition-edge sensor cooled by tunnel junction refrigerators. Forthcoming in Applied Physics Letters.
Adapted from materials provided by National Institute of Standards and Technology.
Fausto Intilla - www.oloscience.com

Wednesday, April 16, 2008

Wireless EEG System Self-powered By Body Heat And Light


Source:

ScienceDaily (Apr. 16, 2008) — The Interuniversity Microelectronics Centre, affiliated with the Holst Centre, has developed a battery-free wireless 2-channel EEG* system powered by a hybrid power supply using body heat and ambient light which could be used to monitor brain waves after a head injury. The hybrid power supply combines a thermoelectric generator that uses the heat dissipated from a person’s temples and silicon photovoltaic cells. The entire system is wearable and integrated into a device resembling headphones.
The system can provide more than 1 mW on average indoors, which is more than enough for the targeted application.
Thermoelectric generators using body heat typically show a drop in generated power when the ambient temperature is close to body temperature. The photovoltaic cells in the hybrid system counter this drop, especially outdoors, and ensure continuous power generation. Moreover, they serve as part of the radiators for the thermoelectric generator, which are required to obtain high efficiency.
Compared to a previous EEG demonstrator developed within the Holst Centre, which was powered solely by thermoelectric generators positioned on the forehead, the hybrid system has reduced size and weight. Combined with fully autonomous operation, no maintenance and an acceptably low heat flow from the head, this further increases the patient’s autonomy and quality of life. Potential applications are detection of imbalance between the two halves of the brain, detection of certain kinds of brain trauma and monitoring of brain activity.
The system is a tangible demonstrator of Holst Centre’s Human++ program, which researches healthcare, lifestyle and sports applications of body area networks. Future research targets further reduction of the power consumption of the different system components of the body area network as well as a significant reduction of the production cost by using micromachining. Interested parties can get more insight into this research or license the underlying technologies through membership of the program.
Technical details
The thermoelectric generator is composed of six thermoelectric units made up from miniature commercial thermopiles. Each of the two radiators, on the left and right sides of the head, has an external area of 4×8 cm² and is made of high-efficiency Si photovoltaic cells. Further, thermally conductive comb-type structures (so-called thermal shunts) are used to eliminate the thermal barrier between the skin and the thermopiles caused by the person’s hair.
The EEG system uses IMEC’s proprietary ultra-low-power biopotential readout application-specific integrated circuit (ASIC) to extract high-quality EEG signals with micro-power consumption. A low-power digital signal processing block encodes the extracted EEG data, which are sent to a PC via a 2.4 GHz wireless radio link. The whole system consumes only 0.8 mW, well below the power produced, so it can run fully autonomously.
* electroencephalography or monitoring of brain waves
Adapted from materials provided by Interuniversity Microelectronics Centre.

Fausto Intilla - www.oloscience.com

New Artificial Material Paves Way To Improved Electronics


Source:

ScienceDaily (Apr. 16, 2008) — In the 10 April issue of Nature, a new artificial material is revealed that marks the beginning of a revolution in the development of materials for electronic applications. The discovery results from a collaboration between the theory group of Professor Philippe Ghosez (University of Liège, Belgium) and the experimental group of Professor Jean-Marc Triscone (University of Geneva, Switzerland). One of the lead researchers on this project, Matthew Dawber, who recently joined the Department of Physics and Astronomy at Stony Brook University, will be at the forefront of the continued effort to make and understand these revolutionary artificial materials in his new lab.
The new material, a superlattice with a multilayer structure composed of alternating atomically thin layers of two different oxides (PbTiO3 and SrTiO3), possesses properties radically different from those of either of the two materials by themselves. These new properties are a direct consequence of the artificially layered structure and are driven by atomic-scale interactions at the interfaces between the layers.
“Besides the immediate applications that could be generated by this nanomaterial, this discovery opens a completely new field of investigation and the possibility of new functional materials based on a new concept: interface engineering on the atomic scale,” said Dr. Dawber.
Transition metal oxides are a class of materials that attract great interest because of the diversity of properties they can present (they can be dielectrics, ferroelectrics, piezoelectrics, magnets or superconductors) and their ability to be integrated into numerous devices. The majority of these oxides possess a similar structure (referred to as ‘perovskite’) and, recently, researchers have developed the ability to build these kinds of materials up, atomic layer by atomic layer, much as a child plays with Lego bricks, hoping to produce new materials with exceptional properties.
Ferroelectrics are some of the most useful functional materials, with applications ranging from advanced non-volatile computer memories to micro-electromechanical machines and infrared detectors. ‘Improper ferroelectricity’ is a kind of ferroelectricity that occurs only rarely in natural materials, and usually the effects are far too small to be useful. The properties of improper ferroelectrics depend on temperature in a totally different way from those of normal ferroelectrics, meaning they would have significant advantages for many applications where the operating temperature might vary, if only the ferroelectric properties were larger in magnitude.
This new superlattice material shows improper ferroelectricity (a property that neither of the parent materials possesses) with a magnitude around 100 times greater than any naturally occurring improper ferroelectric, opening the door for a host of real world applications.
PbTiO3 and SrTiO3 are two well-known and well-characterized oxide materials, presenting, in one case, a ferroelectric structural instability and, in the other, a non-polar structural instability. A theoretical study carried out in Liège (using sophisticated first-principles quantum mechanical simulation techniques, referred to as ab initio) predicted that when these materials are combined in a superlattice, an unusual and completely unexpected coupling between the two types of instabilities occurs, which is what causes the improper ferroelectricity.
A parallel experimental study in Geneva confirmed the improper ferroelectric character of this type of superlattice, and also provided evidence of an exceptionally useful new property: a dielectric constant (a value that describes the response of the material to an electric field) that is at the same time very high and independent of temperature, two characteristics that tend to exclude one another but are here combined in the same material.
But the ideas generated by this discovery are much more significant than the immediate applications: this study demonstrates the possibility of creating radically different materials by engineering on the atomic scale, and the PbTiO3/SrTiO3 superlattice system is only a first example. The concept of coupling instabilities at the interfaces in artificially layered structures is transferable to other types of oxides, and could be a particularly interesting strategy in the emerging domain of multiferroic oxides. These results follow hot on the heels of last year’s discovery that the interface between a different pair of oxide materials was in fact superconducting, even though neither of the natural materials from which it was made has this property.
This and other recent progress led the journal Science to class the recent discoveries in oxide multilayers as one of the ten most significant scientific breakthroughs of 2007. In the same way that mastery of the interface properties of semiconductors was crucial for the development of the modern electronics we depend on today, engineering of new properties at interfaces between oxides could result in an equally significant technological revolution in the years to come.
Journal reference and funding: This research results from a collaboration, which has been funded by the Volkswagen Foundation (Nanosized Ferroelectric Hybrids), the Swiss National Science Foundation (through the National Centre of Competence in Research-MaNEP) and the European Community (FAME-EMMI and MaCoMuFi). Eric Bousquet (ULg), Matthew Dawber (SBU/UniGe), Nicolas Stucki (UniGe), Céline Lichtensteiger (UniGe), Patrick Hermet (ULg), Stefano Gariglio (UniGe), Jean-Marc Triscone (UniGe) & Philippe Ghosez (ULg). Nature 10 April 2008 ; 452 (7188) 732-736.
Earlier Science article: N. Reyren et al., Science 317, 1196 (2007).
Adapted from materials provided by Stony Brook University.

Fausto Intilla - www.oloscience.com

Tuesday, April 15, 2008

Feeling Machines: Engineers Develop Systems For Recognizing Emotion


Source:
ScienceDaily (Apr. 15, 2008) — Emotions are an intrinsic part of communication. But machines don’t have, perceive or react to them, which makes us – their handlers – hot under the collar. Thanks to building blocks developed by European researchers, though, machines that ‘feel’ may no longer be confined to science fiction.
Nearly everybody has to communicate with machines at some level, be it mobile phones, personal computers or annoying, automated customer support ‘solutions’. But the communication is on the machine’s terms, not the person’s.
The problem is easy enough to identify: prodigious increases in processing power, giving machines greater capacities and capabilities, have not been matched by a similar leap forward in interface technology.
Although researchers around the world have been working on making the human-machine interface more user friendly, most of the progress has been on the purely mechanical side.
The Humaine project has come at the problem from a quite different angle to earlier, unsatisfactory attempts. It has brought together specialists and scholars from very different disciplines to create the building blocks or tools needed to give machines so-called ‘soft’ skills.
Engineers struggling with emotions
Professor Roddy Cowie, coordinator of the EU-funded project, says the issue was confused by everyone trying to do the whole thing at once when nobody had the tools to do so.
Commonly, systems would be developed by skilled programmers and engineers who understood how to write and record great computer programs but knew little about defining and capturing human emotion.
“When they developed databases, the recordings were nothing like the way emotion appears in everyday action and interaction, and the codes they used to describe the recording would not fit the things that happen in everyday life,” explains Cowie.
So Humaine went right back to the beginning and set up teams from disciplines as different as philosophy, psychology and computer animation.
The psychologists studied and interpreted the signals people give out, signifying different emotional states from boredom through to rage. Part of this is simply what is being said, but there is also the tone in which it is being said, the expression on the face, and smaller signals like eye gaze, hand gestures and posture.
Put all of these together and it is then possible for the psychologists and IT professionals to work together on a database which allows the interpretation of, and reaction to, emotion.
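A minimal sketch of that combination step, assuming each channel (spoken words, tone of voice, facial expression, gesture and posture) has already been scored separately: a simple weighted late fusion then picks the overall emotional state. The labels, weights and scores below are illustrative, not Humaine’s actual models.

```python
# Minimal sketch of weighted late fusion for multimodal emotion recognition:
# each modality contributes a score per emotional state, and the weighted
# sum picks the overall label. Weights, labels, and scores are illustrative
# only; they are not Humaine's actual models.

MODALITY_WEIGHTS = {"speech_text": 0.25, "voice_tone": 0.35,
                    "facial_expression": 0.30, "gesture_posture": 0.10}

def fuse(modality_scores):
    """modality_scores: {modality: {emotion: score in [0, 1]}} -> best label."""
    fused = {}
    for modality, scores in modality_scores.items():
        weight = MODALITY_WEIGHTS.get(modality, 0.0)
        for emotion, score in scores.items():
            fused[emotion] = fused.get(emotion, 0.0) + weight * score
    return max(fused, key=fused.get), fused

if __name__ == "__main__":
    observation = {
        "speech_text":       {"boredom": 0.2, "anger": 0.5},
        "voice_tone":        {"boredom": 0.1, "anger": 0.8},
        "facial_expression": {"boredom": 0.3, "anger": 0.6},
        "gesture_posture":   {"boredom": 0.4, "anger": 0.5},
    }
    label, scores = fuse(observation)
    print(label, scores)
```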
“Then the people who know about communications feed information to people whose job it is to get computers to generate sophisticated images,” says Cowie.
This is a simplistic explanation of a highly complex project which might not come to full fruition for another 20 or 30 years, although there are already concrete results and applications of some of the technological threads the project has come up with.
“We’ve developed systems for recognising emotion using multiple modalities and this puts us very much at the leading edge of recognition technology,” says Cowie. “And we’ve identified the different types of signal which need to be given by an agent – normally a screen representation of a person – if it is going to react in an emotionally convincing way.”
Some of these technologies are close to commercial application, he tells ICT Results.
Nice but dumb avatar
In trials in Scotland and Israel, museum guides, in the form of handheld PDAs with earpieces and microphones, monitor visitors’ levels of interest in different types of display and react accordingly. “While this is still at a basic level, it is a big step up from a simple recorded message,” Cowie points out.
At another museum in Germany, a large avatar called Max spices up the presentation by interacting with children. “Max is not very deep, but he is very entertaining, and he engages the kids,” according to Cowie.
Designers have also used the techniques to monitor the emotions of people playing video games and improve the design accordingly. Possible applications include learner-centred teaching, where students’ interest levels can be monitored and responded to, and more user-friendly manuals for, say, installing computer software.
“People automatically assume the work is aimed towards full interaction between humans and machines, rather like HAL from 2001: A Space Odyssey,” says Cowie. “That may never happen. Humaine’s philosophers have thought through carefully whether we should allow it to,” he adds. Even if it does go that way, it is certainly not any time soon, he notes.
But the path to emotional machines is being paved today. Cowie and his colleagues have already set up a new project to tie the threads together and come up with an agent which can truly interact using voice. Here, new advances in speech recognition technology from other projects will be necessary for full interaction.
In the meantime, plenty of other applications will present themselves. “As our interactions with machines get more and more pervasive, it becomes harder and harder to ignore the emotional element. Taking it into account will become a routine part of computer science courses and computer development,” Cowie concludes.
Adapted from materials provided by ICT Results.
Fausto Intilla

Monday, April 14, 2008

Cornell Robot Sets A Record For Distance Walking



ScienceDaily (Apr. 12, 2008) — We're not sure what brand of batteries it was using, but the Cornell Ranger robot just kept going and going April 3 when it set an unofficial world record by walking nonstop for 45 laps -- a little over 9 kilometers or 5.6 miles -- around the Barton Hall running track.
Developed by a team of students working with Andy Ruina, Cornell professor of theoretical and applied mechanics, the robot walked (and walked) until it finally stopped and fell backward, perhaps because its battery ran down. "We need to do some careful analysis to find out for sure," said Greg Stiesberg, a graduate student on the team.
An earlier version of the same robot had already set a record by free-walking a bit over 1 kilometer, about 0.62 miles. (Another robot has walked 2.5 kilometers [1.55 miles] on a treadmill, Ruina noted. A six-legged robot has walked a bit more than 2 kilometers, and there's some debate over whether or not that counts.)
There are no rules for such records, Ruina admits, and the Guinness people were not involved. "There's a lot of rigmarole with that," he explained. The event, he said, was to show off the machine's energy efficiency. Unlike other walking robots that use motors to control every movement, the Ranger emulates human walking, using gravity to help swing its legs forward.
Standing still, the robot looks a bit like a tall sawhorse; walking, it suggests a human on crutches, alternately swinging forward two outside legs and then two inside ones. There are no knees, but at the ends of the legs are feet that can be tipped up and down, so that the robot pushes off with its toes, then tilts its feet upward to land on the heels as it brings its legs forward.
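The "gravity does the work" idea can be made concrete with a back-of-the-envelope pendulum model of the swinging leg. The leg length and step length below are illustrative assumptions, not Ranger's specifications.

# Treat the swinging leg as a simple pendulum: one forward swing takes about
# half a pendulum period, pi * sqrt(L / g). Values are illustrative only.
import math

g = 9.81           # gravitational acceleration, m/s^2
leg_length = 0.5   # assumed effective pendulum length, metres
step_length = 0.35 # assumed distance covered per step, metres

swing_time = math.pi * math.sqrt(leg_length / g)   # seconds per step
speed = step_length / swing_time                   # metres per second
print(f"swing time ~ {swing_time:.2f} s, walking speed ~ {speed:.2f} m/s")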
The goal of the research, Ruina said, is not only to advance robotics but also to learn more about the mechanics of walking. The information could be applied to rehabilitation and prosthetics for humans and even to improving athletic performance.
Ruina's lab has built several walking robots of various designs. A model with flexible knees, designed to closely imitate human walking, consumed energy per unit weight and distance comparable to a human walker. In contrast, Ruina estimates that the well-known Honda Asimo uses at least 10 times as much energy as a human when walking.
Ironically, Ruina was not present to witness the record-breaking event. By phone, from a conference on locomotion in Columbus, Ohio, he commented, "We've just moved into this world of electromechanical devices, and to make something this robust is a big achievement. We've learned tons about what it takes to make walking work."
Adapted from materials provided by Cornell University.

Fausto Intilla - www.oloscience.com

Wednesday, April 9, 2008

Mobile T-Rays Ready To Go: Terahertz Device Offers Clear View Of Hidden Objects


Source:
ScienceDaily (Apr. 9, 2008) — Terahertz waves, which until now have barely found their way out of the laboratory, could soon be in use as a versatile tool. Researchers have made the transmitting and receiving devices mobile, so that they can be used anywhere with ease.
Everybody knows microwaves – but what are terahertz waves? These higher-frequency waves are a real jack-of-all-trades. They can help to detect explosives or drugs without having to open a suitcase or search through items of clothing. They can reveal which substances are flowing through plastic tubes. Doctors even hope that these waves will enable them to identify skin cancer without having to perform a biopsy. In the electromagnetic spectrum, terahertz waves are to be found between infrared radiation and microwaves.
They can penetrate wood, ceramics, paper, plastic or fabrics and are not harmful to humans. On the other hand, they cannot pass through metal. This makes them a universal tool: They change when passing through gases, solid materials or liquids. Each substance leaves its specific fingerprint, be it explosives or water, heroin or blood.
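Their place in the spectrum follows directly from wavelength = c / frequency: terahertz frequencies correspond to wavelengths from roughly a few millimetres down to tens of micrometres, between microwaves and infrared light. A quick check:

# Wavelengths for a few terahertz frequencies: lambda = c / f.
c = 299_792_458.0   # speed of light, m/s

for f_thz in (0.1, 1.0, 10.0):
    wavelength = c / (f_thz * 1e12)          # metres
    print(f"{f_thz:>4} THz -> {wavelength * 1e6:.0f} micrometres")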
So far, however, the technology has not made a breakthrough, as it is expensive and time-consuming to build the required transmitters and receivers. Now researchers at the Fraunhofer Institute for Physical Measurement Techniques IPM are making the devices mobile. To generate terahertz waves, the scientists use a femtosecond laser which emits extremely short flashes of infrared light.
To illustrate: In one femtosecond, a ray of light advances by only about 0.3 micrometres – a tiny fraction of the width of a human hair. The pulsed light is directed at a semiconductor, where it excites electrons which then emit terahertz waves.
In conventional equipment, the laser light travels through free space, which makes measurement inflexible and susceptible to vibrations. The Fraunhofer experts have taken a different approach, guiding the light through a glass fiber of a type similar to that used for transmitting data. “Our fiber-based system is so robust that we can simply plug it into a standard 240-volt socket,” says IPM expert Joachim Jonuscheit. This is not the only benefit: until now the equipment has required a shock-proof base so that measurements are not distorted by vibrations. With the beam path inside a glass fiber, this is no longer necessary.
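For scale, the femtosecond figure mentioned above is easy to check: light covers c × t in a time t. The hair width used for comparison is an assumed typical value.

# How far light travels in one femtosecond, compared with a human hair.
c = 299_792_458.0     # speed of light, m/s
t = 1e-15             # one femtosecond, seconds

distance = c * t                  # ~ 3.0e-7 m, i.e. about 0.3 micrometres
hair_width = 70e-6                # assumed typical hair width, ~70 micrometres
print(f"light travels ~ {distance * 1e6:.2f} micrometres in 1 fs")
print(f"that is about 1/{hair_width / distance:.0f} of a hair's width")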
The advantages are obvious: The transmitters and receivers, which are about the size of beverage cans, are now attached to a flexible cable and can be positioned wherever desired. Since vibrations are no longer a problem, the device can even be deployed on the factory floor with fork-lift trucks driving around and heavy machinery vibrating. No inspection point is too difficult to access, as the glass fiber cables can bridge distances up to 25 meters.
Adapted from materials provided by Fraunhofer-Gesellschaft.
Fausto Intilla