Saturday, July 25, 2009

Silicon With Afterburners: New Process Could Be Boon To Electronics Manufacturers


ScienceDaily (July 24, 2009) — Scientists at Rice University and North Carolina State University have found a method of attaching molecules to semiconducting silicon that may help manufacturers reach beyond the current limits of Moore's Law as they make microprocessors both smaller and more powerful.
Moore's Law, put forward by Intel co-founder Gordon Moore in 1965, holds that the number of transistors that can be placed on an integrated circuit doubles about every two years. But even Moore has said the trend cannot be sustained indefinitely.
The challenge is to get past the limits of doping, a process that has been essential to creating the silicon substrate that is at the heart of all modern integrated circuits, said James Tour, Rice's Chao Professor of Chemistry and professor of mechanical engineering and materials science and of computer science.
Doping introduces impurities into pure crystalline silicon as a way of tuning microscopic circuits to a particular need, and it's been effective so far even in concentrations as small as one atom of boron, arsenic or phosphorus per 100 million atoms of silicon.
But as manufacturers pack more transistors onto integrated circuits by making the circuits ever smaller, doping gets problematic.
"When silicon gets really small, down to the nanoscale, you get structures that essentially have very little volume," Tour said. "You have to put dopant atoms in silicon for it to work as a semiconductor, but now, devices are so small you get inhomogeneities. You may have a few more dopant atoms in this device than in that one, so the irregularities between them become profound."
Manufacturers who put billions of devices on a single chip need them all to work the same way, but that becomes more difficult as state-of-the-art circuit features shrink to 45 nanometers wide -- a human hair is about 100,000 nanometers wide -- with smaller ones on the way.
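Tour's point about inhomogeneity can be made concrete with a back-of-envelope count of dopant atoms in a single 45 nm device. The doping density below is an assumed round value typical of heavily doped transistor channels (much denser than the one-in-100-million figure quoted above, which describes light doping); the spread then follows from simple counting statistics:

```python
# Assumed channel doping density, atoms per cm^3 -- illustrative, not from the article.
DOPING_DENSITY = 1e18
feature_nm = 45.0  # state-of-the-art feature size quoted in the article

# Volume of a 45 nm cube, in cm^3 (1 nm = 1e-7 cm).
volume_cm3 = (feature_nm * 1e-7) ** 3

# Average number of dopant atoms in one device.
mean_dopants = DOPING_DENSITY * volume_cm3  # roughly 90 atoms

# Dopants land at random, so counts fluctuate roughly as sqrt(N):
# about a 10% device-to-device spread -- Tour's "inhomogeneities".
relative_spread = mean_dopants ** 0.5 / mean_dopants
```

With only ~90 atoms per device, neighboring transistors can easily differ by ten or more dopants, which is exactly the irregularity Tour describes.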
In their new paper, the researchers suggest that monolayer molecular grafting -- basically, attaching molecules to the surface of the silicon rather than mixing them in -- serves essentially the same function as doping, but works better at the nanometer scale. "We call it silicon with afterburners," Tour said. "We're putting an even layer of molecules on the surface. These are not doping in the same way traditional dopants do, but they're effectively doing the same thing."
Tour said years of research into molecular computing with an eye toward replacing silicon has yielded little fruit. "It's hard to compete with something that has trillions of dollars and millions of person-years invested into it. So we decided it would be good to complement silicon, rather than try to supplant it."
He anticipates wide industry interest in the process, in which carbon molecules could be bonded with silicon either through a chemical bath or evaporation. "This is a nice entry point for molecules into the silicon industry. We can go to a manufacturer and say, 'Let us make your fabrication line work for you longer. Let us complement what you have.'
"This gives the Intels and the Microns and the Samsungs of the world another tool to try, and I guarantee you they'll be trying this."
Journal reference:
He et al. Controllable Molecular Modulation of Conductivity in Silicon-Based Devices. Journal of the American Chemical Society, 2009; 131 (29): 10023. DOI: 10.1021/ja9002537
Adapted from materials provided by Rice University.

Thursday, July 23, 2009

Music Is The Engine Of New Lab-on-a-chip Device


ScienceDaily (July 23, 2009) — Music, rather than electromechanical valves, can drive experimental samples through a lab-on-a-chip in a new system developed at the University of Michigan. This development could significantly simplify the process of conducting experiments in microfluidic devices.
A paper on the research will be published online in the Proceedings of the National Academy of Sciences the week of July 20.
A lab-on-a-chip, or microfluidic device, integrates multiple laboratory functions onto one chip just millimeters or centimeters in size. The devices allow researchers to experiment on tiny sample sizes, and also to simultaneously perform multiple experiments on the same material. There is hope that they could lead to instant home tests for illnesses, food contaminants and toxic gases, among other advances.
To do an experiment in a microfluidic device today, researchers often use dozens of air hoses, valves and electrical connections between the chip and a computer to move, mix and split pin-prick drops of fluid in the device's microscopic channels and divots.
"You quickly lose the advantage of a small microfluidic system," said Mark Burns, professor and chair of the Department of Chemical Engineering and a professor in the Department of Biomedical Engineering.
"You'd really like to see something the size of an iPhone that you could sneeze onto and it would tell you if you have the flu. What hasn't been developed for such a small system is the pneumatics—the mechanisms for moving chemicals and samples around on the device."
The U-M researchers use sound waves to drive a unique pneumatic system that does not require electromechanical valves. Instead, musical notes produce the air pressure to control droplets in the device. The U-M system requires only one "off-chip" connection.
"This system is a lot like fiberoptics, or cable television. Nobody's dragging 200 separate wires all over your house to power all those channels," Burns said. "There's one cable signal that gets decoded."
The system developed by Burns, chemical engineering doctoral student Sean Langelier, and their collaborators replaces these air hoses, valves and electrical connections with what are called resonance cavities. The resonance cavities are tubes of specific lengths that amplify particular musical notes.
These cavities are connected on one end to channels in the microfluidic device, and on the other end to a speaker, which is connected to a computer. The computer generates the notes, or chords. The resonance cavities amplify those notes and the sound waves push air through a hole in the resonance cavity to their assigned channel. The air then nudges the droplets in the microfluidic device along.
"Each resonance cavity on the device is designed to amplify a specific tone and turn it into a useful pressure," Langelier said. "If I play one note, one droplet moves. If I play a three-note chord, three move, and so on. And because the cavities don't communicate with each other, I can vary the strength of the individual notes within the chords to move a given drop faster or slower."
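The control scheme Langelier describes -- one note per cavity, chord membership selecting which droplets move, note amplitude setting speed -- can be sketched in a few lines. The frequencies and the quarter-wave cavity model below are illustrative assumptions; the article gives neither the actual notes nor the cavity dimensions:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

# Hypothetical note assignments -- illustrative values only.
DROPLET_NOTES = {"drop_a": 440.0, "drop_b": 660.0, "drop_c": 880.0}  # Hz

def quarter_wave_length(freq_hz):
    """Length of an ideal quarter-wave resonance cavity tuned to freq_hz."""
    return SPEED_OF_SOUND / (4.0 * freq_hz)

def chord(active, duration=0.1, rate=44100):
    """Synthesize a chord: one sine tone per droplet.

    `active` maps droplet name -> relative amplitude; a louder note means
    more pressure at that droplet's cavity, i.e. faster motion.
    """
    t = np.arange(int(duration * rate)) / rate
    signal = np.zeros_like(t)
    for droplet, amplitude in active.items():
        signal += amplitude * np.sin(2 * np.pi * DROPLET_NOTES[droplet] * t)
    return signal

# A two-note chord: drop_a moves quickly, drop_c creeps along.
wave = chord({"drop_a": 1.0, "drop_c": 0.3})
```

Because the cavities are tuned to different frequencies, the single speaker signal is effectively a multiplexed bus, which is the "one cable, many channels" analogy Burns draws.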
Burns describes the set-up as the reverse of a bell choir. Rather than ringing a bell to create sound waves in the air, which are heard as music, this system uses music to create sound waves in the device, which in turn, move the experimental droplets.
"I think this is a very clever system," Burns said. "It's a way to make the connections between the microfluidic world and the real world much simpler."
The new system is still external to the chip, but the researchers are working to make it smaller and incorporate it on a microfluidic device. That would be a step closer to a smartphone-sized home flu test.
The paper is called, "Acoustically-driven programmable liquid motion using resonance cavities." Other authors are U-M chemical engineering graduate students Dustin Chang and Ramsey Zeitoun. The research is funded by the National Institutes of Health and the National Science Foundation. The University is pursuing patent protection for the intellectual property.
Adapted from materials provided by University of Michigan.

Wednesday, July 22, 2009

Cell Phones Turned Into Fluorescent Microscopes

ScienceDaily (July 22, 2009) — Researchers at the University of California, Berkeley, are proving that a camera phone can capture far more than photos of people or pets at play. They have now developed a cell phone microscope, or CellScope, that not only takes color images of malaria parasites, but of tuberculosis bacteria labeled with fluorescent markers.
The prototype CellScope, described in the journal PLoS One, moves a major step forward in taking clinical microscopy out of specialized laboratories and into field settings for disease screening and diagnoses.
"The same regions of the world that lack access to adequate health facilities are, paradoxically, well-served by mobile phone networks," said Dan Fletcher, UC Berkeley associate professor of bioengineering and head of the research team developing the CellScope. "We can take advantage of these mobile networks to bring low-cost, easy-to-use lab equipment out to more remote settings."
The engineers attached compact microscope lenses to a holder fitted to a cell phone. Using samples of infected blood and sputum, the researchers were able to use the camera phone to capture bright field images of Plasmodium falciparum, the parasite that causes malaria in humans, and sickle-shaped red blood cells. They were also able to take fluorescent images of Mycobacterium tuberculosis, the bacterial culprit that causes TB in humans. Moreover, the researchers showed that the TB bacteria could be automatically counted using image analysis software.
"The images can either be analyzed on site or wirelessly transmitted to clinical centers for remote diagnosis," said David Breslauer, co-lead author of the study and a graduate student in the UC San Francisco/UC Berkeley Bioengineering Graduate Group. "The system could be used to help provide early warning of outbreaks by shortening the time needed to screen, diagnose and treat infectious diseases."
The engineers had previously shown that a portable microscope mounted on a mobile phone could be used for bright field microscopy, which uses simple white light - such as from a bulb or sunlight - to illuminate samples. The latest development adds fluorescence microscopy to the repertoire, in which a special dye emits a specific fluorescent wavelength to tag a target - such as a parasite, bacterium or cell - in the sample.
"Fluorescence microscopy requires more equipment - such as filters and special lighting - than a standard light microscope, which makes it more expensive," said Fletcher. "In this paper we've shown that the whole fluorescence system can be constructed on a cell phone using the existing camera and relatively inexpensive components."
The researchers used filters to block out background light and to restrict the light source, a simple light-emitting diode (LED), to the 460 nanometer wavelength necessary to excite the green fluorescent dye in the TB-infected blood. Using an off-the-shelf phone with a 3.2 megapixel camera, they were able to achieve a spatial resolution of 1.2 micrometers. In comparison, a human red blood cell is about 7 micrometers in diameter.
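A quick calculation, using only the numbers quoted above, shows what that resolution means in practice:

```python
# Figures from the article.
resolution_um = 1.2  # spatial resolution achieved by the CellScope
rbc_um = 7.0         # typical red blood cell diameter

# How many resolvable elements span a single red blood cell?
elements_per_cell = rbc_um / resolution_um  # roughly 6
```

Resolving a cell across roughly six elements is enough to make out its shape -- which is why the system can distinguish, for example, sickle-shaped red blood cells from normal ones.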
"LEDs are dramatically more powerful now than they were just a few years ago, and they are only getting better and cheaper," said Fletcher. "We had to disabuse ourselves of the notion that we needed to spend many thousands on a mercury arc lamp and high-sensitivity camera to get a meaningful image. We found that a high-powered LED - which retails for just a few dollars - coupled with a typical camera phone could produce a clinical quality image sufficient for our goal of detecting in a field setting some of the most common diseases in the developing world."
The researchers pointed out that while fluorescent microscopes include additional parts, less training is needed to interpret fluorescent images. Instead of sorting out pathogens from normal cells in the images from standard light microscopes, health workers simply need to look for something the right size and shape to light up on the screen.
"Viewing fluorescent images is a bit like looking at stars at night," said Breslauer. "The bright green fluorescent light stands out clearly from the dark background. It's this contrast in fluorescent imaging that allowed us to use standard computer algorithms to analyze the sample containing TB bacteria."
Breslauer added that these software programs can be easily installed onto a typical cell phone, turning the mobile phone into a self-contained field lab and a "good platform for epidemiological monitoring."
While the CellScope is particularly valuable in resource-poor countries, Fletcher noted that it may have a place in this country's health care system, famously plagued with cost overruns.
"A CellScope device with fluorescence could potentially be used by patients undergoing chemotherapy who need to get regular blood counts," said Fletcher. "The patient could transmit from home the image or analyzed data to a health care professional, reducing the number of clinic visits necessary."
The CellScope developers have even been approached by experts in agriculture interested in using it to help diagnose diseases in crops. Instead of sending in a leaf sample to a lab for diagnosis, farmers could upload an image of the diseased leaf for analysis.
The researchers are currently developing more robust prototypes of the CellScope in preparation for further field testing.
Other researchers on the team include Robi Maamari, a UC Berkeley research associate in bioengineering and co-lead author of the study; Neil Switz, a graduate student in UC Berkeley's Biophysics Graduate Group; and Wilbur Lam, a UC Berkeley post-doctoral fellow in bioengineering and a UCSF pediatric hematologist.
Funding for the CellScope project comes from the Center for Information Technology Research in the Interest of Society (CITRIS) and the Blum Center for Developing Economies, both at UC Berkeley, and from Microsoft Research, Intel and the Vodafone Americas Foundation.
Journal reference:
David N. Breslauer et al. Mobile Phone Based Clinical Microscopy for Global Health Applications. PLoS One, July 22, 2009 DOI: 10.1371/journal.pone.0006320
Adapted from materials provided by University of California - Berkeley.

Electronic Nose Created To Detect Skin Vapors


ScienceDaily (July 21, 2009) — A team of researchers from Yale University (United States) and a Spanish company have developed a system to detect the vapours emitted by human skin in real time. The scientists think that these substances, essentially made up of fatty acids, are what attract mosquitoes and enable dogs to identify their owners.
"The spectrum of the vapours emitted by human skin is dominated by fatty acids. These substances are not very volatile, but we have developed an 'electronic nose' able to detect them," says Juan Fernández de la Mora of the Department of Mechanical Engineering at Yale University, co-author of a study recently published in the Journal of the American Society for Mass Spectrometry.
The system, created at the Boecillo Technology Park in Valladolid, works by ionising the vapours with an electrospray (a cloud of electrically-charged drops), and later analysing these using mass spectrometry. This technique can be used to identify many of the vapour compounds emitted by a hand, for example.
"The great novelty of this study is that, despite the almost non-existent volatility of fatty acids, which have chains of up to 18 carbon atoms, the electronic nose is so sensitive that it can detect them instantaneously", says Fernández de la Mora. The results show that the volatile compounds given off by the skin are primarily fatty acids, although there are also others such as lactic acid and pyruvic acid.
The researcher stresses that the great chemical wealth of fatty acids, made up of hundreds of different molecules, "is well known, and seems to prove the hypothesis that these are the key substances that enable dogs to identify people". The enormous range of vapours emitted by human skin and breath may not only enable dogs to recognise their owners, but also help mosquitoes to locate their hosts, according to several studies.
World record for detecting explosives
Aside from identifying people from their skin vapours, another important application of the new system is detecting tiny amounts of explosives. The system can "smell" levels below a few parts per trillion, and has set a world sensitivity record of "2×10⁻¹⁴ atmospheres of partial pressure of TNT (the explosive trinitrotoluene)".
The "father" of electrospray ionisation for mass spectrometry is Professor John B. Fenn, currently a researcher at Virginia Commonwealth University (United States), who won the 2002 Nobel Prize in Chemistry for applying the technique to the analysis of proteins.
Journal references:
Pablo Martínez-Lozano and Juan Fernández de la Mora. Online Detection of Human Skin Vapors. Journal of the American Society for Mass Spectrometry, 2009; 20 (6): 1060-1063
Martínez-Lozano et al. Secondary Electrospray Ionization (SESI) of Ambient Vapors for Explosive Detection at Concentrations Below Parts Per Trillion. Journal of the American Society for Mass Spectrometry, 2009; 20 (2): 287 DOI: 10.1016/j.jasms.2008.10.006
Adapted from materials provided by FECYT - Spanish Foundation for Science and Technology, via EurekAlert!, a service of AAAS.

Friday, July 17, 2009

Human-like Vision Lets Robots Navigate Naturally

ScienceDaily (July 17, 2009) — A robotic vision system that mimics key visual functions of the human brain promises to let robots manoeuvre quickly and safely through cluttered environments, and to help guide the visually impaired.
It’s something any toddler can do – cross a cluttered room to find a toy.
It's also one of those seemingly trivial skills that have proved to be extremely hard for computers to master. Analysing shifting and often-ambiguous visual data to detect objects and separate their movement from one’s own has turned out to be an intensely challenging artificial intelligence problem.
Three years ago, researchers at the European-funded research consortium Decisions in Motion (http://www.decisionsinmotion.org/) decided to look to nature for insights into this challenge.
In a rare collaboration, neuro- and cognitive scientists studied how the visual systems of advanced mammals, primates and people work, while computer scientists and roboticists incorporated their findings into neural networks and mobile robots.
The approach paid off. Decisions in Motion has already built and demonstrated a robot that can zip across a crowded room guided only by what it “sees” through its twin video cameras, and is hard at work on a head-mounted system to help visually impaired people get around.
“Until now, the algorithms that have been used are quite slow and their decisions are not reliable enough to be useful,” says project coordinator Mark Greenlee. “Our approach allowed us to build algorithms that can do this on the fly, that can make all these decisions within a few milliseconds using conventional hardware.”
How do we see movement?
The Decisions in Motion researchers used a wide variety of techniques to learn more about how the brain processes visual information, especially information about movement.
These included recording individual neurons and groups of neurons firing in response to movement signals, functional magnetic resonance imaging to track the moment-by-moment interactions between different brain areas as people performed visual tasks, and neuropsychological studies of people with visual processing problems.
The researchers hoped to learn more about how the visual system scans the environment, detects objects, discerns movement, distinguishes between the independent movement of objects and the organism’s own movements, and plans and controls motion towards a goal.
One of their most interesting discoveries was that the primate brain does not just detect and track a moving object; it actually predicts where the object will go.
“When an object moves through a scene, you get a wave of activity as the brain anticipates its trajectory,” says Greenlee. “It’s like feedback signals flowing from the higher areas in the visual cortex back to neurons in the primary visual cortex to give them a sense of what’s coming.”
Greenlee compares what an individual visual neuron sees to looking at the world through a peephole. Researchers have known for a long time that high-level processing is needed to build a coherent picture out of a myriad of those tiny glimpses. What's new is the importance of strong anticipatory feedback for perceiving and processing motion.
“This proved to be quite critical for the Decisions in Motion project,” Greenlee says. “It solves what is called the ‘aperture problem’, the problem of the neurons in the primary visual cortex looking through those little peepholes.”
Building a better robotic brain
Armed with a better understanding of how the human brain deals with movement, the project’s computer scientists and roboticists went to work. Using off-the-shelf hardware, they built a neural network with three levels mimicking the brain’s primary, mid-level, and higher-level visual subsystems.
They used what they had learned about the flow of information between brain regions to control the flow of information within the robotic “brain”.
“It’s basically a neural network with certain biological characteristics,” says Greenlee. “The connectivity is dictated by the numbers we have from our physiological studies.”
The computerised brain controls the behaviour of a wheeled robotic platform supporting a moveable head and eyes, in real time. It directs the head and eyes where to look, tracks its own movement, identifies objects, determines if they are moving independently, and directs the platform to speed up, slow down and turn left or right.
Greenlee and his colleagues were intrigued when the robot found its way to its first target – a teddy bear – just like a person would, speeding by objects that were at a safe distance, but passing nearby obstacles at a slower pace.
“That was very exciting,” Greenlee says. “We didn’t program it in – it popped out of the algorithm.”
In addition to improved guidance systems for robots, the consortium envisions a lightweight system that could be worn like eyeglasses by visually or cognitively impaired people to boost their mobility. One of the consortium partners, Cambridge Research Systems, is developing a commercial version of this, called VisGuide.
Decisions in Motion received funding from the ICT strand of the EU’s Sixth Framework Programme for research. The project’s work was featured in a video by the New Scientist in February this year.
Adapted from materials provided by ICT Results.

Solar Power: New SunCatcher Power System Ready For Commercial Production In 2010


ScienceDaily (July 17, 2009) — Stirling Energy Systems (SES) and Tessera Solar recently unveiled four newly designed solar power collection dishes at Sandia National Laboratories’ National Solar Thermal Test Facility (NSTTF). Called SunCatchers™, the new dishes have a refined design that will be used in commercial-scale deployments of the units beginning in 2010.
“The four new dishes are the next-generation model of the original SunCatcher system. Six first-generation SunCatchers built over the past several years at the NSTTF have been producing up to 150 kW [kilowatts] of grid-ready electrical power during the day,” says Chuck Andraka, the lead Sandia project engineer. “Every part of the new system has been upgraded to allow for a high rate of production and cost reduction.”
Sandia’s concentrating solar-thermal power (CSP) team has been working closely with SES over the past five years to improve the system design and operation.
The modular CSP SunCatcher uses precision mirrors attached to a parabolic dish to focus the sun’s rays onto a receiver, which transmits the heat to a Stirling engine. The engine is a sealed system filled with hydrogen. As the gas heats and cools, its pressure rises and falls. The change in pressure drives the piston inside the engine, producing mechanical power, which in turn drives a generator and makes electricity.
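The heat-to-pressure step can be illustrated with the ideal gas law: in a sealed, roughly fixed volume, the hydrogen's pressure scales with its absolute temperature. The temperatures and charge pressure below are assumed round numbers for illustration, not figures from the article:

```python
# Assumed operating points -- illustrative only.
T_COLD = 300.0   # K, cold end of the cycle
T_HOT = 900.0    # K, receiver-heated end
P_COLD = 10.0e6  # Pa, assumed hydrogen charge pressure when cold

# Ideal gas at (roughly) fixed volume: pressure scales with temperature.
p_hot = P_COLD * (T_HOT / T_COLD)

# This pressure swing is what drives the piston, and in turn the generator.
pressure_swing = p_hot - P_COLD
```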
The new SunCatcher is about 5,000 pounds lighter than the original, is round instead of rectangular to allow for more efficient use of steel, has improved optics, and consists of 60 percent fewer engine parts. The revised design also has fewer mirrors -- 40 instead of 80. The reflective mirrors are stamped from sheet metal into a parabolic shape using automobile manufacturing techniques, much like a car hood. The improvements will allow high-volume production, cost reductions, and easier maintenance.
Among Sandia’s contributions to the new design was development of a tool to determine how well the mirrors work in less than 10 seconds, something that took the earlier design one hour.
“The new design of the SunCatcher represents more than a decade of innovative engineering and validation testing, making it ready for commercialization,” says Steve Cowman, Stirling Energy Systems CEO. “By utilizing the automotive supply chain to manufacture the SunCatcher, we’re leveraging the talents of an industry that has refined high-volume production through an assembly line process. More than 90 percent of the SunCatcher components will be manufactured in North America.”
In addition to improved manufacturability and easy maintenance, the new SunCatcher minimizes both cost and land use and has numerous environmental advantages, Andraka says.
“They have the lowest water use of any thermal electric generating technology, require minimal grading and trenching, require no excavation for foundations, and will not produce greenhouse gas emissions while converting sunlight into electricity,” he says.
Tessera Solar, the developer and operator of large-scale solar projects using the SunCatcher technology and sister company of SES, is building a 60-unit plant generating 1.5 MW (megawatts) by the end of the year either in Arizona or California. One megawatt powers about 800 homes. The proprietary solar dish technology will then be deployed to develop two of the world’s largest solar generating plants in Southern California with San Diego Gas & Electric in the Imperial Valley and Southern California Edison in the Mojave Desert, in addition to the recently announced project with CPS Energy in West Texas. The projects are expected to produce 1,000 MW by the end of 2012.
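The article's own figures make for a quick sanity check; the 800-homes-per-megawatt ratio is the one quoted above:

```python
HOMES_PER_MW = 800  # "One megawatt powers about 800 homes"

plant_mw = 1.5        # the initial 60-dish plant
projects_mw = 1000.0  # combined output expected by the end of 2012

homes_initial = plant_mw * HOMES_PER_MW      # homes served by the first plant
homes_projects = projects_mw * HOMES_PER_MW  # homes served by the 2012 projects
mw_per_dish = plant_mw / 60                  # output of a single SunCatcher
```

The per-dish figure works out to 0.025 MW, i.e. 25 kW, which is consistent with Andraka's report that the six first-generation dishes produce up to 150 kW between them.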
Last year one of the original SunCatchers set a new solar-to-grid system conversion efficiency record by achieving a 31.25 percent net efficiency rate, toppling the old 1984 record of 29.4 percent.
Adapted from materials provided by Sandia National Laboratories.

Wednesday, July 8, 2009

Robot Learns To Smile And Frown


ScienceDaily (July 8, 2009) — A hyper-realistic Einstein robot at the University of California, San Diego has learned to smile and make facial expressions through a process of self-guided learning. The UC San Diego researchers used machine learning to “empower” their robot to learn to make realistic facial expressions.
“As far as we know, no other research group has used machine learning to teach a robot to make realistic facial expressions,” said Tingfan Wu, the computer science Ph.D. student from the UC San Diego Jacobs School of Engineering who presented this advance on June 6 at the IEEE International Conference on Development and Learning.
The faces of robots are increasingly realistic and the number of artificial muscles that control them is rising. In light of this trend, UC San Diego researchers from the Machine Perception Laboratory are studying the face and head of their robotic Einstein in order to find ways to automate the process of teaching robots to make lifelike facial expressions.
This Einstein robot head has about 30 facial muscles, each moved by a tiny servo motor connected to the muscle by a string. Today, a highly trained person must manually set up these kinds of realistic robots so that the servos pull in the right combinations to make specific facial expressions. In order to begin to automate this process, the UCSD researchers looked to both developmental psychology and machine learning.
Developmental psychologists speculate that infants learn to control their bodies through systematic exploratory movements, including babbling to learn to speak. Initially, these movements appear to be executed in a random manner as infants learn to control their bodies and reach for objects.
“We applied this same idea to the problem of a robot learning to make realistic facial expressions,” said Javier Movellan, the senior author on the paper presented at ICDL 2009 and the director of UCSD’s Machine Perception Laboratory, housed in Calit2, the California Institute for Telecommunications and Information Technology.
Although their preliminary results are promising, the researchers note that some of the learned facial expressions are still awkward. One potential explanation is that their model may be too simple to describe the coupled interactions between facial muscles and skin.
To begin the learning process, the UC San Diego researchers directed the Einstein robot head (Hanson Robotics’ Einstein Head) to twist and turn its face in all directions, a process called “body babbling.” During this period the robot could see itself in a mirror and analyze its own expression using facial expression detection software created at UC San Diego called CERT (Computer Expression Recognition Toolbox). This provided the data necessary for machine learning algorithms to learn a mapping between facial expressions and the movements of the muscle motors.
Once the robot learned the relationship between facial expressions and the muscle movements required to make them, it could produce facial expressions it had never encountered.
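The learning loop described above -- random "body babbling", observed expression features, then inverting the learned mapping to hit a target expression -- can be sketched with a linear model. The article does not describe the paper's actual model, so the linear map, the dimensions, and the noise level below are all stand-in assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: ~30 servos (from the article); 8 expression features
# standing in for CERT's output, which is not specified here.
n_servos, n_features, n_babbles = 30, 8, 500

# Unknown face mechanics the robot must discover (hypothetical linear map).
true_map = rng.normal(size=(n_features, n_servos))

# "Body babbling": random motor commands and the expressions they produce.
commands = rng.uniform(-1, 1, size=(n_babbles, n_servos))
features = commands @ true_map.T + 0.01 * rng.normal(size=(n_babbles, n_features))

# Learn the command -> expression mapping from the babbling data.
learned_map, *_ = np.linalg.lstsq(commands, features, rcond=None)
learned_map = learned_map.T  # shape (n_features, n_servos)

# To make a target expression, invert the mapping for motor commands.
target = rng.normal(size=n_features)
motor_cmd = np.linalg.pinv(learned_map) @ target
achieved = learned_map @ motor_cmd
```

Because the map is learned from data rather than hand-tuned, losing a servo just changes the data -- re-fitting then routes around the dead motor, which mirrors the compensation effect the authors report below.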
For example, the robot learned eyebrow narrowing, which requires the inner eyebrows to move together and the upper eyelids to close a bit to narrow the eye aperture.
“During the experiment, one of the servos burned out due to misconfiguration. We therefore ran the experiment without that servo. We discovered that the model learned to automatically compensate for the missing servo by activating a combination of nearby servos,” the authors wrote in the paper presented at the 2009 IEEE International Conference on Development and Learning.
“Currently, we are working on a more accurate facial expression generation model as well as systematic way to explore the model space efficiently,” said Wu, the computer science PhD student. Wu also noted that the “body babbling” approach he and his colleagues described in their paper may not be the most efficient way to explore the model of the face.
While the primary goal of this work was to solve the engineering problem of how to approximate the appearance of human facial muscle movements with motors, the researchers say this kind of work could also lead to insights into how humans learn and develop facial expressions.
“Learning to Make Facial Expressions,” by Tingfan Wu, Nicholas J. Butko, Paul Ruvulo, Marian S. Bartlett and Javier R. Movellan of the Machine Perception Laboratory, University of California San Diego. Presented on June 6 at the 2009 IEEE 8th International Conference on Development and Learning.
Adapted from materials provided by University of California - San Diego.

Tuesday, July 7, 2009

Robo-bats With Metal Muscles May Be Next Generation Of Remote Control Flyers


ScienceDaily (July 7, 2009) — Tiny flying machines can be used for everything from indoor surveillance to exploring collapsed buildings, but simply making smaller versions of planes and helicopters doesn't work very well. Instead, researchers at North Carolina State University are mimicking nature's small flyers – and developing robotic bats that offer increased maneuverability and performance.
Small flyers, or micro-aerial vehicles (MAVs), have garnered a great deal of interest due to their potential applications where maneuverability in tight spaces is necessary, says researcher Gheorghe Bunget. For example, Bunget says, "due to the availability of small sensors, MAVs can be used for detection missions of biological, chemical and nuclear agents." But, due to their size, devices using a traditional fixed-wing or rotary-wing design have low maneuverability and aerodynamic efficiency.
So Bunget, a doctoral student in mechanical engineering at NC State, and his advisor Dr. Stefan Seelecke looked to nature. "We are trying to mimic nature as closely as possible," Seelecke says, "because it is very efficient. And, at the MAV scale, nature tells us that flapping flight – like that of the bat – is the most effective."
The researchers did extensive analysis of bats' skeletal and muscular systems before developing a "robo-bat" skeleton using rapid prototyping technologies. The fully assembled skeleton rests easily in the palm of your hand and, at less than 6 grams, feels as light as a feather. The researchers are currently completing fabrication and assembly of the joints, muscular system and wing membrane for the robo-bat, which should allow it to fly with the same efficient flapping motion used by real bats.
"The key concept here is the use of smart materials," Seelecke says. "We are using a shape-memory metal alloy that is super-elastic for the joints. The material provides a full range of motion, but will always return to its original position – a function performed by many tiny bones, cartilage and tendons in real bats."
Seelecke explains that the research team is also using smart materials for the muscular system. "We're using an alloy that responds to the heat from an electric current. That heat actuates micro-scale wires the size of a human hair, making them contract like 'metal muscles.' During the contraction, the powerful muscle wires also change their electric resistance, which can be easily measured, thus providing simultaneous action and sensory input. This dual functionality will help cut down on the robo-bat's weight, and allow the robot to respond quickly to changing conditions – such as a gust of wind – as perfectly as a real bat."
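The dual actuator/sensor behavior Seelecke describes can be illustrated with a toy simulation. This is a minimal sketch under invented assumptions: the thermal constants, resistance values and transformation temperatures below are illustrative placeholders, not measured properties of the NC State robo-bat.

```python
# Toy model of a self-sensing shape-memory-alloy (SMA) "muscle wire":
# driving current heats the wire, heat drives contraction, and the
# contraction changes electrical resistance, which doubles as a sensor.
# All constants are illustrative, not measured values.

def resistance_from_temp(temp_c):
    """Resistance drops as the wire transforms to austenite and contracts."""
    # Linear interpolation between martensite (cool) and austenite (hot).
    frac = min(max((temp_c - 50.0) / 30.0, 0.0), 1.0)
    return 10.0 * (1.0 - frac) + 8.0 * frac        # ohms

def sma_step(current_a, temp_c, dt=0.01, ambient_c=25.0):
    """Advance a first-order thermal model one time step.

    Joule heating raises wire temperature; the wire cools toward ambient.
    """
    r_ohm = resistance_from_temp(temp_c)
    heating = (current_a ** 2) * r_ohm * 50.0      # I^2 R heating (scaled)
    cooling = (temp_c - ambient_c) * 0.5           # Newtonian cooling
    return temp_c + (heating - cooling) * dt

def contraction_from_resistance(r_ohm):
    """Infer contraction (0..1) from measured resistance -- the 'sensor'."""
    return (10.0 - r_ohm) / 2.0

# Drive the wire with constant current, then read its own state back
# from the resistance measurement -- no separate position sensor needed.
temp = 25.0
for _ in range(1000):
    temp = sma_step(current_a=0.5, temp_c=temp)
strain = contraction_from_resistance(resistance_from_temp(temp))
```

The design point the sketch makes is the one Seelecke highlights: because the same wire actuates and reports its state, a controller needs no separate position sensors, which saves weight.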
In addition to creating a surveillance tool with very real practical applications, Seelecke says the robo-bat could also help expand our understanding of aerodynamics. "It will allow us to do tests where we can control all of the variables – and finally give us the opportunity to fully understand the aerodynamics of flapping flight," Seelecke says.
Bunget will present the research this September at the American Society of Mechanical Engineers Conference on Smart Materials, Adaptive Structures and Intelligent Systems in Oxnard, Calif.
Adapted from materials provided by North Carolina State University.

Quadriplegics Can Operate Powered Wheelchair With Tongue Drive System


ScienceDaily (July 6, 2009) — An assistive technology that enables individuals to maneuver a powered wheelchair or control a mouse cursor using simple tongue movements can be operated by individuals with high-level spinal cord injuries, according to the results of a recently completed clinical trial.
"This clinical trial has validated that the Tongue Drive system is intuitive and quite simple for individuals with high-level spinal cord injuries to use," said Maysam Ghovanloo, an assistant professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. "Trial participants were able to easily remember and correctly issue tongue commands to play computer games and drive a powered wheelchair around an obstacle course with very little prior training."
At the annual conference of the Rehabilitation Engineering and Assistive Technology Society of North America (RESNA) on June 26, the researchers reported the results of the first five clinical trial subjects to use the Tongue Drive system. The trial was conducted at the Shepherd Center, an Atlanta-based catastrophic care hospital, and funded by the National Science Foundation and the Christopher and Dana Reeve Foundation.
The clinical trial tested the ability of these individuals with tetraplegia, as a result of high-level spinal cord injuries (cervical vertebrae C3-C5), to perform tasks related to computer access and wheelchair navigation -- using only their tongue movements.
At the beginning of each trial, Ghovanloo and graduate students Xueliang Huo and Chih-wen Cheng attached a small magnet -- the size of a grain of rice -- to the participant's tongue with tissue adhesive. Movement of this magnetic tracer was detected by an array of magnetic field sensors mounted on wireless headphones worn by the subject. The sensor output signals were wirelessly transmitted to a portable computer, which was carried on the wheelchair.
The signals were processed to determine the relative motion of the magnet with respect to the array of sensors in real-time. This information was then used to control the movements of the cursor on a computer screen or to substitute for the joystick function in a powered wheelchair. Details on use of the Tongue Drive for wheeled mobility were published in the June 2009 issue of the journal IEEE Transactions on Biomedical Engineering.
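The article does not describe the algorithm that turns sensor readings into commands, but the processing chain above can be sketched as a nearest-template classifier. Everything here is an assumption for illustration: the template values, the four-sensor array size and the command names are all invented.

```python
import math

# Illustrative sketch of the Tongue Drive processing chain: each tongue
# command produces a characteristic pattern of magnetic-field readings
# across the sensor array. During training a template is stored per
# command; at run time the nearest template wins. (The actual algorithm
# used by the Georgia Tech group is not described in this article.)

# Per-user templates learned during the training phase (made-up numbers,
# one reading per sensor in the headset array).
TEMPLATES = {
    "left":    [0.9, 0.2, 0.1, 0.1],
    "right":   [0.1, 0.1, 0.2, 0.9],
    "forward": [0.5, 0.8, 0.8, 0.5],
    "neutral": [0.3, 0.3, 0.3, 0.3],
}

def classify(reading):
    """Return the command whose template is closest (Euclidean) to `reading`."""
    def dist(name):
        return math.dist(reading, TEMPLATES[name])
    return min(TEMPLATES, key=dist)

cmd = classify([0.85, 0.25, 0.1, 0.15])   # resembles the "left" template
```

Per-user templates also capture the article's point that command sets are tailored to each individual's abilities and oral anatomy: training simply replaces the template vectors.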
Ghovanloo chose the tongue to operate the system because unlike hands and feet, which are controlled by the brain through the spinal cord, the tongue is directly connected to the brain by a cranial nerve that generally escapes damage in severe spinal cord injuries or neuromuscular diseases.
Before using the Tongue Drive system, the subjects trained the computer to understand how they would like to move their tongues to indicate different commands. A unique set of specific tongue movements was tailored for each individual based on the user's abilities, oral anatomy and personal preferences. For the first computer test, the user issued commands to move the computer mouse left and right. Using these commands, each subject played a computer game that required moving a paddle horizontally to prevent a ball from hitting the bottom of the screen.
After adding two more commands to their repertoire -- up and down -- the subjects were asked to move the mouse cursor through an on-screen maze as quickly and accurately as possible.
Then the researchers added two more commands -- single and double mouse clicks -- to provide the subject with complete mouse functionality. When a randomly selected symbol representing one of the six commands appeared on the computer screen, the subject was instructed to issue that command within a specified time period. Each subject completed 40 trials for each time period.
After the computer sessions, the subjects were ready for the wheelchair driving exercise. Using forward, backward, right, left and stop/neutral tongue commands, the subjects maneuvered a powered wheelchair through an obstacle course.
The obstacle course contained 10 turns and was longer than a professional basketball court. Throughout the course, the users had to perform navigation tasks such as making a U-turn, backing up and fine-tuning the direction of the wheelchair in a limited space. Subjects were asked to navigate through the course as fast as they could, while avoiding collisions.
Each subject operated the powered wheelchair using two different control strategies: discrete mode, which was designed for novice users, and continuous mode for more experienced users. In discrete mode, if the user issued the command to move forward and then wanted to turn right, the user would have to stop the wheelchair before issuing the command to turn right. The stop command was selected automatically when the tongue returned to its resting position, bringing the wheelchair to a standstill.
"Discrete mode is a safety feature particularly for novice users, but it reduces the agility of the wheelchair movement," explained Ghovanloo. "In continuous mode, however, the user is allowed to steer the powered wheelchair to the left or right as it is moving forward and backward, thus making it possible to follow a curve."
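The two control strategies can be sketched as a small state machine. The command names follow the article; the velocity values and blending rule are illustrative assumptions, not the actual wheelchair controller.

```python
# Sketch of the two control strategies described above (speed values are
# illustrative). In discrete mode a turn is only honored from a
# standstill, and the resting tongue always selects "stop". In
# continuous mode, steering is blended with forward/backward motion so
# the chair can follow a curve.

def drive(command, state, mode="discrete"):
    """Map a tongue command to a (linear, angular) wheelchair velocity."""
    linear, angular = state
    if command == "neutral":                 # tongue at rest: stop
        return (0.0, 0.0)
    if mode == "discrete":
        if command == "forward":
            return (1.0, 0.0)
        if command == "backward":
            return (-1.0, 0.0)
        if command in ("left", "right") and linear == 0.0:
            return (0.0, -1.0 if command == "left" else 1.0)
        return (linear, angular)             # ignore turns while moving
    else:  # continuous mode
        if command == "left":
            return (linear, angular - 0.5)
        if command == "right":
            return (linear, angular + 0.5)
        if command == "forward":
            return (1.0, angular)
        if command == "backward":
            return (-1.0, angular)
    return (linear, angular)

# Discrete mode: a "left" issued while rolling forward is ignored...
state = drive("forward", (0.0, 0.0), mode="discrete")      # (1.0, 0.0)
state = drive("left", state, mode="discrete")              # still (1.0, 0.0)
# ...while continuous mode lets the chair curve while moving.
state = drive("left", (1.0, 0.0), mode="continuous")       # (1.0, -0.5)
```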
Each subject completed the course at least twice using each strategy while the researchers recorded the navigation time and number of collisions. Using discrete control, the average speed for the five subjects was 5.2 meters per minute and the average number of collisions was 1.8. Using continuous control, the average speed was 7.7 meters per minute and the average number of collisions was 2.5.
While this initial performance trial only required six tongue commands, the Tongue Drive system can potentially capture a large number of tongue movements, each of which can represent a different user command. The ability to train the system with as many commands as an individual can comfortably remember and having all of the commands available to the user at the same time are significant advantages over the common sip-n-puff device that acts as a simple switch controlled by sucking or blowing through a straw.
Some sip-n-puff users also consider the straw to be a symbol of their disability. Since Tongue Drive users simply wear headphones that are commonly worn to listen to music, the system is more acceptable to potential users.
John Anschutz, manager of the assistive technology program at the Shepherd Center, identified advantages the Tongue Drive system has over the tongue-touch keypad.
"The Tongue Drive system seems to be much more supportable if there were a failure of some component within the system. With the old tongue-touch keypad, if the system went down then the user lost all of the functions of the wheelchair, phone, computer and environmental control," explained Anschutz. "Ghovanloo's approach should be much more repairable should a fault arise, which is critical for systems on which so much function depends."
A future system upgrade will be to move the sensors inside the user's mouth, according to Ghovanloo. This will be an important step for users who are very impaired and cannot reposition the system for best results, according to Anschutz.
"All of the subjects successfully completed the computer and powered wheelchair navigation tasks with their tongues without difficulty, which demonstrates that the Tongue Drive system can potentially provide individuals unable to move their arms and hands with effective control over a wide variety of devices they use in their daily lives," said Ghovanloo.
Adapted from materials provided by Georgia Institute of Technology, via EurekAlert!, a service of AAAS.

Robot Soccer: Cooperative Soccer Playing Robots Compete


ScienceDaily (July 6, 2009) — The cooperative soccer-playing robots of the Universität Stuttgart are world champions in the middle size league of robot soccer. After one of the most interesting competitions in the history of RoboCup, held from 29th June to 5th July, 2009, in Graz, the 1. RFC Stuttgart won the 2009 world championship on the last day of the competition, beating the team of Tech United from Eindhoven (The Netherlands) 4:1 in an exciting final.
During the competition Stuttgart's robots had to prevail against 13 other teams from eight countries, among them the defending world champion Cambada (Portugal). Besides the teams from Germany, Italy, The Netherlands, Portugal, and Austria, teams from China, Japan, and Iran competed against each other.
The 1. RFC Stuttgart, whose members come from two institutes, namely the Department of Image Understanding (head: Prof. Levi) of the Institute of Parallel and Distributed Systems and the Institute of Technical Optics (head: Prof. Osten), also achieved second place in the "technical challenge" and a further first place in the "scientific challenge".
After the final match of the competition, the middle-size league robots of the 1. RFC Stuttgart, the new world champions, played against the human officials of the RoboCup federation. The robots turned out to be the inferior team. Clearly, the RoboCup community still has a vast distance to bridge before reaching its ultimate goal of having a humanoid robot team play against the human world champions by the year 2050.
The success speaks for itself, but one might wonder what scientific interest lies behind the RoboCup competitions. Successful participation requires extensive work on current research topics in computer science such as real-time image processing and architectures, cooperative robotics and distributed planning. Possible application scenarios of these research activities range from autonomous vehicles and cooperative manufacturing robotics to service robotics and planetary or deep-sea exploration by autonomous robotic systems. In this context, autonomous means that no or only limited human intervention is necessary.
Adapted from materials provided by University of Stuttgart.

Sunday, July 5, 2009

Ultrasensitive Detector Promises Improved Treatment Of Viral Respiratory Infections


ScienceDaily (July 6, 2009) — A Vanderbilt chemist and a biomedical engineer have teamed up to develop a respiratory virus detector that is sensitive enough to detect an infection at an early stage, takes only a few minutes to return a result and is simple enough to be performed in a pediatrician's office.
Writing in The Analyst – a journal published by the Royal Society of Chemistry – the developers report that their technique, which uses DNA hairpins attached to gold filaments, can detect the presence of respiratory syncytial virus (RSV) – a leading cause of respiratory infections in infants and young children – at substantially lower levels than the standard laboratory assay.
"We hope that our research will help us break out of the catch-22 that is holding back major advances in the treatment of respiratory viruses," says Associate Professor of Chemistry David Wright, who is working with Professor of Biomedical Engineering Frederick "Rick" Haselton on the new detection method.
According to the chemist, major pharmaceutical companies are not investing in the development of antiviral drugs for RSV and the other major respiratory viruses because there is no way to detect the infections early enough for the drugs to work effectively without harmful side-effects. "There are antiviral compounds out there – we have discovered some of them in my lab – that would work if we can detect the virus early enough, before there is too much virus in the system," he says.
In addition, the lack of a reliable early detection system adds to the growing problem of antibiotic resistance. The symptoms of respiratory infections caused by viral agents are nearly identical to those caused by bacteria. As a result, antibiotics, which target bacteria, are often incorrectly prescribed for viral infections. Not only is this ineffective, but it also increases the number of antibiotic-resistant strains.
Currently, there are several standard tests for RSV including culturing the virus, polymerase chain reaction (PCR) and the enzyme-linked immunosorbent assay (ELISA). To have any of these tests done, doctors must send a mucus sample from a patient to a special laboratory. When combined with delivery times, backlogs and other delays, it frequently takes a day or more to get the results. Unfortunately, respiratory viruses multiply so rapidly that this can be too late for antiviral drugs to work, Wright says.
By contrast, "our system could easily be packaged in a disposable device about the size of a ballpoint pen," says Haselton. To perform a test, all that would be required is to pull off a cap that will expose a length of gold wire, dip the wire in the sample, pull the wire through the device and put the exposed wire into a fluorescence scanner. If it lights up, then the virus is present.
The new detector design is a combination of two existing technologies.
One is the filament-based antibody recognition assay (FARA) developed several years ago by Haselton and patented by Vanderbilt. FARA uses antibodies – special proteins produced by the immune system that bind to specific foreign substances – that are coated on the surface of a polyester filament. When the coated filament is exposed to a sample, if it contains any of the target molecules, they stick to the antibodies, forming complexes that can be detected with fluorescent dyes. One advantage of this approach is that a sample can be put through different processing steps simply by pulling the filament through a series of small chambers. In the RSV detection application, the chambers contain washing solutions that remove non-specific binding molecules.
"Originally we thought that we would have to put special seals between the chambers but we found that if we make the openings small enough, then the solutions in the chambers stay in place as we pull the wire through," says Haselton.
The second technology is based on molecular beacon probes, an approach often used in PCR. The probes consist of short lengths of single-strand DNA that normally form a hairpin shape but straighten out when they are bound to a target molecule. A fluorescent dye molecule is attached to one leg of the hairpin and a molecule that quenches its fluorescence is attached to the other. When the probe is in its hairpin configuration, the dye and quencher molecules lay side by side so the probe does not fluoresce. When it is bound to a target, such as a piece of viral RNA, the ends spring apart, turning on the probe's fluorescence.
The Vanderbilt researchers realized that if they attached molecular beacons to a gold-coated filament, the gold could theoretically replace the quencher molecule and inhibit the beacon's fluorescence. However, they had to find a linking molecule – the molecule that attaches the beacon to the wire – that was just the right length to make it work.
Once they solved this problem, the researchers tested the sensitivity of the new system. They found that it could detect the presence of RSV virus particles at levels that are 200 times below the minimum detection level of the standard ELISA method. This extreme sensitivity combined with the basic simplicity of the approach makes it "attractive for further development as a viral detection platform," the scientists write in the Analyst article, which was published online May 15.
According to Haselton, there are two areas where further development is needed. One is sample preparation. Commercial RNA sample preparation kits are available, but they are more expensive and complex than desirable. The team is currently examining the design of a simple pull-through RNA isolation chamber. The team is also exploring ways to reduce false detections. There are a lot of other molecules in mucus besides viral RNA that can bind to some extent with the molecular beacons. However, the researchers argue that it should be possible to reduce the number of false positives significantly by adding a heating step that is calibrated to drive off the molecules that are less strongly bound to the beacons than the viral RNA.
The next major step in the development process is to see how the device performs with real patient samples.
This research was supported by grants from Vanderbilt University and the National Institutes of Health.
Adapted from materials provided by Vanderbilt University.

Innovative Technology Shatters The Barriers Of Modern Light Microscopy


ScienceDaily (July 5, 2009) — Researchers at the Helmholtz Zentrum München and the Technische Universität München are using a combination of light and ultrasound to visualize fluorescent proteins that are seated several centimeters deep into living tissue. In the past, even modern technologies have failed to produce high-resolution fluorescence images from this depth because of the strong scattering of light.
In the Nature Photonics journal, the Munich researchers describe how they can reveal genetic expression within live fly larvae and fish by “listening to light”. In the future this technology may facilitate the examination of tumors or coronary vessels in humans.
Since the dawn of the microscope scientists have been using light to scrutinize thin sections of tissue to ascertain whether they are healthy or diseased or to investigate cell function. However, the penetration limits for this kind of examination lie between half a millimeter and one millimeter of tissue. In thicker layers light is diffused so strongly that all useful details are obscured.
Together with his research team, Professor Vasilis Ntziachristos, director of the Institute of Biological and Medical Imaging of the Helmholtz Zentrum München – German Research Center for Environmental Health and chair for biological imaging at the Technische Universität München, has now broken through this barrier and rendered three-dimensional images through at least six millimeters of tissue, allowing whole-body visualization of adult zebra fish.
To achieve this feat, Prof. Ntziachristos and his team made light audible. They illuminated the fish from multiple angles using flashes of laser light that are absorbed by fluorescent pigments in the tissue of the genetically modified fish. The fluorescent pigments absorb the light, a process that causes slight local increases in temperature, which in turn result in tiny local volume expansions. This happens very quickly and creates small shock waves. In effect, the short laser pulse gives rise to an ultrasound wave that the researchers pick up with an ultrasound microphone.
The real power of the technique, however, lies in specially developed mathematical formulas used to analyze the resulting acoustic patterns. An attached computer uses these formulas to evaluate and interpret the specific distortions caused by scales, muscles, bones and internal organs to generate a three-dimensional image.
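The principle behind turning acoustic patterns into an image can be illustrated with the simplest reconstruction scheme, delay-and-sum backprojection. This is a hedged sketch only: the actual MSOT inversion uses far more sophisticated model-based formulas that account for scales, bone and organ distortions, and all geometry and values below are invented for illustration.

```python
import math

# Minimal delay-and-sum sketch of opto-acoustic image formation: a laser
# pulse makes each absorber emit an ultrasound wave, and a detector at
# distance d hears it after d / c seconds. Back-projecting every
# detector's trace onto all points at the matching distance and summing
# makes true absorber locations reinforce while other points do not.

SPEED = 1500.0      # approximate speed of sound in tissue, m/s
DT = 1e-7           # sample period of the recorded traces, s

def simulate_trace(detector, absorber, n=400):
    """Ideal noiseless trace: a unit spike at the time of flight."""
    tof = math.dist(detector, absorber) / SPEED
    trace = [0.0] * n
    trace[int(round(tof / DT))] = 1.0
    return trace

def backproject(detectors, traces, grid):
    """Sum each trace's sample at the time of flight for every grid point."""
    image = {}
    for point in grid:
        total = 0.0
        for det, trace in zip(detectors, traces):
            idx = int(round(math.dist(det, point) / SPEED / DT))
            if idx < len(trace):
                total += trace[idx]
        image[point] = total
    return image

# Four detectors around the sample; one absorber at (0.002, 0.001) m.
detectors = [(0.02, 0.0), (0.0, 0.02), (-0.02, 0.0), (0.0, -0.02)]
absorber = (0.002, 0.001)
traces = [simulate_trace(d, absorber) for d in detectors]
grid = [(x * 0.001, y * 0.001) for x in range(-4, 5) for y in range(-4, 5)]
image = backproject(detectors, traces, grid)
brightest = max(image, key=image.get)       # recovers the absorber location
```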
The result of this “multi-spectral opto-acoustic tomography”, or MSOT, is an image with a striking spatial resolution better than 40 micrometers (four hundredths of a millimeter). And best of all, the sedated fish wakes up and recovers without harm following the procedure.
Dr. Daniel Razansky, who played a pivotal role in developing the method, says, "This opens the door to a whole new universe of research. For the first time, biologists will be able to optically follow the development of organs, cellular function and genetic expression through several millimeters to centimeters of tissue.”
In the past, understanding how development or disease progresses required numerous animals to be sacrificed. With a plethora of fluorochrome pigments to choose from – including pigments based on the fluorescent protein technology for which a Nobel Prize was awarded in 2008 and clinically approved fluorescent agents – observing metabolic and molecular processes in all kinds of living organisms, from fish to mice and humans, will be possible. The fruits of pharmaceutical research can also be harvested faster, since the molecular effects of new treatments can be observed in the same animals over an extended period of time.
Bio-engineer Ntziachristos is convinced that, “MSOT can truly revolutionize biomedical research, drug discovery and healthcare. Since MSOT allows optical and fluorescence imaging of tissue to a depth of several centimeters, it could become the method of choice for imaging cellular and subcellular processes throughout entire living tissues.”
Journal reference:
Razansky et al. Multispectral opto-acoustic tomography of deep-seated fluorescent proteins in vivo. Nature Photonics, 2009; 3 (7): 412 DOI: 10.1038/nphoton.2009.98
Adapted from materials provided by Helmholtz Zentrum München - German Research Center for Environmental Health.

Friday, July 3, 2009

Nanotechnology May Increase Longevity Of Dental Fillings


ScienceDaily (July 3, 2009) — Tooth-colored fillings may be more attractive than silver ones, but the bonds between the white filling and the tooth quickly age and degrade. A Medical College of Georgia researcher hopes a new nanotechnology technique will extend the fillings' longevity.
"Dentin adhesives bond well initially, but then the hybrid layer between the adhesive and the dentin begins to break down in as little as one year," says Dr. Franklin Tay, associate professor of endodontics in the MCG School of Dentistry. "When that happens, the restoration will eventually fail and come off the tooth."
Half of all tooth-colored restorations, which are made of composite resin, fail within 10 years, and about 60 percent of all operative dentistry involves replacing them, according to research in the Journal of the American Dental Association.
"Our adhesives are not as good as we thought they were, and that causes problems for the bonds," Dr. Tay says.
To make a bond, a dentist etches away some of the dentin's minerals with phosphoric acid to expose a network of collagen, known as the hybrid layer. Acid-etching is like priming a wall before it's painted; it prepares the tooth for application of an adhesive to the hybrid layer so that the resin can latch on to the collagen network. Unfortunately, the imperfect adhesives leave spaces inside the collagen that are not properly infiltrated with resin, leading to the bonds' failure.
Dr. Tay is trying to prevent the aging and degradation of resin-dentin bonding by feeding minerals back into the collagen network. With a two-year, $252,497 grant from the National Institute of Dental & Craniofacial Research, he will investigate guided tissue remineralization, a new nanotechnology process of growing extremely small, mineral-rich crystals and guiding them into the demineralized gaps between collagen fibers.
His idea came from examining how crystals form in nature. "Eggshells and abalone [sea snail] shells are very strong and intriguing," Dr. Tay says. "We're trying to mimic nature, and we're learning a lot from observing how small animals make their shells."
The crystals, called hydroxyapatite, bond when proteins and minerals interact. Dr. Tay will use calcium phosphate, a mineral that's the primary component of dentin, enamel and bone, and two protein analogs also found in dentin so he can mimic nature while controlling the size of each crystal.
Crystal size is the real challenge, Dr. Tay says. Most crystals are grown from one small crystal into a larger, homogeneous one that is far too big to penetrate the spaces within the collagen network. Instead, Dr. Tay will fit the crystal into the space it needs to fill. "When crystals are formed, they don't have a definite shape, so they are easily guided into the nooks and crannies of the collagen matrix," he says.
In theory, the crystals should lock the minerals into the hybrid layer and prevent it from degrading. If Dr. Tay's concept of guided tissue remineralization works, he will create a delivery system to apply the crystals to the hybrid layer after the acid-etching process.
"Instead of dentists replacing the teeth with failed bonds, we're hoping that using these crystals during the bond-making process will provide the strength to save the bonds," Dr. Tay says. "Our end goal is that this material will repair a cavity on its own so that dentists don't have to fill the tooth."
Adapted from materials provided by Medical College of Georgia.

Disaster Setting At The RoboCup 2009: Flight And Rescue Robots Demonstrated Their Abilities

ScienceDaily (July 3, 2009) — Modern robotics can help where it is too dangerous for humans to venture. Search and rescue robots (S&R robots) have now become so sophisticated that they have already carried out their first missions in real disasters. For this reason, rescue robots were given a special place at RoboCup 2009 – the robotics world championships in Graz.
The rescue robotics programme provided exciting rescue demonstrations in which two complex disaster scenarios formed the setting for the robots’ performances. An accident involving a passenger car loaded with hazardous materials and a fire on the rooftop of Graz Stadthalle were the two challenges that flight and rescue robots faced on their remote controlled missions. Smoke and flames made the sets as realistic as possible, ensuring a high level of thrills.
Blazing flames on the eighth floor of a high-rise building mean that reconnaissance and the search for injured people would already be life-threatening for fire services. A remote-controlled flight robot can help by reconnoitering the situation and sending information via video signals to the rescue services on the ground. As the robotics world championships, RoboCup recognised the potential of rescue robots long ago and has promoted their development in the separate category “RoboCup Rescue”. RoboCup 2009, organised by TU Graz, dedicates a particular focus to the lifesaving robots with a rescue robot demonstration, a practical course for first responders and a workshop for the exchange of experiences between rescue services and robotics researchers.
A burning rooftop and hazardous materials
Fire and smoke were seen in front of the Graz Stadthalle on Thursday 2nd July 2009, and yet there was no cause for panic – rescue robots were in action. To demonstrate the capabilities of flight and rescue robots, two disaster scenarios were re-enacted as realistically as possible. A crashed automobile loaded with hazardous materials provided a challenge for the rescue robot. Operated by rescuers by remote control, the metal helper named “Telemax” had to retrieve the sensitive substances and bring them out of the danger zone. The flight robot had to find a victim on the rooftop of the Stadthalle and send information in the form of video signals.
Emergency services meet their future helpers
An introduction to possible applications of today’s rescue robotics is offered together with a practical course designed specially for first responders. In the training courses on 3rd and 4th July from 8 to 10 am, search and rescue services from all over the world can practise operating flight robots, go on a reconnaissance mission with rescue robots in a specially designed rescue area, or practise various manipulation tasks such as recovering hazardous materials and retrieving injured persons using remote-controlled robots.
A workshop on the topic of rescue robotics will take place following the RoboCup on the 6th July 2009 at TU Graz. The focus will be on an exchange of experiences between first responders and robotics researchers.
Adapted from materials provided by TU Graz, via AlphaGalileo.

Thursday, July 2, 2009

Students Create Portable Device To Detect Suicide Bombers

ScienceDaily (July 2, 2009) — Improvised explosive devices (IEDs), the weapons of suicide bombers, are a major cause of soldier casualties in Iraq and Afghanistan. A group of University of Michigan engineering undergraduate students has developed a new way to detect them.
The students invented portable, palm-sized metal detectors that could be hidden in trash cans, under tables or in flower pots, for example. The detectors are designed to be part of a wireless sensor network that conveys to a base station where suspicious objects are located and who might be carrying them. Compared with existing technology, the sensors are cheaper, lower-power and longer-range. Each of the sensors weighs about 2 pounds.
"Their invention outperforms everything that exists in the market today," said Nilton Renno, a professor in the U-M Department of Atmospheric, Oceanic and Space Sciences. The students undertook this project in Renno's Engineering 450 senior level design class.
"They clearly have an excellent understanding of the problem. They also thought strategically and designed and optimized their solution. The combination of a movable command center with a wireless sensor network can be easily deployed in the field and adapted to different situations."
The core technology is based on a magnetometer, or metal detector, explained Ashwin Lalendran, an engineering student who worked on the project and graduated in May.
"We built it entirely in-house—the hardware and the software," Lalendran said. "Our sensors are small, flexible to deploy, inexpensive and scalable. It's extremely novel technology."
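The network behavior described above, nodes detecting a magnetic anomaly and conveying its location to a base station, can be sketched in a few lines. This is a hedged illustration only, not the students' design: the baseline field, threshold, node names and message format are all invented.

```python
# Sketch of a wireless magnetometer detection network: each hidden node
# watches for a sustained deviation from the ambient magnetic field and
# reports to a base station, which lists where suspicious metal objects
# are located. All thresholds and values below are illustrative.

BASELINE = 50.0     # ambient field magnitude at the node, microtesla
THRESHOLD = 5.0     # deviation that counts as "metal nearby"

def node_report(node_id, location, samples):
    """Return an alert message if most samples deviate from baseline."""
    hits = sum(1 for s in samples if abs(s - BASELINE) > THRESHOLD)
    alert = hits >= len(samples) // 2       # majority of window anomalous
    return {"node": node_id, "location": location, "alert": alert}

def base_station(reports):
    """Collect node reports and list locations of suspicious objects."""
    return [r["location"] for r in reports if r["alert"]]

# One clean node (e.g. hidden in a trash can) and one seeing an anomaly.
reports = [
    node_report("trashcan-1", (10, 4), [50.1, 49.8, 50.2, 50.0]),
    node_report("table-3", (12, 7), [57.5, 58.1, 56.9, 57.2]),
]
suspicious = base_station(reports)          # locations to flag
```

Requiring a majority of anomalous samples rather than a single spike is one simple way such a node could avoid false alarms from transient interference.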
The U-M students recently won an Air Force-sponsored competition against Ohio State University. The U.S. Air Force Research Laboratory at Wright-Patterson Air Force Base sponsored the project as well as the contest. Air Force research labs across the country sponsor similar contests on a regular basis to provide rapid reaction and innovative solutions to the Department of Defense's urgent needs, says Capt. Nate Terning, AFRL rapid reaction projects director.
The teams from U-M and Ohio State demonstrated their inventions June 2-3 in Dayton, Ohio, at a large mock tailgate event where simulated IEDs and the students' technologies were hidden among the crowd. The students' technology was tasked with finding IEDs in the purses, backpacks or other packages of the tailgaters, without the tailgaters' knowledge. Michigan's invention found more IEDs than Ohio State's.
"We had an excellent turnout in technology," Terning said. "Regardless of the competition results, often successful ideas from each student team can be combined into a product which is then realized for DoD use in the future."
The students will continue to work on this project through the summer. Other students involved are: Steve Boland, a senior atmospheric, oceanic and space sciences major; Andry Supian, a mechanical engineering major who graduated in April; Brian Hale, a senior aerospace engineering major; Kevin Huang, a junior computer science major; Michael Shin, a junior computer engineering major; and Vitaly Shatkovsky, a mechanical engineering major who graduated in April.
"I am very proud of the team for applying a sound engineering approach and a lot of imagination to the solution of an extremely difficult real-world problem. They worked well together and never gave up when the going got rough," said Bruce Block, an engineer in the Space Physics Research Laboratory who worked with the students.
Other Space Physics Research Lab engineers who assisted are Steve Musko and Steve Rogacki.
Adapted from materials provided by University of Michigan.

Optical Computer Closer: Optical Transistor Made From Single Molecule

SOURCE

ScienceDaily (July 2, 2009) — ETH Zurich researchers have successfully created an optical transistor from a single molecule. This has brought them one step closer to an optical computer.
Internet connections and computers need to be ever faster and more powerful nowadays. However, conventional central processing units (CPUs) limit the performance of computers, for example because they produce an enormous amount of heat. The millions of transistors that switch and amplify the electronic signals in the CPUs are responsible for this. One square centimeter of CPU can emit up to 125 watts of heat, which is more than ten times as much as a square centimeter of an electric hotplate.
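The heat-density comparison above is easy to sanity-check. A minimal sketch: only the 125 W figure comes from the text, while the hotplate value is an assumption back-derived from the article's "more than ten times" claim.

```python
# Sanity check of the CPU heat-flux claim: 125 W per square centimeter,
# said to be more than ten times that of an electric hotplate.
cpu_w_per_cm2 = 125.0        # from the article
hotplate_w_per_cm2 = 12.0    # assumed: e.g. a ~2 kW plate over ~165 cm^2

ratio = cpu_w_per_cm2 / hotplate_w_per_cm2
print(f"CPU heat flux is {ratio:.1f}x that of the hotplate")
```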
Photons instead of electrons
This is why scientists have been trying for some time to find ways to produce integrated circuits that operate on the basis of photons instead of electrons. The reason is that photons not only generate much less heat than electrons, but also enable considerably higher data transfer rates.
Although a large part of telecommunications engineering nowadays is based on optical signal transmission, the necessary encoding of the information is generated using electronically controlled switches. A compact optical transistor is still a long way off. Vahid Sandoghdar, Professor at the Laboratory of Physical Chemistry of ETH Zurich, explains that, “Comparing the current state of this technology with that of electronics, we are somewhat closer to the vacuum tube amplifiers that were around in the fifties than we are to today’s integrated circuits.”
However, his research group has now achieved a decisive breakthrough by successfully creating an optical transistor with a single molecule. For this, they have made use of the fact that a molecule’s energy is quantized: when laser light strikes a molecule that is in its ground state, the light is absorbed. As a result, the laser beam is quenched. Conversely, it is possible to release the absorbed energy again in a targeted way with a second light beam. This occurs because the beam changes the molecule’s quantum state, with the result that the light beam is amplified. This so-called stimulated emission, which Albert Einstein described over 90 years ago, also forms the basis for the principle of the laser.
Focusing on a nano scale
Jaesuk Hwang, first author of the study and a scientific member of Sandoghdar’s nano-optics group, explains that, “Amplification in a conventional laser is achieved by an enormous number of molecules.” By focusing a laser beam on only a single tiny molecule, the ETH Zurich scientists have now been able to generate stimulated emission using just one molecule. They were helped in this by the fact that, at low temperatures, molecules seem to increase their apparent surface area for interaction with light. The researchers therefore needed to cool the molecule down to minus 272 degrees Celsius (minus 457.6 degrees Fahrenheit), i.e. one degree above absolute zero. In this case, the enlarged surface area corresponded approximately to the diameter of the focused laser beam.
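The quoted temperatures are mutually consistent; a quick unit-conversion check using the standard formulas (nothing here comes from the paper beyond the minus 272 degrees Celsius figure):

```python
# Convert the quoted cooling temperature between scales.
def c_to_k(t_c: float) -> float:
    """Celsius to kelvin."""
    return t_c + 273.15

def c_to_f(t_c: float) -> float:
    """Celsius to Fahrenheit."""
    return t_c * 9 / 5 + 32

print(c_to_k(-272))  # ~1.15 K, i.e. about one degree above absolute zero
print(c_to_f(-272))  # ~-457.6 F, matching the figure in the text
```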
Switching light with light
By using one laser beam to prepare the quantum state of a single molecule in a controlled fashion, scientists could significantly attenuate or amplify a second laser beam. This mode of operation is identical to that of a conventional transistor, in which electrical potential can be used to modulate a second signal.
Thus component parts such as the new single molecule transistor may also pave the way for a quantum computer. Sandoghdar says, “Many more years of research will still be needed before photons replace electrons in transistors. In the meantime, scientists will learn to manipulate and control quantum systems in a targeted way, moving them closer to the dream of a quantum computer.”
Journal reference:
J. Hwang, M. Pototschnig, R. Lettow, G. Zumofen, A. Renn, S. Götzinger, V. Sandoghdar. A single-molecule optical transistor. Nature, 2009; 460: 76-80. DOI: 10.1038/nature08134
Adapted from materials provided by ETH Zurich.

Inexpensive Thin Printable Batteries Developed

SOURCE

ScienceDaily (July 2, 2009) — For a long time, batteries were bulky and heavy. Now, a new cutting-edge battery is revolutionizing the field. It is thinner than a millimeter, lighter than a gram, and can be produced cost-effectively through a printing process.
In the past, every money transfer and every bank statement meant a race to the bank. Today, bank transactions can easily be carried out at home. But where is that piece of paper with the TAN numbers again? In the future, you can spare yourself the search: simply touch your EC card and a small integrated display shows the TAN number to be used. Just type in the number and off you go. This is made possible by a printable battery that can be produced cost-effectively on a large scale.
It was developed by a research team led by Prof. Dr. Reinhard Baumann of the Fraunhofer Research Institution for Electronic Nano Systems ENAS in Chemnitz together with colleagues from TU Chemnitz and Menippos GmbH. “Our goal is to be able to mass produce the batteries at a price in the single-digit cent range each,” states Dr. Andreas Willert, group manager at ENAS.
The characteristics of the battery differ significantly from those of conventional batteries. The printable version weighs less than one gram, is not even one millimeter thick and can therefore be integrated into bank cards, for example. The battery contains no mercury and is in this respect environmentally friendly. Its voltage is 1.5 V, which lies within the normal range. By connecting several batteries in a row, voltages of 3 V, 4.5 V and 6 V can also be achieved. The new type of battery is composed of different layers: a zinc anode and a manganese cathode, among others. Zinc and manganese react with one another and produce electricity. However, the anode and cathode layers are gradually consumed during this chemical process. The battery is therefore suitable for applications with a limited life span or a limited power requirement, for instance greeting cards.
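The voltage figures follow from simple series stacking: cell voltages add. A minimal sketch using the values from the paragraph above:

```python
# Printed cells connected in series: voltages simply add.
CELL_VOLTAGE = 1.5  # volts per printed cell, from the article

def stack_voltage(n_cells: int) -> float:
    """Total voltage of n_cells identical cells wired in series."""
    return n_cells * CELL_VOLTAGE

for n in (1, 2, 3, 4):
    print(f"{n} cell(s) in series: {stack_voltage(n):.1f} V")
```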
The batteries are printed using a silk-screen printing method similar to that used for t-shirts and signs. A kind of rubber lip presses the printing paste through a screen onto the substrate. A template covers the areas that are not to be printed on. Through this process it is possible to apply comparatively large quantities of printing paste, and the individual layers are slightly thicker than a hair. The researchers have already produced the batteries on a laboratory scale. At the end of this year, the first products could possibly be finished.
Adapted from materials provided by Fraunhofer-Gesellschaft.

Wednesday, July 1, 2009

New Statistical Technique Improves Precision Of Nanotechnology Data

SOURCE

ScienceDaily (June 30, 2009) — A new statistical analysis technique that identifies and removes systematic bias, noise and equipment-based artifacts from experimental data could lead to more precise and reliable measurement of nanomaterials and nanostructures likely to have future industrial applications.
Known as sequential profile adjustment by regression (SPAR), the technique could also reduce the amount of experimental data required to make conclusions, and help distinguish true nanoscale phenomena from experimental error. Beyond nanomaterials and nanostructures, the technique could also improve reliability and precision in nanoelectronics measurements – and in studies of certain larger-scale systems.
Accurate understanding of these properties is critical to the development of future high-volume industrial applications for nanomaterials and nanostructures because manufacturers will require consistency in their products.
"Our statistical model will be useful when the nanomaterials industry scales up from laboratory production because industrial users cannot afford to make a detailed study of every production run," said C. F. Jeff Wu, a professor in the Stewart School of Industrial and Systems Engineering at the Georgia Institute of Technology. "The significant experimental errors can be filtered out automatically, which means this could be used in a manufacturing environment."
Sponsored by the National Science Foundation, the research was reported June 25, 2009 in the early edition of the journal Proceedings of the National Academy of Sciences. The paper is believed to be the first to describe the use of statistical techniques for quantitative analysis of data from nanomechanical measurements.
Nanotechnology researchers have long been troubled by the difficulty of measuring nanoscale properties and separating signals from noise and data artifacts. Data artifacts can be caused by such issues as the slippage of structures being studied, surface irregularities and inaccurate placement of the atomic force microscope tip onto samples.
In measuring the effects of extremely small forces acting on extremely small structures, signals of interest may be only two or three times stronger than experimental noise. That can make it difficult to draw conclusions, and potentially masks other interesting effects.
"In the past, we have really not known the statistical reliability of the data at this size scale," said Zhong Lin Wang, a Regents' professor in Georgia Tech's School of Materials Science and Engineering. "At the nanoscale, small errors are amplified. This new technique applies statistical theory to identify and analyze the data received from nanomechanics so we can be more confident of how reliable it is."
In developing the new technique, the researchers studied a data set measuring the deformation of zinc oxide nanobelts, research undertaken to determine the material's elastic modulus. Theoretically, applying force to a nanobelt with the tip of an atomic force microscope should produce consistent linear deformation, but the experimental data didn't always show that.
In some cases, less force appeared to create more deformation, and the deformation curve was not symmetrical. Wang's research team attempted to apply simple data-correction techniques, but was not satisfied with the results.
"The measurements they had done simply didn't match what was expected with the theoretical model," explained Wu, who holds a Coca-Cola chair in engineering statistics. "The curves should have been symmetric. To address this issue, we developed a new modeling technique that uses the data itself to filter out the mismatch step-by-step using the regression technique."
Ideally, researchers would search out and correct the experimental causes of these data errors, but because they occur at such small size scales, that would be difficult, noted V. Roshan Joseph, an associate professor in the Georgia Tech School of Industrial and Systems Engineering.
"Physics-based models are based on several assumptions that can go wrong in reality," he said. "We could try to identify all the sources of error and correct them, but that is very time-consuming. Statistical techniques can more easily correct the errors, so this process is more geared toward industrial use."
Beyond correcting the errors, the improved precision of the statistical technique could reduce the effort required to produce reliable experimental data on the properties of nanostructures. "With half of the experimental efforts, you can get about the same standard deviation as following the earlier method without the corrections," Wu said. "This translates into fewer time-consuming experiments to confirm the properties."
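The "half the effort, same precision" claim matches the usual square-root scaling of the standard error of a mean: halving the per-measurement variance halves the number of samples needed for the same precision. A sketch with illustrative numbers (the sigma values are assumptions, not figures from the paper):

```python
# Standard error of a mean scales as sigma / sqrt(n), so cutting the
# per-measurement variance in half lets half as many experiments reach
# the same precision. Sigma values below are illustrative assumptions.
from math import sqrt

def standard_error(sigma: float, n: int) -> float:
    """Standard error of the mean of n measurements with scatter sigma."""
    return sigma / sqrt(n)

sigma_raw = 1.0                        # scatter before bias removal (assumed)
sigma_corrected = sigma_raw / sqrt(2)  # variance halved by the correction

se_raw = standard_error(sigma_raw, 100)             # 100 runs, uncorrected
se_corrected = standard_error(sigma_corrected, 50)  # 50 runs, corrected

print(f"uncorrected, n=100: SE = {se_raw:.4f}")
print(f"corrected,   n=50:  SE = {se_corrected:.4f}")
```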
For the future, the research team – which includes Xinwei Deng and Wenjie Mai in addition to those already mentioned – plans to analyze the properties of nanowires, which are critical to the operation of a family of nanoscale electric generators being developed by Wang's research team. Correcting for data errors in these structures will require development of a separate model using the same SPAR techniques, Wu said.
Ultimately, SPAR may lead researchers to new fundamental explanations of the nanoscale world.
"One of the key issues today in nanotechnology is whether the existing physical theories can still be applied to explain the phenomena we are seeing," said Wang, who is also director of Georgia Tech's Center for Nanostructure Characterization and Fabrication. "We have tried to answer the question of whether we are truly observing new phenomena, or whether our errors are so large that we cannot see that the theory still works."
Wang plans to use the SPAR technique on future work, and to analyze past research for potential new findings. "What may have seemed like noise could actually be an important signal," he said. "This technique provides a truly new tool for data mining and analysis in nanotechnology."
Adapted from materials provided by Georgia Institute of Technology.

Unexpectedly Long-range Effects In Advanced Magnetic Devices

ScienceDaily (July 1, 2009) — A tiny grid pattern has led materials scientists at the National Institute of Standards and Technology (NIST) and the Institute of Solid State Physics in Russia to an unexpected finding—the surprisingly strong and long-range effects of certain electromagnetic nanostructures used in data storage.
Their recently reported findings may add new scientific challenges to the design and manufacture of future ultra-high density data storage devices.
The team was studying the behavior of nanoscale structures that sandwich thin layers of materials with differing magnetic properties. In the past few decades such structures have been the subjects of intense research because they can have unusual and valuable magnetic properties. The data read heads on modern high-density disk drives usually exploit a version of the giant magnetoresistance (GMR) effect, which uses such layered structures for extremely sensitive magnetic field detectors.
Arrays of nanoscale sandwiches of a similar type might be used in future data storage devices that would outdo even today's astonishingly capacious microdrives because in principle the structures could be made even smaller than the minimum practical size for the magnetic islands that record data on hard disk drives, according to NIST metallurgist Robert Shull.
The key trick is to cover a thin layer of a ferromagnetic material, in which the magnetic directions of electrons, or "spins," tend to order themselves in the same direction, with an antiferromagnetic layer in which the spins tend to orient in opposite directions. By itself, the ferromagnetic layer will tend to magnetize in the direction of an externally imposed magnetic field—and just as easily magnetize in the opposite direction if the external field is reversed. For reasons that are still debated, the presence of the antiferromagnetic layer changes this: it biases the ferromagnet in one preferred direction, essentially pinning its field in that orientation. In a magnetoresistance read head, for example, this pinned layer serves as a reference direction that the sensor uses in detecting changing field directions on the disk it is "reading."
Researchers have long understood this pinning effect to be a short-range phenomenon. The influence of the antiferromagnetic layer is felt only a few tens of nanometers down into the ferromagnetic layer—vertically. But what about sideways? To find out, the NIST/ISSP team started with a thin ferromagnetic film covering a silicon wafer and then added on top a grid of antiferromagnetic strips about 10 nanometers thick and 10 micrometers wide, separated by gaps of about 100 micrometers. Using an instrument that provided real-time images of the magnetization within the grid structure, the team watched as they increased and decreased the magnetic field surrounding it.
What they found surprised them.
As expected, the ferromagnetic material directly under the grid lines showed the pinning effect, but, quite unexpectedly, so did the uncovered material in regions between the grid lines far removed from the antiferromagnetic material. "This pinning effect extends for maybe tens of nanometers down into the ferromagnet right underneath," explains Shull, "so you might expect that there could be some residual effect maybe tens of nanometers away from it to the sides. But you wouldn't expect it to extend 10 micrometers away—that's 10 thousand nanometers." In fact, the effect extends to regions 50 micrometers away from the closest antiferromagnetic strip, at least 1,000 times further than was previously known to be possible.
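Shull's scale comparison is worth making explicit; a quick arithmetic check (the "tens of nanometers" figure from the article is taken here as 50 nm):

```python
# Length-scale check for the lateral pinning-range result.
NM_PER_UM = 1000           # nanometers per micrometer

known_range_nm = 50        # previously known reach: a few tens of nanometers
observed_range_um = 50     # lateral reach observed in this study
observed_range_nm = observed_range_um * NM_PER_UM

print(f"{observed_range_nm // known_range_nm}x further than previously known")
```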
The ramifications, says Shull, are that engineers planning to build dense arrays of these structures onto a chip for high-performance memory or sensor devices will face interesting new scientific questions in working out how closely the structures can be packed without interfering with one another.
Journal reference:
Kabanov et al. Unexpectedly long-range influence on thin-film magnetization reversal of a ferromagnet by a rectangular array of FeMn pinning films. Physical Review B, 2009; 79 (14): 144435 DOI: 10.1103/PhysRevB.79.144435
Adapted from materials provided by National Institute of Standards and Technology.

Quantum Communications One Step Closer: Novel Ion Trap For Sensing Force And Light Developed

SOURCE

ScienceDaily (July 1, 2009) — Miniature devices for trapping ions (electrically charged atoms) are common components in atomic clocks and quantum computing research. Now, a novel ion trap geometry demonstrated at the National Institute of Standards and Technology (NIST) could usher in a new generation of applications because the device holds promise as a stylus for sensing very small forces or as an interface for efficient transfer of individual light particles for quantum communications.
The "stylus trap," built by physicists from NIST and Germany's University of Erlangen-Nuremberg, is described in Nature Physics. It uses fairly standard techniques to cool ions with laser light and trap them with electromagnetic fields. But whereas in conventional ion traps, the ions are surrounded by the trapping electrodes, in the stylus trap a single ion is captured above the tip of a set of steel electrodes, forming a point-like probe. The open trap geometry allows unprecedented access to the trapped ion, and the electrodes can be maneuvered close to surfaces. The researchers theoretically modeled and then built several different versions of the trap and characterized them using single magnesium ions.
The new trap, if used to measure forces with the ion as a stylus probe tip, is about one million times more sensitive than an atomic force microscope using a cantilever as a sensor because the ion is lighter in mass and reacts more strongly to small forces. In addition, ions offer combined sensitivity to both electric and magnetic fields or other force fields, producing a more versatile sensor than, for example, neutral atoms or quantum dots. By either scanning the ion trap near a surface or moving a sample near the trap, a user could map out the near-surface electric and magnetic fields. The ion is extremely sensitive to electric fields oscillating at between approximately 100 kilohertz and 10 megahertz.
The new trap also might be placed in the focus of a parabolic (cone-shaped) mirror so that light beams could be focused directly on the ion. Under the right conditions, single photons, particles of light, could be transferred between an optical fiber and the single ion with close to 95 percent efficiency. Efficient atom-fiber interfaces are crucial in long-distance quantum key distribution (QKD), the best method known for protecting the privacy of a communications channel. In quantum computing research, fluorescent light emitted by ions could be collected with similar efficiency as a read-out signal. The new trap also could be used to compare heating rates of different electrode surfaces, a rapid approach to investigating a long-standing problem in the design of ion-trap quantum computers.
Research on the stylus trap was supported by the Intelligence Advanced Research Projects Activity.
Journal reference:
R. Maiwald, D. Leibfried, J. Britton, J.C. Bergquist, G. Leuchs, and D.J. Wineland. Stylus ion trap for enhanced access and sensing. Nature Physics, published online June 28, 2009
Adapted from materials provided by National Institute of Standards and Technology.

Researchers Unveil Whiskered Robot Rat

SOURCE

ScienceDaily (June 30, 2009) — A team of scientists have developed an innovative robot rat which can seek out and identify objects using its whiskers. The SCRATCHbot robot will be demonstrated this week (1 July 2009) at an international workshop looking at how robots can help us examine the workings of the brain.
Researchers from the Bristol Robotics Lab, (a partnership between the University of the West of England and the University of Bristol) and the University of Sheffield have developed the SCRATCHbot, which is a significant milestone in the pan-European “ICEA” project to develop biologically-inspired artificial intelligence systems. As part of this project Professor Tony Prescott, from the University of Sheffield’s Department of Psychology, is working with the Bristol Robotics Lab to design innovative artificial touch technologies for robots that will also help us understand how the brain controls the movement of the sensory systems.
The new technology has been inspired by the use of touch in the animal kingdom. In nocturnal creatures, or those that inhabit poorly-lit places, this physical sense is widely preferred to vision as a primary means of discovering the world. Rats are especially effective at exploring their environments using their whiskers. They are able to accurately determine the position, shape and texture of objects using precise rhythmic sweeping movements of their whiskers, make rapid accurate decisions about objects, and then use the information to build environmental maps.
Robot designs often rely on vision to identify objects, but this new technology relies solely on sophisticated touch technology, enabling the robot to function in spaces such as dark or smoke-filled rooms, where vision cannot be used.
The new technology has the potential for a number of further applications from using robots underground, under the sea, or in extremely dusty conditions, where vision is often seriously compromised. The technology could also be used for tactile inspection of surfaces, such as materials in the textile industry, or closer to home in domestic products, for example vacuum cleaners that could sense textures for optimal cleaning.
Dr Tony Pipe, (BRL, UWE), says “For a long time, vision has been the biological sensory modality most studied by scientists. But active touch sensing is a key focus for those of us looking at biological systems which have implications for robotics research. Sensory systems such as rats’ whiskers have some particular advantages in this area. In humans, for example, where sensors are at the fingertips, they are more vulnerable to damage and injury than whiskers. Rats have the ability to operate with damaged whiskers and in theory broken whiskers on robots could be easily replaced, without affecting the whole robot and its expensive engineering.
“Future applications for this technology could include using robots underground, under the sea, or in extremely dusty conditions, where vision is often a seriously compromised sensory modality. Here, whisker technology could be used to sense objects and manoeuvre in a difficult environment. In a smoke filled room for example, a robot like this could help with a rescue operation by locating survivors of a fire. This research builds on previous work we have done on whisker sensing.”
Professor Prescott said: “Our project has reached a significant milestone in the development of actively-controlled, whisker-like sensors for intelligent machines. Although touch sensors are already employed in robots, the use of touch as a principal modality has been overlooked until now. By developing these biomimetic robots, we are not just designing novel touch-sensing devices, but also making a real contribution to understanding the biology of tactile sensing.”
Adapted from materials provided by University of the West of England.