ScienceDaily (July 17, 2009) — A robotic vision system that mimics key visual functions of the human brain promises to let robots manoeuvre quickly and safely through cluttered environments, and to help guide the visually impaired.
It’s something any toddler can do – cross a cluttered room to find a toy.
It's also one of those seemingly trivial skills that have proved to be extremely hard for computers to master. Analysing shifting and often-ambiguous visual data to detect objects and separate their movement from one’s own has turned out to be an intensely challenging artificial intelligence problem.
Three years ago, researchers at the European-funded research consortium Decisions in Motion (http://www.decisionsinmotion.org/) decided to look to nature for insights into this challenge.
In a rare collaboration, neuro- and cognitive scientists studied how the visual systems of advanced mammals, primates and people work, while computer scientists and roboticists incorporated their findings into neural networks and mobile robots.
The approach paid off. Decisions in Motion has already built and demonstrated a robot that can zip across a crowded room guided only by what it “sees” through its twin video cameras, and is hard at work on a head-mounted system to help visually impaired people get around.
“Until now, the algorithms that have been used are quite slow and their decisions are not reliable enough to be useful,” says project coordinator Mark Greenlee. “Our approach allowed us to build algorithms that can do this on the fly, that can make all these decisions within a few milliseconds using conventional hardware.”
How do we see movement?
The Decisions in Motion researchers used a wide variety of techniques to learn more about how the brain processes visual information, especially information about movement.
These included recording individual neurons and groups of neurons firing in response to movement signals, functional magnetic resonance imaging to track the moment-by-moment interactions between different brain areas as people performed visual tasks, and neuropsychological studies of people with visual processing problems.
The researchers hoped to learn more about how the visual system scans the environment, detects objects, discerns movement, distinguishes between the independent movement of objects and the organism’s own movements, and plans and controls motion towards a goal.
One of their most interesting discoveries was that the primate brain does not just detect and track a moving object; it actually predicts where the object will go.
“When an object moves through a scene, you get a wave of activity as the brain anticipates its trajectory,” says Greenlee. “It’s like feedback signals flowing from the higher areas in the visual cortex back to neurons in the primary visual cortex to give them a sense of what’s coming.”
Greenlee compares what an individual visual neuron sees to looking at the world through a peephole. Researchers have known for a long time that high-level processing is needed to build a coherent picture out of a myriad of those tiny glimpses. What's new is the importance of strong anticipatory feedback for perceiving and processing motion.
“This proved to be quite critical for the Decisions in Motion project,” Greenlee says. “It solves what is called the ‘aperture problem’, the problem of the neurons in the primary visual cortex looking through those little peepholes.”
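To make the idea concrete, here is a minimal sketch (not the project’s code; the vectors and the combination rule are illustrative assumptions) of how top-down feedback can resolve the aperture problem: a local detector viewing an edge through its “peephole” measures only the motion component perpendicular to the edge, and a higher-level prediction of the object’s trajectory supplies the missing component along the edge.

```python
# Illustrative sketch only: how anticipatory feedback could resolve the
# aperture problem. A single V1-like unit sees just the velocity component
# normal to a moving edge; a top-down trajectory prediction fills in the
# tangential component it cannot measure.
import numpy as np

def local_measurement(true_velocity, edge_normal):
    """What a single local unit sees: only the component along the edge normal."""
    n = edge_normal / np.linalg.norm(edge_normal)
    return np.dot(true_velocity, n) * n

def combine_with_feedback(normal_component, predicted_velocity, edge_normal):
    """Keep the measured normal component; take the tangential component
    from the higher-level prediction (the feedback signal)."""
    n = edge_normal / np.linalg.norm(edge_normal)
    tangential = predicted_velocity - np.dot(predicted_velocity, n) * n
    return normal_component + tangential

true_v = np.array([2.0, 1.0])       # actual object velocity
normal = np.array([1.0, 0.0])       # vertical edge: only horizontal motion is visible locally
seen = local_measurement(true_v, normal)          # [2.0, 0.0]
predicted = np.array([1.8, 1.1])                  # imperfect top-down prediction
estimate = combine_with_feedback(seen, predicted, normal)
print(seen, estimate)               # the estimate recovers the missing vertical component
```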
Building a better robotic brain
Armed with a better understanding of how the human brain deals with movement, the project’s computer scientists and roboticists went to work. Using off-the-shelf hardware, they built a neural network with three levels mimicking the brain’s primary, mid-level, and higher-level visual subsystems.
They used what they had learned about the flow of information between brain regions to control the flow of information within the robotic “brain”.
“It’s basically a neural network with certain biological characteristics,” says Greenlee. “The connectivity is dictated by the numbers we have from our physiological studies.”
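The following is a hedged sketch of that kind of architecture: a simple three-level network with feedforward connections and top-down feedback, loosely mirroring primary, mid-level and higher-level visual areas. The layer sizes, weights and update rule here are invented for demonstration and are not the consortium’s actual model.

```python
# Assumed structure, not the consortium's model: three processing levels,
# each feeding forward, with the higher levels sending feedback that
# modulates the stages below them.
import numpy as np

rng = np.random.default_rng(0)

class ThreeLevelVisualNet:
    def __init__(self, sizes=(256, 64, 16)):
        # Feedforward weights between consecutive levels.
        self.W_fwd = [rng.normal(0, 0.1, (sizes[i + 1], sizes[i])) for i in range(2)]
        # Feedback weights from each higher level back to the one below it.
        self.W_back = [rng.normal(0, 0.1, (sizes[i], sizes[i + 1])) for i in range(2)]

    def step(self, retinal_input, n_iters=5):
        v1 = np.tanh(retinal_input)
        mid = np.zeros(self.W_fwd[0].shape[0])
        high = np.zeros(self.W_fwd[1].shape[0])
        for _ in range(n_iters):
            # Feedforward sweep, with feedback from above biasing each stage.
            v1 = np.tanh(retinal_input + self.W_back[0] @ mid)
            mid = np.tanh(self.W_fwd[0] @ v1 + self.W_back[1] @ high)
            high = np.tanh(self.W_fwd[1] @ mid)
        return v1, mid, high

net = ThreeLevelVisualNet()
v1, mid, high = net.step(rng.normal(size=256))
print(high.shape)   # (16,) higher-level representation that would drive behaviour
```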
In real time, the computerised brain controls the behaviour of a wheeled robotic platform that supports a moveable head and eyes. It directs the head and eyes where to look, tracks its own movement, identifies objects, determines whether they are moving independently, and directs the platform to speed up, slow down and turn left or right.
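That behavioural loop can be pictured with a toy decision rule like the one below; the distance threshold, speed scaling and steering rule are illustrative assumptions, not the project’s published control law.

```python
# Illustrative only: slow down as the nearest obstacle gets closer and
# steer toward whichever side offers more clearance.
def choose_command(distances_left, distances_right, safe_distance=1.5, max_speed=1.0):
    nearest = min(distances_left + distances_right)
    # Speed scales with clearance: full speed when well clear, slower when close.
    speed = max_speed * min(1.0, nearest / safe_distance)
    # Turn toward the side with the greater free space.
    turn = "left" if min(distances_left) > min(distances_right) else "right"
    return speed, turn

print(choose_command([2.5, 3.0], [0.8, 1.2]))   # slows down and turns left, away from the close obstacle
```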
Greenlee and his colleagues were intrigued when the robot found its way to its first target – a teddy bear – just like a person would, speeding by objects that were at a safe distance, but passing nearby obstacles at a slower pace.
“That was very exciting,” Greenlee says. “We didn’t program it in – it popped out of the algorithm.”
In addition to improved guidance systems for robots, the consortium envisions a lightweight system that could be worn like eyeglasses by visually or cognitively impaired people to boost their mobility. One of the consortium partners, Cambridge Research Systems, is developing a commercial version of this, called VisGuide.
Decisions in Motion received funding from the ICT strand of the EU’s Sixth Framework Programme for research. The project’s work was featured in a video by the New Scientist in February this year.
Adapted from materials provided by ICT Results.