A statistical tool can improve robots' 'vision' by helping them better recognise objects and gauge their orientation in the world around them.
Object recognition is one of the most widely studied problems in computer vision, researchers said.
To improve robots' ability to gauge object orientation, Jared Glover, a graduate student in the Massachusetts Institute of Technology's (MIT) Department of Electrical Engineering and Computer Science, is exploiting a statistical construct called the Bingham distribution.
In a paper to be presented at the International Conference on Intelligent Robots and Systems, Glover and MIT alumna Sanja Popovic, who is now at Google, describe a new robot-vision algorithm, based on the Bingham distribution, that is 15 per cent better than its best competitor at identifying familiar objects in cluttered scenes.
That algorithm, however, is for analysing high-quality visual data in familiar settings.
Because the Bingham distribution is a tool for reasoning probabilistically, it promises even greater advantages in contexts where information is patchy or unreliable.
In cases where visual information is particularly poor, the algorithm offers an improvement of more than 50 per cent over the best alternatives.
"Alignment is key to many problems in robotics, from object-detection and tracking to mapping," Glover said.
"And ambiguity is really the central challenge to getting good alignments in highly cluttered scenes, like inside a refrigerator or in a drawer.
"That's why the Bingham distribution seems to be a useful tool, because it allows the algorithm to get more information out of each ambiguous, local feature," Glover said.
One reason the Bingham distribution is so useful for robot vision is that it provides a way to combine information from different sources, researchers said.
Determining an object's orientation entails trying to superimpose a geometric model of the object over visual data captured by a camera -- in the case of Glover's work, a Microsoft Kinect camera, which captures a 2-D colour image together with information about the distance of the colour patches.
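Loosely speaking, the Bingham distribution is a probability distribution over unit vectors; when the unit vectors are quaternions, it describes uncertainty about a 3-D orientation, with a density proportional to exp(xᵀAx) for a symmetric parameter matrix A. One property that makes it convenient for pooling evidence is that the product of two Bingham densities is again proportional to a Bingham density whose parameter matrices simply add. The short Python sketch below illustrates that idea only; the quaternions and concentration values are made-up placeholders, and this is not the algorithm from Glover and Popovic's paper.

```python
import numpy as np

def bingham_log_density(x, A):
    """Unnormalised log-density of a Bingham distribution: log p(x) = x'Ax + const.

    x : a unit vector (a unit quaternion when modelling 3-D orientation)
    A : a symmetric parameter matrix; its dominant eigenvector is the most
        likely orientation, and the spread of its eigenvalues controls how
        concentrated the distribution is
    """
    x = np.asarray(x, dtype=float)
    x = x / np.linalg.norm(x)
    return float(x @ A @ x)

def fuse(A1, A2):
    """Combine two Bingham orientation estimates.

    The product of two Bingham densities is proportional to another Bingham
    density whose parameter matrices add, which is what makes the family
    handy for pooling evidence from many ambiguous local features.
    """
    return A1 + A2

def mode(A):
    """Most likely orientation: the eigenvector of A with the largest eigenvalue."""
    vals, vecs = np.linalg.eigh(A)
    return vecs[:, np.argmax(vals)]

if __name__ == "__main__":
    # Two hypothetical orientation cues expressed as unit quaternions
    # (values and concentrations below are illustrative, not from the paper).
    q1 = np.array([1.0, 0.0, 0.0, 0.0])
    q2 = np.array([0.9, 0.1, 0.0, 0.0])
    q2 /= np.linalg.norm(q2)

    A1 = 5.0 * np.outer(q1, q1)   # cue 1, moderately confident
    A2 = 3.0 * np.outer(q2, q2)   # cue 2, less confident

    fused = fuse(A1, A2)
    print("fused mode (unit quaternion):", mode(fused))
    print("log-density at the fused mode:", bingham_log_density(mode(fused), fused))
```

In this toy example, fusing the two cues yields an orientation estimate that sits between them, weighted by how concentrated each cue is.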
In experiments involving visual data about particularly cluttered scenes -- depicting the kinds of environments in which a household robot would operate -- Glover's algorithm had about the same false-positive rate as the best existing algorithm: About 84 per cent of its object identifications were correct, versus 83 per cent for the competition.
But it was able to identify a significantly higher percentage of the objects in the scenes -- 73 per cent versus 64 per cent.