New York: Imagine letting a robot clean your house while you are at work, or clear the table after dinner. That’s exactly what a novel robot developed by researchers at MIT can do. The team at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) developed a system that lets robots inspect random objects and visually understand them well enough to accomplish specific tasks without ever having seen them before.
The system, dubbed “Dense Object Nets” (DON), looks at objects as collections of points that serve as “visual roadmaps” of sorts. This approach lets robots better understand and manipulate items and allows them to even pick up a specific object among a clutter of similar objects — a valuable skill for the kinds of machines that companies like Amazon and Walmart use in their warehouses, the researchers said.
“Many approaches to manipulation can’t identify specific parts of an object across the many orientations that object may encounter,” said Lucas Manuelli, a doctoral student at CSAIL. “For example, existing algorithms would be unable to grasp a mug by its handle, especially if the mug could be in multiple orientations, like upright or on its side,” Manuelli added.
The DON system essentially creates a series of coordinates on a given object, which serve as a kind of “visual roadmap” of the object, giving the robot a better understanding of what it needs to grasp, and where. It is “self-supervised” and does not require any human annotations. In one set of tests in the study, performed on a soft caterpillar toy, a Kuka robotic arm powered by DON was able to grasp the toy’s right ear from a range of different configurations.
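The idea of a per-pixel “visual roadmap” can be pictured as a dense map of descriptor vectors, where the same physical point on an object gets a similar descriptor regardless of pose. The sketch below is a minimal illustration in NumPy, not the authors’ implementation: it assumes a trained network has already produced a descriptor image, and simply locates the pixel whose descriptor best matches a reference descriptor recorded earlier (say, at the toy’s right ear).

```python
import numpy as np

def find_target_pixel(descriptor_image, reference_descriptor):
    """Locate the pixel whose descriptor is closest (in L2 distance)
    to a reference descriptor.

    descriptor_image: (H, W, D) array of per-pixel descriptors,
                      assumed to come from a trained dense network.
    reference_descriptor: (D,) descriptor of the target point.
    Returns the (row, col) of the best-matching pixel.
    """
    dists = np.linalg.norm(descriptor_image - reference_descriptor, axis=-1)
    return np.unravel_index(np.argmin(dists), dists.shape)

# Toy example: a 4x4 "image" with 3-dimensional descriptors.
rng = np.random.default_rng(0)
desc = rng.normal(size=(4, 4, 3))
target = desc[2, 1]  # pretend this pixel was labelled as the grasp point
row, col = find_target_pixel(desc, target)
print(row, col)  # best match is the pixel we took the descriptor from
```

Because the match is done in descriptor space rather than pixel space, the same reference descriptor can locate the corresponding point even when the object is seen in a new pose, which is what lets the robot grasp the same ear across different configurations.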
This showed that, among other things, the system has the ability to distinguish left from right on symmetrical objects. “In factories, robots often need complex part feeders to work reliably,” Manuelli said. “But a system like this that can understand objects’ orientations could just take a picture and be able to grasp and adjust the object accordingly.” The team will present their paper on the system at the forthcoming Conference on Robot Learning in Zurich, Switzerland.