Robots that perform tasks according to a predetermined set of instructions are nothing new. But robots capable of learning how to do something by watching videos are a completely different approach, one being pursued by a group of researchers at the University of Maryland.
If you have posted a video of your favorite recipe on the popular website YouTube, chances are that one day a robot will try it.
University of Maryland professor Yiannis Aloimonos, who leads a team trying to teach a robot to reproduce simple tasks by watching videos, said, “There exists a gargantuan amount of video information on the Internet that we can capitalize on and use our robots in order to learn.”
At the moment, the videos are fed to the robot electronically, said research scientist Cornelia Fermuller:
“Originally, actually, we took our own videos, so our own cameras looking at ourselves doing the cooking,” she explained. “And, as [the robot] advance[d], it moved on to good-quality videos. And it will move on to even lesser-quality homemade videos.”
So far, a robot named Julia can pour ingredients, add dressing and stir a simple salad. It learns by breaking each task into basic components, such as grasping a spoon, bringing it to the bowl, stirring the salad and observing the results.
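The decomposition described above can be pictured as a task broken into a sequence of primitive actions. The following is a minimal illustrative sketch, not the Maryland team's actual software; the class and action names (`Primitive`, `Task`, `grasp`, and so on) are invented here for illustration.

```python
from dataclasses import dataclass, field


@dataclass
class Primitive:
    """One basic component of a task, e.g. grasping an object."""
    name: str    # the action, e.g. "grasp" (hypothetical label)
    target: str  # the object acted on, e.g. "spoon"


@dataclass
class Task:
    """A task represented as an ordered list of primitives."""
    name: str
    steps: list = field(default_factory=list)

    def add(self, name, target):
        # Append one primitive and return self so calls can be chained.
        self.steps.append(Primitive(name, target))
        return self

    def plan(self):
        # Render the sequence of primitives as readable action strings.
        return [f"{p.name}({p.target})" for p in self.steps]


# The salad-stirring example from the article, spelled out as primitives.
stir_salad = (
    Task("stir a simple salad")
    .add("grasp", "spoon")
    .add("move_to", "bowl")
    .add("stir", "salad")
    .add("observe", "result")
)
print(stir_salad.plan())
```

In a real system, each primitive would have to be recognized in the video and mapped to a motor command; this sketch only shows the symbolic representation a robot might build from what it watches.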
Julia can see, hear and remember things, then combine those components to execute certain tasks. Aloimonos said the project amounts to an integration of all those capabilities.
“How do we demonstrate this capability? By basically developing software that utilizes the perception, things that you see or hear, the knowledge that you have somehow in the computer, organized appropriately and the motor capabilities, the abilities that you have to move your hands and your fingers and affect the world,” said Aloimonos.
But why teach a robot how to understand a video when it can easily follow a fixed program? Aloimonos said predetermined instructions lack flexibility.
“You have a system that is done just for this particular task. And so, it can’t generalize," he said. "You cannot take it and put it into a different environment. It is not flexible.”
Aloimonos said one of the problems the researchers are trying to solve is how to make the robot understand and use what it learns while performing a task, what is known as feedback. Another problem, he said, will be the introduction of language.
“I believe it will take quite some time before the robots are able to understand metaphorical language like ‘the unbearable lightness of being,’” he added.
But, he said, we don’t need that in order to create a new world in which the robots will be working for us.