Human interaction is remarkably complex, and it takes years to master the mechanics of even a basic conversation. Toddlers learning to talk struggle to formulate relevant replies based on their surroundings, but eventually they get the hang of it through natural exposure and practice. For droids, though, that learning method isn’t effective on its own. “Think about kids. They learn a lot through trial and error,” said Osaro founder and CEO Itamar Arel. “They come to understand what maximizes pleasure and minimizes pain.”
For artificial intelligence (AI), one of the best ways to build an understanding of reality is through video games. At DeepMind, a Google subsidiary based in Cambridge, England, a group of engineers is teaching bots how to play classic games like Space Invaders. The team believes such games are the ideal primer for the real world: a closer look at these vintage titles suggests there is more to them than pixelated graphics and one-dimensional controls.
An Atari game is a self-contained digital environment, which makes it easier for an AI to learn from than, say, sifting through a photo album. Even the simple act of looking at pictures must be taught to a robot, because it doesn’t know what to look for or how to interpret the images. Wiring a machine to perceive like a human is an incredibly complicated process: scientists must translate the natural world into code and algorithms that droids can process. Engineers at DeepMind hope that exposing AI to video games will prepare it for real-world roles, from powering fancy toys to assisting the elderly and driving people around autonomously.
Google isn’t alone in the race to build the world’s smartest robot. Osaro, a San Francisco-based startup, is also developing software that gives bots a heightened sense of perception. It recently raised $3.3 million in a funding round whose investors included Scott Banister, Jerry Yang’s AME Cloud Ventures, and Peter Thiel. The company’s project was inspired by DeepMind, but it does not want to be compared with the better-known system: the startup claims it has created something far superior.
Engineers at Osaro have reportedly developed AI software that can process information 100 times faster than its competitors, a big deal for robots that otherwise struggle to make sense of their environment. To achieve this, the team added a new AI layer that helps its core neural networks function more effectively. The layer uses reinforcement learning, which relies on trial and error to explore new environments and produce calculated responses based on fresh data. Like DeepMind, the startup also used video games to train its creations.
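The trial-and-error loop described here can be sketched with tabular Q-learning, one of the simplest forms of reinforcement learning. This is a minimal illustration of the general technique, not Osaro’s or DeepMind’s actual system; the toy corridor environment and all hyperparameters below are invented for the example.

```python
import random

# Toy environment: a 1-D corridor of 5 cells. The agent starts in cell 0
# and receives a reward of +1 for reaching the rightmost cell (the goal).
N_STATES = 5
ACTIONS = (-1, +1)  # move left or move right

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    next_state = max(0, min(N_STATES - 1, state + action))
    if next_state == N_STATES - 1:
        return next_state, 1.0, True
    return next_state, 0.0, False

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning: estimate action values purely by trial and error."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit the best-known action, but
            # occasionally try a random one (the "trial" in trial and error).
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                best = max(q[(state, a)] for a in ACTIONS)
                action = rng.choice([a for a in ACTIONS if q[(state, a)] == best])
            next_state, reward, done = step(state, action)
            # Update: nudge the estimate toward the observed reward plus the
            # discounted value of the best action in the next state.
            target = reward + gamma * max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (target - q[(state, action)])
            state = next_state
    return q

q = train()
# The learned greedy policy: the best action in each non-goal state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
```

After a few hundred episodes the agent, given nothing but rewards, learns to move right in every state. The same idea, with deep neural networks standing in for the lookup table, is what lets systems learn Atari games from raw pixels.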
“Learning more directly from human demonstrations and advice in all kinds of formats is intuitively the way to get a system to learn more quickly,” said Pieter Abbeel, a professor of computer science at the University of California, Berkeley. “However, developing a system that is able to leverage a wide range of learning modalities is challenging.”