Elon Musk, a highly successful entrepreneur, finds artificial intelligence (AI) frightening. His fears are well founded: new technology is advancing exponentially faster than the government policies and regulations meant to govern it.
To ensure AI is kept on a tight leash, Musk is investing $10 million in a range of high-level research projects focused on keeping advanced, complex technology safe and beneficial.
“Building advanced AI is like launching a rocket,” said Jaan Tallinn, one of the founders of the Future of Life Institute, in a statement. “The first challenge is to maximize acceleration, but once it starts picking up speed, you also need to focus on steering.”
From Economics to Human Interaction
According to the Future of Life Institute, below are some of the notable research projects approved to receive funding:
- Three projects developing techniques for AI systems to learn what humans prefer from observing our behavior, including projects at UC Berkeley and Oxford University
- A project by Benja Fallenstein at the Machine Intelligence Research Institute on how to keep the interests of superintelligent systems aligned with human values
- A project led by Manuela Veloso from Carnegie Mellon University on making AI systems explain their decisions to humans
- A study by Michael Webb of Stanford University on how to keep the economic impacts of AI beneficial
- A project headed by Heather Roff studying how to keep AI-driven weapons under “meaningful human control”
- A new Oxford-Cambridge research center for studying AI-relevant policy
Over 300 applicants submitted entries for the grants. At this stage, funding for such projects is difficult to acquire, as most investors are more interested in mainstream initiatives with sizable returns. For now, the leaders pushing the AI frontier include Facebook, IBM, Microsoft and Google’s DeepMind Technologies.
Prioritizing Humans over Robots
To prevent AI from doing the unthinkable, the technology has to be addressed from multiple angles. For example, developing algorithms that limit AI’s capacity to harm humans is a direct approach to the issue; but such projects need to be coupled with educational initiatives that help people understand how to interact with highly developed machines.
Because the technology is applicable in virtually every industry, it is also important to ensure proper legal guidelines are in place to discourage people from using it for destructive purposes.
No single organization can take on all of the issues surrounding the development of massively sophisticated systems. A network of researchers, humanitarians, policymakers and scientists is needed to keep AI innovation on a beneficial course.
“In its early days, AI research focused on the ‘known knowns’ by working on problems such as chess and blocks world planning, where everything about the world was known exactly. Starting in the 1980s, AI research began studying the ‘known unknowns’ by using probability distributions to represent and quantify the likelihood of alternative possible worlds,” explained Tom Dietterich, president of the Association for the Advancement of Artificial Intelligence.
“The FLI grant will launch work on the ‘unknown unknowns’: How can an AI system behave carefully and conservatively in a world populated by unknown unknowns — aspects that the designers of the AI system have not anticipated at all?”