
The latest instalment in Marvel’s Avengers franchise of blockbuster films introduces Ultron, arguably the Avengers’ fiercest enemy. In a lengthy article chronicling Ultron’s villainous history, Vox’s Alex Abad-Santos details how Ultron embodies our fears about artificial intelligence and what happens when machines become more intelligent than humans.

This tipping point, where machines stop relying on humans, become self-sufficient, and overtake human intelligence, is known as ‘the singularity’. Popular films like The Matrix, Terminator, and Blade Runner all explore a futuristic scenario in which humans are enslaved by machines and treated as mere resources. It’s a scary world indeed, and with machines replacing an increasing number of previously human tasks, it can seem only a matter of time before Skynet takes over.

The concept of the singularity has been around since the mid-20th century, and more recently proponents like futurist Ray Kurzweil have predicted that, given the rate of technological improvement and computers’ ever-increasing processing power, we’re only a couple of decades away from this catastrophic event.
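To see why such timelines hinge on assumptions, here is a rough back-of-the-envelope sketch in Python of the kind of extrapolation these predictions rest on. Every figure in it is an illustrative assumption, not a number from this article: published estimates of the computing power needed to emulate a human brain vary by many orders of magnitude, and the doubling time is simply a Moore’s-law-style guess.

```python
import math

# Naive extrapolation: how long until available computing power reaches a
# hypothetical "brain-scale" target, if exponential growth simply continues?
# All values below are illustrative assumptions, not established figures.

target_ops_per_sec = 1e18    # assumed compute needed to emulate a human brain
current_ops_per_sec = 1e15   # assumed compute available today
doubling_time_years = 1.5    # assumed Moore's-law-style doubling period

doublings_needed = math.log2(target_ops_per_sec / current_ops_per_sec)
years_until_parity = doublings_needed * doubling_time_years

print(f"Doublings needed: {doublings_needed:.1f}")                          # ~10
print(f"Years until parity, if the trend holds: {years_until_parity:.0f}")  # ~15
```

Change any one of those assumptions – the brain estimate, the starting point, or the doubling time – and the answer swings by decades, which is precisely the fragility critics point to.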

Some experts dispute these claims, arguing that the singularity is very difficult to predict: human intelligence isn’t just the ability to string together billions of ones and zeroes in a fraction of a second; it also includes creativity, the ability to think laterally, and emotional intelligence. In fact, the debate around artificial intelligence is as much about philosophy as it is about technology. It’s one thing to create computational power, but what about morals, creativity and sound judgement – how do you synthesize a soul?

However, these are all human traits, concerning matters that involve humans, and they aren’t necessarily key to the singularity. The concept of the singularity is based on machines’ self-reliance, self-improvement, and superior intelligence – it’s more about them than about us. If the machines don’t need us anymore, why would they need these human qualities?

Despite this, using advances in computational power to predict the timing of the singularity is naïve, says forensic scientist Charles Brown*, who holds a PhD in computer science. In his opinion, most research into so-called Artificial General Intelligence (AGI) – an intelligence system capable of solving generic problems such as optimising itself, which lies at the heart of runaway AI capabilities – focusses on creating a simulation of a human brain.

Brown uses the example of heavier-than-air flight: birds are heavier than air and birds fly, so it would seem reasonable to assume that if you learn everything about birds, you could replicate flight. But having the technological know-how to build a mechanical bird doesn’t mean you’re going to achieve your goal of flight. People have been studying birds for centuries, yet only after the invention of the internal combustion engine – something totally unrelated to birds – was heavier-than-air flight made possible.

“It’s sort of a best-case worst-case estimate,” says Brown. “It’s worst-case because we ignore all other possible ways to get to AGI and instead focus on replicating an existing AGI system, the human brain. It’s best-case because we assume we understand the processes that need to be simulated in order to replicate the function of the brain, and we assume the current exponential rate of technological improvement of computing resources will hold over a rather long period of time.”

With this in mind, Brown thinks AGI is certainly possible, but he’s not convinced that AGI will inevitably lead to runaway AI, and the singularity. “I think it massively overestimates the importance of intelligence in human progress, and underestimates the dull, gruelling work that is the science/technology feedback cycle,” says Brown.

In a paper presented at the 2012 Singularity Summit, researchers Dr Stuart Armstrong and Kaj Sotala from Oxford University’s Future of Humanity Institute studied the predictions of experts and non-experts, and found strong evidence that predictions from both groups are highly uncertain and unreliable.

So how far are we from the singularity? No one really knows.

*name changed
