According to Stephen Hawking, Stuart Russell, Max Tegmark, and Frank Wilczek, writing in The Independent, artificial intelligence, or AI, as depicted in the new movie Transcendence, is not just the stuff of science fiction. They argue that AI research is progressing rapidly and that its long-term potential is nearly limitless:

Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.

AI technology, they note, is already being used to develop autonomous weapon systems that can select and eliminate targets (the UN, meanwhile, is weighing measures to restrict them). Could this powerful technology, they ask, advance to the point of being able to improve upon itself? In the end, it all comes down to who controls the machines. The article’s authors acknowledge the benefits of AI but warn that not enough serious consideration is being given to its risks. So at what point in science do we stop the “because we can, we will” mentality?

Give us your thoughts. Is this just science fiction, or is it something to be taken seriously? What do you make of such advancements in science?

Image Credit: Maitri