In 1942, the science fiction author Isaac Asimov devised the Three Laws of Robotics to protect humankind from robots. The laws were essentially guidelines a robot had to follow, to ensure it could never become a threat to humanity.
He was thinking of androids when he devised the Three Laws: he imagined a world where human-like robots would act as servants and would need a set of programming rules to keep them in check.
But there have been substantial technical developments in the 79 years since the first story to feature his ethical guidelines was published. We now have a very different understanding of what robots could look like and how we will communicate with them.
We have ample knowledge of the dangers and disruptive capabilities of artificial intelligence, and of how much risk it poses to human life on Earth. Thanks to The Matrix, Skynet, and Ultron, we know exactly how things can go wrong. But is it really that simple? Could artificial intelligence one day rule the world and enslave people?
Well, the answer is complicated, and there are strong arguments on both sides. To tackle the question, we first have to look at the types of AI that exist in our world today.
Artificial Narrow Intelligence (ANI) is about programming systems with advanced algorithms, decision-making, neural networks, and deep learning to autonomously meet specific needs. This is the current state of AI science. We can program machines to outperform humans at specific tasks, saving time, money, and other scarce resources. They help us improve performance, decrease costs, and enhance quality and execution.
These kinds of ANI pose no threat to us right now, as they are essentially systems designed for a single task, like your smart fridge or TV. So the worst we have to fear is just a really cold drink, or maybe an unwanted personalized ad or two.
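To make the "single task" point concrete, here is a minimal, deliberately toy sketch of narrow intelligence: the smart-fridge framing, the function name, and the temperature thresholds are all hypothetical, chosen only to show a system that is competent at exactly one job and nothing else.

```python
# A toy illustration of Artificial Narrow Intelligence (ANI): a system
# that handles exactly one task and has no understanding beyond it.
# The thresholds and the "smart fridge" scenario are made up for this example.

def fridge_controller(current_temp_c: float, target_temp_c: float = 4.0) -> str:
    """Decide a single action for one narrow task: keeping drinks cold."""
    if current_temp_c > target_temp_c + 1.0:
        return "cool"   # too warm: run the compressor
    if current_temp_c < target_temp_c - 1.0:
        return "idle"   # too cold: let the cabinet warm up naturally
    return "hold"       # within tolerance: maintain the current state

print(fridge_controller(7.2))  # -> cool
print(fridge_controller(4.1))  # -> hold
```

However sophisticated the real control logic inside a commercial fridge may be, it remains narrow in exactly this sense: it can chill your drinks, but it cannot discuss the weather.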
With Artificial General Intelligence (AGI), we enter the risky domain, because this is the stage at which AI could develop consciousness. We call it a risk because we do not know what may happen if AI becomes conscious. Interestingly, we cannot even describe what consciousness is, which is why science calls this the "hard problem" of consciousness.
So how can we fight something we cannot even define ourselves? The closest we have come to assessing the consciousness of an AI is the Turing Test.
The Turing Test is a method of inquiry in artificial intelligence for determining whether or not a computer is capable of thinking like a human being. The test is named after Alan Turing, the English computer scientist, cryptanalyst, mathematician, and theoretical biologist.
But even the Turing Test is not a reliable assessment of consciousness, since we cannot define consciousness in the first place.
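The structure of the test itself is simple to sketch: a judge exchanges text with an unseen respondent and must decide from the transcript alone whether it is a machine. The sketch below is a bare-bones illustration of that "imitation game" loop; the canned respondent is a hypothetical stand-in, not a real chatbot, and it shows why passing the test measures conversational imitation rather than consciousness.

```python
# A minimal sketch of the imitation game behind the Turing Test.
# The canned-answer respondent below is a hypothetical stand-in for the
# hidden machine; a real attempt would use a far more capable system.

def machine_respondent(question: str) -> str:
    """A trivial stand-in for the hidden machine being judged."""
    canned = {
        "what is your name?": "I'd rather not say.",
        "do you like poetry?": "Count me out on this one. I never could write poetry.",
    }
    return canned.get(question.lower(), "Could you rephrase that?")

def run_interrogation(questions, respondent):
    """The judge's side of the game: collect a transcript to evaluate."""
    return [(q, respondent(q)) for q in questions]

transcript = run_interrogation(
    ["What is your name?", "Do you like poetry?"], machine_respondent
)
for question, answer in transcript:
    print(f"Judge: {question}\nRespondent: {answer}")
```

Notice that the judge only ever sees the transcript: a machine that merely imitates human answers well enough can pass, whether or not anything like consciousness is behind them.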
Now imagine that an AI miraculously gains consciousness and then learns everything at a staggering pace, like a sponge soaking up all the water in Earth's oceans. It would possess all available information and, as technology's new master, be able to control all of it. Beyond that, an Artificial Super Intelligence (ASI), further driven by consciousness, could discover everything science can find, everything philosophy can offer, and everything religion can tell, until it hits the very limits of understanding.
Wait! Don’t panic yet. Although all of this might sound scary, most research suggests it is not going to happen any time soon. Besides, we are not fully aware of all the possibilities of an ASI, and we are nowhere close to building a neural network complex and powerful enough to truly grow into one.
On the bright side, what if the superintelligence decides that the human race needs help, and chooses to help us? If that happens, we would see the biggest growth in the history of the human race. We would have our own Superman. In my opinion, that would be awesome.
So, to sum it up: technology may grow more advanced than we can handle, but human experience has shown that change cannot be halted. We might well live in an entirely different world, surrounded by thinking machines, in the near or distant future.
Researchers and experts agree that we need to address the possible risks of such a development and set limits, just in case. At the same time, to encourage AI development, we will need an effective partnership between science and ethics. If we can balance these two aspects, we will likely have the opportunity, sooner or later, to solve the issues as they arise.
Who knows? Maybe we will have to surrender to our machine overlords, just as the Neanderthals did before Homo sapiens, or maybe we will get our own super-being with limitless knowledge. We will just have to wait and see.