There is an undercurrent of fear in sci-fi when it comes to artificial intelligence. AI is both a testament to the power of our own minds and a reminder of our shortcomings. Artificial intelligence doesn’t suffer from frailty of body as humans do. It is also frequently envisioned as lacking emotion, as with Data in Star Trek. This frightens us, because we do not know what an emotionless intelligence might be capable of doing. In the Terminator franchise, for example, Skynet becomes self-aware, and the results are devastating for humanity.
Isaac Asimov was a prominent science fiction writer, perhaps best remembered for writing I, Robot and for formulating the Three Laws of Robotics, introduced in the 1942 short story “Runaround.”
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov later added the Zeroth Law, which takes precedence over the other three.
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
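Read as a specification, the four laws form a strict priority ordering: each law applies only so far as it does not conflict with the laws above it. A minimal sketch of that ordering in Python follows; the action model and its field names are hypothetical illustrations, not anything from Asimov’s stories, and the “inaction” clauses are deliberately left out for simplicity.

```python
from dataclasses import dataclass

# Hypothetical model of a proposed robot action. The field names are
# illustrative assumptions; Asimov never formalized the laws this way.
@dataclass
class Action:
    harms_humanity: bool = False    # Zeroth Law concern
    harms_human: bool = False       # First Law concern
    ordered_by_human: bool = False  # Second Law concern
    endangers_robot: bool = False   # Third Law concern

def permitted(action: Action) -> bool:
    """Apply the laws in strict priority order: Zeroth > First > Second > Third."""
    if action.harms_humanity:       # Zeroth Law: never harm humanity
        return False
    if action.harms_human:          # First Law: never harm a human
        return False
    if action.ordered_by_human:     # Second Law: obey human orders
        return True
    if action.endangers_robot:      # Third Law: preserve itself
        return False
    return True

# A human order that would harm a human is refused (First overrides Second).
print(permitted(Action(harms_human=True, ordered_by_human=True)))    # False
# Obeying an order outranks self-preservation (Second overrides Third).
print(permitted(Action(ordered_by_human=True, endangers_robot=True)))  # True
```

The interesting cases in Asimov’s fiction are precisely the ones a simple check like this cannot capture: ambiguous harms, conflicting orders, and indirect consequences.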
These laws establish parameters for how we would like any potential artificial intelligences to relate to us. Knowing how powerful AI could possibly be, we want to make sure it won’t harm us. We fear a loss of control when it comes to that which we create to serve our own needs.
[Image courtesy of Victor Habbick]
There are a few questions that have always intrigued me about AI, and I hope they trigger some discussion.
- Could AI become so complex that we no longer recognize it as our own creation?
- Which would we actually find more intimidating: an emotionless artificial intelligence, or an AI that has grown so complex that it becomes practically indistinguishable from us in an emotional sense?
- If AI were to become sophisticated enough to have genuine emotions, would it be unethical to force it to adhere to the laws of robotics?
- What responsibility do we hold toward an AI that we create? What responsibility do we hope it would feel toward us? How can we negotiate any conflicts that might arise?
Science fiction will surely continue to deal with these questions, and many more. What questions does the existence of artificial intelligence raise for you?