There is an undercurrent of fear in sci-fi when it comes to artificial intelligence. AI is both a testament to the power of our own minds and a reminder of our shortcomings. Artificial intelligence doesn't suffer from frailty of body as humans do. It is also frequently envisioned as lacking emotion, like Data in Star Trek. This frightens us, because we do not know what an emotionless intelligence may be capable of doing. In the Terminator franchise, for example, Skynet becomes self-aware, and the results are devastating for humanity.
Isaac Asimov was a prominent science fiction writer, perhaps best remembered for writing I, Robot and formulating the Three Laws of Robotics, introduced in the 1942 short story "Runaround."
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The Zeroth Law was added later:
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
These laws establish parameters for how we would like any potential artificial intelligences to relate to us. Knowing how powerful AI could be, we want to make sure it won't harm us. We fear a loss of control over that which we create to serve our own needs.
Image courtesy of Victor Habbick / FreeDigitalPhotos.net
There are a few questions that have always intrigued me about AI, and I hope they trigger some discussion.
- Could AI become so complex that we no longer recognize it as our own creation?
- Which would we actually find more intimidating: an emotionless artificial intelligence, or an AI that has grown so complex that it becomes practically indistinguishable from us in an emotional sense?
- If AI were to become so sophisticated that it has genuine emotion, is it unethical to force it to adhere to the laws of robotics?
- What responsibility do we hold toward an AI that we create? What responsibility do we hope it would feel toward us? How can we negotiate any conflicts that might arise?
Science fiction will surely continue to deal with these questions, and many more. What questions does the existence of artificial intelligence raise for you?
I am a novice when it comes to science fiction, but AI is a theme that interests me, and I had heard of the laws before. I always thought the fear of AI came from the feared acquisition of emotions by AI beings, or what may be perceived as human emotions. This would confuse people, who would end up divided in trying to understand what is morally right.
If an emotionless entity does something wrong it would be the programmer's mistake, wouldn't it? But if it carried out an act based on an emotion, wouldn't that give it more freedom of will? Emotions are less controllable after all... And that is where the fear lies.
Great post by the way! Can't wait to see what I'll learn from reading your posts!
Well-written questions. Curiously, with AI the questions are more ethical than technical when we think of the implications. Interesting! The third question was the most interesting and thought-provoking.
GS at Moontime Tunes
Did you ever see Colossus: The Forbin Project? It was an early (1970) movie about AI taking over the world. I saw it as a kid one Saturday afternoon and it freaked me out.
I think humans need to put emotions on everything: pets, machines, cars.
I'm so pleased I'm doing A to Z - I would never have found you without it.
http://aimingforapublishingdeal.blogspot.co.uk/
Twitter: WriterBizWoman
Great to connect
Hello fellow blogger, I think your questions open up a can of worms, which of course they are meant to; it almost makes me shiver, though, thinking of them. Can you imagine a robot so emotionally advanced and intelligent that we as humans form attachments to it? That is ultimately what will happen, because we are humans. They, I think, would have the upper hand, don't you? It would be a dangerous world.
That's what made the Asimov "I, Robot" series so powerful. The laws seem simple, but are more complex when you actually think about them.
Laws of nature no longer apply. The imagination alone can understand the limits of robotics. I guess the scarier thing is that we have arrived at that point.
Fantastic! I always enjoy your A to Z posts. This year looks like it will be no exception.
ReplyDelete--
Timothy S. Brannan
The Other Side, April Blog Challenge: The A to Z of Witches
I think AI with emotions would be both cool and scary at the same time.
http://theramblingsofcharliebrown.blogspot.com
I think humans have an inherent need to project our emotions onto non-human things to make it easier for us to relate to them. The point where AI becomes so advanced that it takes on its own characteristics and emotions is where the laws get fuzzy. Where, then, is the line that would divide humans and humanity from AI? Moreover, what would specifically define someone/something as human or AI?
Kris
wtfwidow.blogsplot.com
Emotions by themselves are neither good nor bad, but are fickle & flighty. I'd rather it have virtue.
These questions are all touched on in Ridley Scott's Blade Runner (or the Philip K. Dick story, "Do Androids Dream of Electric Sheep?," from whence it came): things such as the film's replicants not even knowing they are AI, or how Deckard may himself be AI, AIs acquiring emotions, and the like.
ReplyDeleteThese questions are part of why Blade Runner is one of my favourite movies. Do these AI deserve life and free will? I do not necessarily have any answers to these quandaries, but I thought I would chime in anyway. I have opinions. Yes, I think that if AI were to "grow" human emotions, it should have free will to do what it wishes. Of course, then it is no longer an it, but rather a he or a she. But maybe that's just me.
Anyhoo, just wanted to stop by here on A to Z Day One, and say hiya. So...hiya.
See ya 'round the web. All Things Kevyn
Some interesting questions to provoke thought. In regards to question 3 (If AI were to become so sophisticated that it has genuine emotion, is it unethical to force it to adhere to the laws of robotics?)--The question itself begets more questions. Does this hypothetical AI have a full range of emotion, or is it limited? If bound by the laws of robotics in its core programming, then one could argue that the AI could never experience a full range of emotions, and the ethical dilemma would not be of concern. But if we look at the laws in a more judicial sense, meaning that AIs could be prosecuted if they broke them, then would they not just be in the same boat as humans? Able to choose whether or not to follow said laws?
Hrm... much to ponder and explore.
Good questions. What happens if....can be applied to so many events/creations.
I absolutely love your post, because science, AI, and Asimov are some of my favorite things. I also love sci-fi, and even wrote today about a show that breaks all of the Asimov rules... making for a very scary world indeed! :)
ReplyDeleteRandom Musings from the KristenHead — A is for 'Almost Human' (and Action and Androids)
There aren't any questions that Artificial Intelligence raises for me at this time, but after reading this post, I now want to watch "I Robot" (the Will Smith movie) and the "Minority Report" (the Tom Cruise movie) again.
ReplyDelete~Nicole
#atozchallenge Co-Host
The Madlab Post
I only dabble in sci-fi once in a while and have never heard of the laws of robotics. AI kind of scares me.
ReplyDeleteKC @ The Occasional Adventures of a Hermit
I love technology but I prefer an off switch. Great post!
Those are very thought-provoking questions. I've never really thought too much about AI or what it could mean, but because of all the sci-fi movies I've seen, I think I'd be against creating beings with AI. But you never know what science will come up with next.
This post reminds me of the many topics brought up in Battlestar Galactica. If anyone is interested in artificial intelligence and is a sci-fi fan... go watch BSG... you will not regret it!
ReplyDelete~Katie
www.thecyborgmom.blogspot.com