Tuesday, April 1, 2014

A is for Asimov and Artificial Intelligence



There is an undercurrent of fear in sci-fi when it comes to artificial intelligence.  AI is both a testament to the power of our own minds and a reminder of our shortcomings.  Artificial intelligence doesn’t suffer from frailty of body as humans do.  It is also frequently envisioned as lacking emotion, like Data in Star Trek.  This frightens us, because we do not know what an emotionless intelligence may be capable of doing.  In the Terminator franchise, for example, Skynet becomes self-aware, and the results are devastating for humanity.

Isaac Asimov was a prominent science fiction writer, perhaps best remembered for writing I, Robot and formulating the Three Laws of Robotics, which he introduced in the 1942 short story “Runaround”:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov later added the Zeroth Law, which takes precedence over the other three.

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

These laws establish parameters for how we would like any potential artificial intelligences to relate to us.  Knowing how powerful AI could possibly be, we want to make sure it won’t harm us.  We fear a loss of control when it comes to that which we create to serve our own needs.
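One way to see how the laws work as "parameters" is that they form a strict priority ordering: each law only applies when no higher law overrides it. A minimal sketch of that precedence check, in Python (the function and flag names here are invented for illustration and are not from Asimov):

```python
def evaluate_action(harms_humanity, harms_human, disobeys_order,
                    order_conflicts_first_law, endangers_self):
    """Return True if a proposed action is permitted, checking the
    laws in strict priority order: Zeroth, then First through Third."""
    if harms_humanity:
        return False  # Zeroth Law: humanity as a whole comes first
    if harms_human:
        return False  # First Law: no harm to an individual human
    if disobeys_order and not order_conflicts_first_law:
        return False  # Second Law: obey orders, unless obeying
                      # would violate the First Law
    if endangers_self:
        return False  # Third Law: self-preservation, lowest priority
    return True
```

Note that disobeying an order is permitted whenever the order itself conflicts with the First Law; that is the kind of interplay between laws that drives many of Asimov's robot stories.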

Image courtesy of Victor Habbick / FreeDigitalPhotos.net

There are a few questions that have always intrigued me about AI, and I hope they trigger some discussion.

  1. Could AI become so complex that we no longer recognize it as our own creation?
  2. Which would we actually find more intimidating: an emotionless artificial intelligence, or an AI that has grown so complex that it becomes practically indistinguishable from us in an emotional sense?
  3. If AI were to become so sophisticated that it has genuine emotion, is it unethical to force it to adhere to the laws of robotics?
  4. What responsibility do we hold toward an AI that we create?  What responsibility do we hope it would feel toward us?  How can we negotiate any conflicts that might arise?

Science fiction will surely continue to deal with these questions, and many more.  What questions does the existence of artificial intelligence raise for you? 


20 comments:

  1. I am a novice when it comes to science fiction but AI is a theme that is interesting to me and I had heard of the laws before. I always thought that the fear of AI came from the feared acquisition of emotions by AI beings or what may be perceived as human emotions. This would then confuse people who would then end up being divided in trying to understand what is morally right.

    If an emotionless entity does something wrong, it would be the programmer's mistake, wouldn't it? But if it carried out an act based on an emotion, wouldn't that give it more freedom of will? Emotions are less controllable after all... And that is where the fear lies.

    Great post by the way! Can't wait to see what I'll learn from reading your posts!

  2. Well-written questions. Curiously, with AI the questions are more ethical than technical when we think of the implications. Interesting! The third question was the most thought-provoking.

    GS at Moontime Tunes

  3. Did you ever see Colossus: The Forbin Project? It was an early (1970) movie about AI taking over the world. I saw it as a kid one Saturday afternoon and it freaked me out.

  4. I think humans need to put emotions on everything: pets, machines, cars.
    I'm so pleased I'm doing A to Z - I would never have found you without it
    http://aimingforapublishingdeal.blogspot.co.uk/
    Twitter: WriterBizWoman
    Great to connect

  5. Hello fellow blogger, I think your questions open up a can of worms, which of course they are meant to; it almost makes me shiver, though, thinking of them. Can you imagine a robot that is so emotionally advanced and intelligent that we as humans form attachments, which is ultimately what will happen because we are humans? They of course would have the upper hand, don't you think? It would be a dangerous world

  6. That's what made the Asimov "I, Robot" series so powerful. The laws seem simple, but are more complex when you actually think about them.

  7. Laws of nature no longer apply. The imagination alone can understand the limits of robotics. I guess the scarier thing is that we have arrived at that point.

  8. Fantastic! I always enjoy your A to Z posts. This year looks like it will be no exception.

    --
    Timothy S. Brannan
    The Other Side, April Blog Challenge: The A to Z of Witches

  9. I think AI with emotions would be both cool and scary at the same time.

    http://theramblingsofcharliebrown.blogspot.com

  10. I think humans have an inherent need to project our emotions into and onto non human things to make it easier for us to relate to them. When AI would become so advanced that they take on their own characteristics and emotions is where the laws get fuzzy. Where is then the line that would divide humans and humanity from AI? Moreover, what would specifically define someone/something as human or AI?

    Kris
    wtfwidow.blogsplot.com

  11. Emotions by themselves are neither good nor bad but are fickle & flighty. I'd rather it have virtue.

  12. These questions are all touched on in Ridley Scott's Blade Runner (or the Philip K. Dick story "Do Androids Dream of Electric Sheep?", from which it came). Things such as the film's replicants not even knowing they are AI, how Deckard may himself be AI, and AIs garnering emotions.

    These questions are part of why Blade Runner is one of my favourite movies. Do these AI deserve life and free will? I do not necessarily have any answers to these quandaries, but I thought I would chime in anyway. I have opinions. Yes, I think that if AI were to "grow" human emotions, it should have free will to do what it wishes. Of course, then it is no longer an it, but rather a he or a she. But maybe that's just me.

    Anyhoo, just wanted to stop by here on A to Z Day One, and say hiya. So...hiya.

    See ya 'round the web. All Things Kevyn

  13. Some interesting questions to provoke thought. In regards to question 3 (If AI were to become so sophisticated that it has genuine emotion, is it unethical to force it to adhere to the laws of robotics?)--The question itself begets more questions. Does this hypothetical AI have a full range of emotion, or is it limited? If bound by the laws of robotics in its core programming, then one could argue that the AI could never experience a full range of emotions, and the ethical dilemma would not be of concern. But if we look at the laws in a more judicial sense, meaning that AIs could be prosecuted if they broke them, then would they not just be in the same boat as humans? Able to choose whether or not to follow said laws?

    Hrm... much to ponder and explore.

  14. Good questions. What happens if....can be applied to so many events/creations.

  15. I absolutely love your post, because science, AI, and Asimov are some of my favorite things. I also love sci-fi, and even wrote today about a show that breaks all of the Asimov rules... making for a very scary world indeed! :)

    Random Musings from the KristenHead — A is for 'Almost Human' (and Action and Androids)

  16. There aren't any questions that Artificial Intelligence raises for me at this time, but after reading this post, I now want to watch "I, Robot" (the Will Smith movie) and "Minority Report" (the Tom Cruise movie) again.

    ~Nicole
    #atozchallenge Co-Host
    The Madlab Post

  17. I only dabble in sci-fi once in a while and have never heard of the laws of robotics. AI kind of scares me.

    KC @ The Occasional Adventures of a Hermit

  18. I love technology but I prefer an off switch. Great post!

  19. Those are very thought-provoking questions - I've never really thought too much about AI or what it could mean, but I think because of all the sci-fi movies I've seen, I'd be against creating beings with AI. But you never know what science will come up with next.

  20. This post reminds me of the many topics brought up in Battlestar Galactica. If anyone is interested in artificial intelligence and is a sci-fi fan...go watch BSG...you will not regret it!
    ~Katie
    www.thecyborgmom.blogspot.com
