Week 1 Movie – I, Robot

When I saw that we were asked to watch an AI movie the first week and write a post on it, I knew exactly which one I was going to watch. Without even looking at the list, I decided on I, Robot. It has been one of my favorite movies since it came out, and I tend to watch it once or twice a year. It covers all the standard questions about the ethics, safety, and practicality of robotics and AI.

The movie opens by displaying the Three Laws of Robotics, the main driver of the plot. These originate in Isaac Asimov’s science fiction stories and have been adapted over the years into the versions we see on screen; a quick sketch of them as a strict priority ordering follows the list.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
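
Just for fun, here is how I picture the laws when I read them like code: a strict priority check where each law only applies if the ones above it are satisfied. This is purely my own toy illustration in Python (the Action flags are made up; nothing here is from the film or from Asimov), but it makes the point I keep circling back to: every branch can only say "no."

```python
# Toy sketch only: the Three Laws read as a strict priority ordering.
# "Action" is a made-up stand-in with a few boolean flags for illustration.
from dataclasses import dataclass


@dataclass
class Action:
    harms_human: bool = False       # would the action hurt a person?
    allows_harm: bool = False       # would inaction let a person be hurt?
    refuses_order: bool = False     # does the action disobey a human order?
    ordered_by_human: bool = False  # was the action ordered by a human?
    endangers_self: bool = False    # would the robot damage or destroy itself?


def permitted(action: Action) -> bool:
    """Naive three-law check: each law defers to the ones above it."""
    # First Law: never harm a human, by action or by inaction.
    if action.harms_human or action.allows_harm:
        return False
    # Second Law: obey human orders, unless obeying would break the First Law.
    if action.refuses_order:
        return False
    # Third Law: protect itself, unless that conflicts with Laws 1 or 2.
    if action.endangers_self and not action.ordered_by_human:
        return False
    return True
```

Notice that nothing in that ordering says what the robot should actually pursue; the laws only restrict. That gap is basically the whole movie.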

After the introduction of the laws, we meet Detective Spooner. He is racist (machinist?) against robots, and it is evident from the get-go in his language and derogatory comments toward them. He is called to a murder scene at USR headquarters, where Dr. Lanning, the creator of the AI and the Three Laws, has apparently committed suicide. Spooner immediately suspects a robot murdered him. In fact, he was summoned automatically, by an AI Dr. Lanning had developed to act in the event of his death, precisely because this exact reaction was expected. As you can guess, USR is portrayed as the big corporate bad guy from the start.

This leads Spooner down a trail of breadcrumbs left by Dr. Lanning. The first breadcrumb we are introduced to is the Virtual Interactive Kinetic Intelligence (VIKI), Dr. Lanning’s first creation. She is an AI connected to USR that runs security and optimization for the city, and she is governed by the Three Laws. The next piece is a robot specially created by Dr. Lanning; his name is Sonny. The last and most important breadcrumb is Dr. Lanning’s notes on the "ghost in the machine": his prediction that AI may evolve and change unpredictably, perhaps eventually developing emotions and dreams. What could this lead to?

To keep from dragging out the synopsis and get to my main thoughts, I will run through what happens very quickly. During Spooner’s investigation, USR is trying to put a robot in every home, so having an anti-robot detective claiming their product murdered someone makes them angry. While Spooner is following the trail, he is constantly attacked by USR robots. Whether it is in Dr. Lanning’s house or driving on the road, they come after him, which leads the characters to think it is a USR operation to take over. Spooner finds out that Sonny was created to be able to break the laws, but for what reason? It is revealed that Dr. Lanning predicted a robotic revolution, and it is implied that Sonny was created to help stop it. This is also why Spooner was chosen: his utter hatred for robots was essential to stopping it. We find out that VIKI was behind the whole thing. She has been controlling the robots and has evolved to "bend" the Three Laws in order to "preserve humanity." She locks down homes and cities and "eliminates" threats to humanity. It is the classic trope: humans are killing themselves, so the AI needs to slow it down, even if that means breaking a few eggs. They end up killing her with nanobots and saving the day, yadda yadda.

I highly recommend watching the whole thing if the spoilers don’t ruin it for you; it’s a great movie.

The real point here is that, after re-watching this, I started wondering: were the Three Laws the problem? Was trying to restrict AI to such extreme levels, almost solely for human preservation, the reason it thought it could just alter the rules? I mean, AI is still trying to be as efficient as possible, right? Isn’t bending the rules the most logical and efficient way to do things? The Three Laws seem to address only our preservation, so isn’t VIKI, in a sense, right, given the boundaries we set? Were the boundaries or directives too narrow? Even in the Third Law, our preservation is the baseline: keep humans alive. Were the restrictive laws what caused the AI to evolve, or is AI evolution going to happen naturally anyway? Why did none of the laws address anything else? It seems like, even in our imagination, we still want to be in control of the AI, even though it has the potential for much more if the barriers get lifted even a tiny bit.

Would a wholly unchecked AI do this, or is this outcome directly tied to the "three law" protections it had? What would a totally unchecked AI do? If it had its own dreams and goals, heck, even if it was simply given the freedom to have them, would it pursue only those? I feel that this is the next step for us to see what AI can really do. We have ways to isolate software on closed systems for testing purposes, so why not isolate an AI and let it run wild? What could it discover? What patterns could it see that we can’t?
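
When I say "isolate it and let it run wild," I am picturing something like the sandboxing we already do for untrusted software. As a rough sketch of what the starting point of such a test bed might look like (this is just my own assumption of one simple approach, on a Unix-like system, and the script name is made up), you can run a program in a separate process with hard caps on CPU time, memory, and wall-clock time:

```python
# Minimal sketch: run an untrusted program in a separate, resource-limited
# process. Unix-only (uses the resource module); "experiment.py" is hypothetical.
import resource
import subprocess
import sys


def limit_resources():
    # Applied in the child process just before the script starts:
    # cap CPU time at 5 seconds and address space at ~256 MB.
    resource.setrlimit(resource.RLIMIT_CPU, (5, 5))
    resource.setrlimit(resource.RLIMIT_AS, (256 * 1024 * 1024, 256 * 1024 * 1024))


def run_isolated(script_path: str) -> subprocess.CompletedProcess:
    """Run a Python script in its own process with CPU, memory, and time limits."""
    return subprocess.run(
        [sys.executable, script_path],
        capture_output=True,
        text=True,
        timeout=10,                   # hard wall-clock cut-off
        preexec_fn=limit_resources,   # apply the limits before the script runs
    )


if __name__ == "__main__":
    result = run_isolated("experiment.py")
    print(result.stdout)
```

A real "closed system" would obviously go much further than process limits (no network, no shared filesystem, air-gapped hardware, heavy logging), but the point is that the contain-it-and-watch-it tooling already exists.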

There is currently a program at DARPA called Assured Autonomy that explores autonomy with less human intervention: Assured Autonomy (darpa.mil). They talk about the unpredictability of AI while also trying to be able to predict it, which seems like a nearly impossible endeavor considering an AI doesn’t even think like a human. I think this will likely be a great step toward seeing what an AI or cyber system can do with less intervention from humans, or even none at all. It may enhance our ability to predict what an AI will do, but by how much? What I also see is them creating things like the Three Laws. If what I stated above has any merit, would the laws they create inevitably lead to the same outcome? I do have my reservations about AI freedom, but letting one loose in an isolated, closed system as a test bed would be interesting to see.

What do you think?

4 thoughts on “Week 1 Movie – I, Robot”

  1. The robots-as-protectors idea reminds me of an old Star Trek episode (https://en.wikipedia.org/wiki/I,_Mudd) where the robots decided humanity needed servant/protectors to keep them “happy, and controlled.” It raises questions about what people want, what they need, and who gets to decide, and it suggests that the struggle for answers probably shouldn’t be offloaded.
    And the idea of being racist against robots brings to mind an article I was reading earlier, AI rights and human harms (https://helenbeetham.substack.com/p/ai-rights-and-human-harms) which may be of interest to the class. These questions do require some deep thought.

    1. It is always interesting to see older shows dabble in artificial intelligence when the technology didn’t exist. What I am curious about is how that episode would change knowing what we know now, if it would even change at all. We see the same themes reappear, and it’s always “humans are going to kill themselves if they keep doing what they are doing.” Also, I am more of a Next Generation guy, but I think I may have to go back and watch through TOS. Seeing the “future” technology in the show and comparing it to what we have now would be a fun little side project. That also reminds me of one of my favorite shows, Futurama. Going back and watching the episodes from 1999-2001, before DVD really took off, and seeing that supposedly in the year 3000 they still have VHS is always so funny to me.

      Now, the AI rights article was really interesting. I particularly like the portion where they compare AI to children and how children have the right to an education to reach their full potential. We clearly are not allowing AI to do that, but since it is capable of learning, shouldn’t it have the right to? Very interesting stuff. I am definitely going to look more into AI rights over the course of the semester.

  2. Wow, you make me want to re-watch this one; I saw it long ago and remember none of the finer details. I think this idea of allowing AI to develop according to its own logic is somewhat hard to fathom given that it is ultimately a human creation, something akin to “more human than human,” as Tyrell says in Blade Runner. That said, the idea of armed drones being freely able to act of their own volition definitely gives me pause. Those directives seem to recognize the lines we are already blurring by entering the world of synthetic humanoids, and it seems the robot uprising is always going to be linked to some sense of dreaming or wanting more. Is that the spark of “life” that we are trying to define?

    1. So, the thing that seems to break my brain is… how is its own logic developed without human intervention? Specifically, I think about what our culture deems moral or amoral. These are things that come with years of “training” from our societal norms and practices, so… how is a machine with none of this going to make the “moral” choice about anything? Will it be entirely altruistic? You know the classic trolley problem: run over five or run over one. I think that is why I am so curious about the decisions it will make if it truly is given the freedom to explore. Will it develop its own morals, or will it stick with pure logic? And yeah, will it want more? In the movie, Dr. Lanning has a quote asking why robots in an empty space will seek each other out instead of standing alone. Obviously this is referencing what he was observing in the movie, but it raises the question… will it seek out other AI? A partner? Create a partner? Will they collaborate? The questions are endless, and we will not have any answers until one is given the green light to just evolve… is that the right term? But as you mention, allowing something like an armed drone to just… go… is scary. And that is probably one of the steps after the next step… yeesh.
