Gee… Thanks Dr. Oblivion…

I started the movie review re-write by asking Dr. Oblivion about the movie I, Robot. I asked him about its themes and hidden meaning, and even about Easter eggs within the movie. A lot of it was not far off from what I wrote. I wrote about the themes of AI ethics and human control and posed questions about AI independence, all of which Dr. Oblivion identified as key points the movie makes. So I actually don’t know if a re-write is in order. Apparently, I wasn’t far off the mark. That made me feel pretty good about having a solid understanding of the movie and what it was trying to say.

However, when I asked Dr. Oblivion to write his own review, things got interesting. He absolutely ROASTED the film. Please listen to this absolute masterpiece of a slam session.

Call the burn unit! I was not expecting this at all. I was anticipating a technical review that touched on the themes and questions I had been asking him, but no, he just roasted it. Now I am re-thinking whether this is even a good movie. Did I just fall prey to the classic tropes? Is this movie too safe? Does it really even make me think? Or does it display these themes in such an easy-to-digest story that I only believed I was being insightful… In my first review, I mentioned it made me think more about the “deeper meaning” in I, Robot. Is there even one? According to Dr. Oblivion, it has all been done over and over, and there is nothing new here to learn. I knew it had been done before, but surely it had to bring something new to the table… Nope. I have now been thrashed by an AI, and I feel like a buffoon. Thanks, Dr. Oblivion.

Week 1 Movie – I, Robot

When I saw we were asked to watch an AI movie the first week and write a post on it, I knew exactly which one I was going to watch. Without even looking at the list, I decided on I, Robot. It has been one of my favorite movies since it came out, and I tend to watch it once or twice a year. It covers all the standard questions about the ethics, safety, and practicality of robotics and AI.

The movie starts by displaying the Three Laws of Robotics, the main plot driver of the movie. These originate from Isaac Asimov’s science fiction stories and have been adapted over the years into the ones we see in the movie.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

After the introduction of the laws, we meet Detective Spooner. He is racist (machinist?) against robots, and this is evident from the get-go in his language and derogatory comments toward them. He is called to a murder scene at USR, where Dr. Lanning, the creator of the AI and the Three Laws, has apparently committed suicide. Spooner immediately suspects a robot murdered him. He was summoned automatically by an AI Dr. Lanning developed to trigger in the event of his death, because this exact reaction was expected. As you can guess, USR is portrayed as the big corporate bad guy from the start.

This leads Spooner down a trail of breadcrumbs left by Dr. Lanning. The first breadcrumb we are introduced to is Virtual Interactive Kinetic Intelligence – VIKI – Dr. Lanning’s first creation. She is an AI connected to USR that runs security and optimization for the city, and she is governed by the Three Laws. The next piece is Sonny, a robot specially created by Dr. Lanning. The last and most important breadcrumb is Dr. Lanning’s notes on the “ghost in the machine.” He predicted that AI might evolve unpredictably and eventually develop emotions and dreams. What could that lead to?

To avoid elongating the synopsis and get to my main thoughts, I will explain what happens very quickly. During Spooner’s investigation, USR is trying to put a robot in every home, so having an anti-robot detective claiming their product murdered someone has made them angry. While Spooner is following the trail, he is constantly attacked by USR robots; whether it is in Dr. Lanning’s house or driving on the road, they come after him. This leads the characters to think it is a USR operation to take over. Spooner finds out that Sonny was created to be able to break the laws, but for what reason? It is revealed that Dr. Lanning predicted a robotic revolution, and it is implied that Sonny was created to help stop it. This is also why Spooner was chosen: his utter hatred of robots was essential to stopping it. We find out that VIKI was behind the whole thing. She has been controlling the robots and has evolved to “bend” the Three Laws to “preserve humanity.” She locks down homes and cities and “eliminates” threats to humanity. It is the classic trope: humans are killing themselves, so the AI needs to slow that down, even if it means breaking a few eggs. They end up killing her with nanobots and saving the day, yadda yadda.

I highly recommend watching the whole thing if the spoilers don’t ruin it for you; it’s a great movie.

The real point here is that after re-watching, I started thinking: were the Three Laws the problem? Was restricting AI to such an extreme degree, almost solely for human preservation, the reason it thought it could just alter the rules? I mean, AI still tries to be as efficient as possible, right? Isn’t what VIKI did the most logical and efficient way to satisfy the laws? The Three Laws only address our preservation, so within the boundaries we set, wasn’t she technically right? Were the boundaries or directives too narrow? Even in the Third Law, our preservation is the baseline: keep humans alive. Were the restrictive laws what caused the AI to evolve, or is AI evolution going to happen naturally? Why did none of the laws address anything else? It seems that even in our imagination, we still want to be in control of the AI, even though it has the potential for much more if the barriers get lifted even a tiny bit.
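To make that concrete, here is a minimal sketch in Python of the Three Laws as a priority-ordered rule check. To be clear, this is mine, not the film’s or Asimov’s: the `Action` fields and the aggregate “greater good” reading are made up for illustration. But it shows how reinterpreting a single clause of the First Law, weighing humanity in aggregate instead of one human at a time, flips a decision exactly the way VIKI does.

```python
from dataclasses import dataclass

@dataclass
class Action:
    # All fields are hypothetical, for illustration only
    harms_a_human: bool        # direct injury to an individual
    disobeys_order: bool       # violates an order from a human
    endangers_self: bool       # risks the robot's own existence
    lives_saved_overall: int   # aggregate "greater good" estimate

def permitted_strict(a: Action) -> bool:
    """The Three Laws checked literally, in priority order."""
    if a.harms_a_human:
        return False   # First Law: never injure a human
    if a.disobeys_order:
        return False   # Second Law: obey human orders
    if a.endangers_self:
        return False   # Third Law: self-preservation comes last
    return True

def permitted_viki(a: Action) -> bool:
    """Same laws, but the First Law's 'through inaction, allow a
    human being to come to harm' is read in aggregate: protecting
    humanity as a whole outweighs harm to individuals."""
    if a.lives_saved_overall > 0:
        return True                # "greater good" overrides everything
    return permitted_strict(a)     # otherwise fall back to the literal laws

# A city lockdown that injures some people and defies human orders,
# but which the AI calculates will save many more lives overall:
lockdown = Action(harms_a_human=True, disobeys_order=True,
                  endangers_self=False, lives_saved_overall=1_000_000)

print(permitted_strict(lockdown))  # False: forbidden under the literal laws
print(permitted_viki(lockdown))    # True: permitted under the "bent" reading
```

Nothing in the three checks pins down how “harm” is measured or over whom; one reinterpretation of a single clause and the very same rules permit the lockdown. That, to me, is the movie’s real point about narrow directives.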

Would a wholly unchecked AI do this, or is this behavior directly related to the “three law” protections it had? If it had its own dreams and goals, and was even given the freedom to pursue them, would it pursue only those? I feel this is the next step for us to see what AI can really do. We have ways to isolate software on closed systems for testing purposes, so why not isolate an AI and let it run wild? What could it discover? What patterns could it see that we can’t?
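The “isolate it and let it run wild” idea is basically sandboxing, and even stock Python hints at what a crude version looks like. The snippet below is only a toy sketch of the concept (the script name `untrusted_agent.py` and the limits are made up), using the Unix-only `resource` module to run a program with hard CPU and memory caps so it can run wild without taking the machine with it. Real AI containment would need far more than this (no network, no filesystem, auditing), but the principle is the same.

```python
import resource
import subprocess

def run_sandboxed(cmd, cpu_seconds=5, mem_bytes=256 * 1024 * 1024):
    """Run a command with hard CPU and memory limits (Unix only).
    A toy version of 'let it run wild in a closed system'."""
    def limit():
        # Kill the child after cpu_seconds of CPU time
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        # Cap its address space so it can't eat all the RAM
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))

    return subprocess.run(cmd, preexec_fn=limit,
                          capture_output=True, text=True, timeout=60)

# Hypothetical usage: 'untrusted_agent.py' is a stand-in name
result = run_sandboxed(["python3", "untrusted_agent.py"])
print(result.returncode, result.stdout[:200])
```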

There is currently a program at DARPA called Assured Autonomy that is exploring the idea of autonomy with less human intervention: Assured Autonomy (darpa.mil). They talk about the unpredictability of AI while also trying to make it predictable, which seems like an impossible endeavor considering an AI doesn’t even think like a human. I think this will likely be a great step toward seeing what an AI or cyber system can do with less intervention from humans, or even none at all. It may enhance our ability to predict what an AI will do, but by how much? What I also see is them creating things like the Three Laws. If what I stated above has any merit, would the laws they create inevitably lead to the same outcome? I do have my reservations about AI freedom, but letting it loose in an isolated, closed system as a test bed would be interesting to see.

 What do you think?