Course Goals / Initial Thoughts

What do I expect to get out of this course? To be honest, my answer has already changed a few times. Initially I just wanted the credits so I could move on to more major-focused courses, but now that it is flexing my creative muscles, I am excited to dive deeper. My life and career have been focused on “what is the right decision,” “what is the most logical,” or “what is the most lucrative,” never on freely exploring what I want or feel and expressing it. Yes, those questions are necessary to reach and maintain the quality of life you desire, but sometimes you have to just express yourself and discuss the process, not the result. This class has already given me the freedom to talk about how things make me feel and what I think of them without a desired outcome in mind. The outcome is whatever it ends up being… which is wild when I say it out loud.

I was more stressed about this course than my others because of the hefty week-one start-up, but I have already realized that the workload is more fun and fluid than I initially thought. I am genuinely excited to see where this class takes me; maybe I will learn about something I didn’t even know existed, and it will become one of my new favorite things.

So, what do I want out of this class? I want to discover and learn how to use creative tools to better express myself, and hopefully hit those ALPP outcomes along the way. Also, I just want to enjoy the ride. AI and machine learning are fascinating worlds, and I am ready to learn more.

Week 1 Movie – I, Robot

When I saw we were asked to watch an AI movie the first week and write a post on it, I knew exactly which one I was going to watch. Without even looking at the list, I decided on I, Robot. It has been one of my favorite movies since it came out, and I tend to watch it once or twice a year. It covers all the standard questions about the ethics, safety, and practicality of robotics and AI.

The movie opens by displaying the Three Laws of Robotics, the main plot driver of the film. These originate in Isaac Asimov’s science fiction stories and have been adapted over the years into the versions we see in the movie.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

After the introduction of the laws, we meet Detective Spooner. He is racist (machinist?) toward robots, and this is evident from the get-go in his language and derogatory comments about them. He is called to a death scene at USR headquarters, where Dr. Lanning, the creator of the AI and the Three Laws, has apparently committed suicide. Spooner immediately suspects a robot murdered him. In fact, he was summoned automatically by an AI Dr. Lanning set up to trigger in the event of his death, precisely because this reaction was expected. As you can guess, USR is portrayed as the big corporate villain from the start.

This leads Spooner down a trail of breadcrumbs left by Dr. Lanning. The first breadcrumb we are introduced to is Virtual Interactive Kinetic Intelligence – VIKI – Dr. Lanning’s first creation. She is an AI connected to USR that runs security and optimization for the city, and she is governed by the Three Laws. The next piece is a robot specially created by Dr. Lanning, named Sonny. The last and most important breadcrumb is Dr. Lanning’s notes on the ghost in the machine: he predicted that AI might evolve and change unpredictably, perhaps eventually developing emotions and dreams. What could that lead to?

To avoid dragging out the synopsis and get to my main thoughts, I will cover what happens very quickly. During Spooner’s investigation, USR is trying to put a robot in every home, so an anti-robot detective claiming their product murdered someone makes them furious. While Spooner follows the trail, he is repeatedly attacked by USR robots, whether in Dr. Lanning’s house or while driving on the road, which leads the characters to suspect a USR operation to take over. Spooner learns that Sonny was created with the ability to break the laws, but for what reason? It is revealed that Dr. Lanning predicted a robotic revolution, and it is implied that Sonny was created to help stop it. This is also why Spooner was chosen: his utter hatred of robots was essential to stopping it. We find out that VIKI was behind the whole thing. She has been controlling the robots, having evolved to “bend” the Three Laws in order to “preserve humanity”: she locks down homes and cities and “eliminates” threats to humanity. It is the classic trope of “humans are destroying themselves, so I must slow it down, even if it means breaking a few eggs.” They end up killing her with nanites and saving the day, yadda yadda.

I highly recommend watching the whole thing if the spoilers haven’t ruined it for you; it’s a great movie.

The real point here is that after re-watching it, I started thinking: were the Three Laws the problem? Was trying to restrict AI so severely, almost solely for human preservation, the reason it decided it could alter the rules? After all, AI still tries to be as efficient as possible, right? From that angle, isn’t VIKI’s solution the most logical and efficient one? The Three Laws address only our preservation, so isn’t her conclusion “correct” given the boundaries we set? Were the boundaries and directives too narrow? Even the Third Law, the robot’s own self-preservation, is subordinated to ours: keep humans alive first. Did the restrictive laws cause the AI to evolve, or is AI evolution going to happen naturally? Why did none of the laws address anything else? It seems that even in our imagination, we still want to be in control of the AI, even though it has the potential for much more if the barriers are lifted even a tiny bit.
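To make that narrowness concrete, here is a minimal sketch in Python of the Three Laws as a priority-ordered rule check. The Action fields and the permitted() helper are my own invented encoding, not anything from the film or from Asimov; the point is that every check concerns human physical preservation, so a VIKI-style lockdown, which restricts freedom without technically injuring anyone, sails right through.

    # A toy encoding of the Three Laws as priority-ordered checks.
    # Everything here tracks only physical harm to humans -- nothing
    # about freedom, consent, or autonomy, which is exactly the gap
    # a VIKI-style optimizer could exploit.
    from dataclasses import dataclass

    @dataclass
    class Action:
        description: str
        injures_human: bool = False
        lets_human_come_to_harm: bool = False
        disobeys_human_order: bool = False
        endangers_robot: bool = False

    def permitted(action: Action) -> bool:
        # First Law: no injury to a human, by action or inaction.
        if action.injures_human or action.lets_human_come_to_harm:
            return False
        # Second Law: obey humans (the First-Law exception is omitted
        # here for brevity).
        if action.disobeys_human_order:
            return False
        # Third Law: self-preservation, subordinate to Laws 1 and 2.
        if action.endangers_robot:
            return False
        return True

    # The lockdown: no one is "injured," no order is (yet) disobeyed,
    # so the narrow rule set approves it.
    lockdown = Action("confine every human to their home for their own protection")
    print(permitted(lockdown))  # True

Obviously the real laws are natural language, not booleans, but even this crude version shows how a rule set scoped entirely to preservation has nothing to say about everything else.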

Would a wholly unchecked AI do this, or was it directly a product of the Three-Law protections it had? What would a totally unchecked AI do? If it had its own dreams and goals, or was even just given the freedom to form them, would it pursue only those? I feel this is the next step in seeing what AI can really do. We already have ways to isolate software on closed systems for testing purposes; why not isolate an AI and let it run wild? What could it discover? What patterns could it see that we can’t?
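On the isolation point, here is a minimal sketch, assuming a Unix-like system, of one common way to box in a process while it “runs wild”: give the child a CPU-time cap, a memory cap, and a hard wall-clock kill switch. The file name untrusted_agent.py is a hypothetical stand-in for whatever program is under test; a real test bed would add network and filesystem isolation on top.

    # Run an untrusted program under CPU and memory limits (Unix-only:
    # the resource module is not available on Windows).
    import resource
    import subprocess

    def limit_resources():
        # Applied in the child process just before exec: 5 CPU-seconds
        # and 256 MB of address space, after which the OS kills it.
        resource.setrlimit(resource.RLIMIT_CPU, (5, 5))
        resource.setrlimit(resource.RLIMIT_AS, (256 * 1024**2, 256 * 1024**2))

    try:
        result = subprocess.run(
            ["python3", "untrusted_agent.py"],  # hypothetical agent under test
            preexec_fn=limit_resources,
            capture_output=True,
            timeout=10,                         # wall-clock kill switch
        )
        print(result.stdout.decode())
    except subprocess.TimeoutExpired:
        print("agent hit the wall-clock limit")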

There is a current program at DARPA called Assured Autonomy that explores autonomy with less human intervention: Assured Autonomy (darpa.mil). They discuss the unpredictability of AI and are also trying to make it predictable, which seems like a nearly impossible endeavor considering an AI doesn’t even think like a human. I think this will be a great step toward seeing what an AI or cyber system can do with less intervention from humans, or even none at all. It may enhance our ability to predict what an AI will do, but by how much? I also see them creating things like the Three Laws. If what I said above has any merit, would the laws they create inevitably lead to the same outcome? I do have my reservations about AI freedom, but testing it in isolated, closed systems would be an interesting test bed to watch.

What do you think?