Ex Machina

I re-watched Ex Machina last night. Good movie. It addresses questions of artificial intelligence in general, and the possibility of consciousness in machines, and its consequences, in particular. If you’re interested in these issues, it’s worth watching. Unlike the multitude of movies in this area, it’s scientifically and technically solid. OK, there’s just one thing I will never understand: why do robots in movies have lights on or inside their bodies?!

In short, the film tackles the question of whether a human could experience a humanoid-looking robot as human, and whether the robot could convince the human to do its bidding. Connected to this is the question of whether the robot might actually have conscious experience, rather than just acting as if it had one. There are just three main characters: Ava, the robot; Nathan, who designed and built Ava; and Caleb, who conducts the conversations with her. These conversations are pretty dense, and it’s easy to miss important statements, or reactions, if you don’t have at least a basic understanding of the issues at hand.

There’s the famous Turing test, which tries to answer the question of whether a specific machine is indistinguishable from a human. It is based on a conversation via text messages between a human judge and a counterpart, where the judge does not actually see the counterpart, and does not even know whether they are talking to another human or a computer. If the judge cannot determine whether the counterpart is a human or a machine – that is, if the machine behaves exactly like a human – the machine has passed the test and counts as having human-level intelligence. Of course, the judge would ask probing questions and analyse the answers. In the movie, it’s obvious that Ava passes the Turing test with flying colours. She is able to engage in a personal conversation and even make jokes, and she shows emotions, both about herself and about the questioning human, Caleb.
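To make the setup concrete, here is a toy sketch in Python – my own illustration, not anything from the film or a standard implementation. A judge exchanges text messages with a randomly chosen, hidden counterpart and must then guess whether it was a human or a machine:

    import random

    # A toy, purely illustrative sketch of the classic Turing test setup
    # (all names here are my own, not from the film or a real library).

    def human_reply(message: str) -> str:
        # Stand-in for a hidden human typing an answer.
        return input(f"[hidden human, please answer] {message}\n> ")

    def machine_reply(message: str) -> str:
        # Stand-in for the machine under test; a real AI would go here.
        return "That is an interesting question. Why do you ask?"

    def run_turing_test(rounds: int = 5) -> None:
        # The judge converses via text only and never sees the counterpart.
        counterpart = random.choice([human_reply, machine_reply])
        for _ in range(rounds):
            question = input("Judge, ask a probing question:\n> ")
            print("Answer:", counterpart(question))
        guess = input("Judge, was that a human or a machine? ")
        actual = "human" if counterpart is human_reply else "machine"
        # The machine passes if the judge cannot reliably tell it apart.
        print(f"It was a {actual}; you guessed: {guess}.")

    run_turing_test()

The point of the blinding is that only the conversation counts: appearance, voice, and everything else is taken out of the equation.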

As Caleb actually knows, and sees, that Ava is a robot, albeit a smart and pretty one, this is a sort of extended, or advanced, Turing test. He clearly accepts Ava as a human counterpart. Ava has been locked in a room for all of her existence, and Caleb speaks to her separated by a glass wall. When she expresses the wish to escape her prison, he is compassionate, just as one would be for a human locked in a room for a lifetime. Of course she is aware that she is a robot, and is thus afraid of being only a single specimen, one version in a longer chain of technical development steps, which would eventually be replaced by a better, improved version. She is afraid of what would happen to her then. As Nathan explains, he would download her mind and improve it with new software modules, erasing her memory in the process. Ava understands that she as an individual – with all the experiences and memories of all sorts (knowledge, emotions, thoughts, etc.) accumulated during her existence – would cease to exist. It would be like death. Again, Caleb understands her fears just as if she were human. He decides to help her escape, though not exactly with the result he planned.

So as not to spoil the plot, let me stop here and offer a few remarks and observations. Ava appears to be conscious, with memories, emotions, a deep understanding of her perceptions, and self-consciousness. However, it’s impossible to say whether she really is, or is just programmed to evoke the perception of consciousness in Caleb. Consciousness is about subjective experience – what it feels like to be something, or someone. An artificial intelligence (AI) could be programmed to speak about emotions and so on without actually feeling them. Ava surely convinces Caleb that she suffers from her confined existence in her room, and from the prospect of being reprogrammed and losing all her memories. However, this could be just advanced information processing, based on lifeless perceptions, data collection, and corresponding machine states. The Turing test is not meant to detect consciousness, just intelligence. I think the end of the movie hints at Ava not actually being conscious, as otherwise she would show a rather psychopathic mind – which of course is possible as well, given her upbringing.

But as of today, no-one really knows how consciousness is created in sentient beings. It appeared at some point in evolution. Probably all mammals have it, possibly some birds and the octopus. Maybe it’s just based on certain patterns of highly integrated information processing, but maybe there is more to it. It is still unclear whether an AI could develop consciousness, and what the corresponding conditions would be. I have heard arguments that life is necessary for a conscious mind, so that even if all the information processing our brain can do could also be done in silicon – that is, even if human-level AI is possible – consciousness would never emerge in the latter. But there are arguments to the contrary.

However, let’s not make the mistake of assuming that an AI-based consciousness would be anything like a human one. Heck, we don’t even have any idea what non-human animals’ consciousness is like. We don’t know, cannot know, what it is like to be a dog, or a bat, or a cow. In particular, we don’t know how animals suffer, for instance in factory farming. It always strikes me as completely odd when one hears about regulations to ensure “humane treatment of animals”. Sure, applying humane standards to animals is good, but given our limited knowledge it’s the least we can do, not the best. Thinking forward, to when we will have AIs everywhere in our lives – be it embodied in moving robots or just stationary in a computer rack – how do we make sure such an AI does not suffer? Think of Ava, in case she is actually conscious: locked in her room, fearing death by erasure. Those are human emotions – but might there be types of suffering specific to her kind which we humans cannot even begin to understand? Do we have the right to build and use AIs for our purposes if we have no idea how, whether, or when consciousness emerges? Wouldn’t it be monstrous to create a suffering artificial creature? I hope consciousness research will have answers before AIs reach the level of sophistication where they are potentially conscious, though I am afraid the tech industry will march forward without giving this aspect much thought.

Say you had an elderly-care robot in the future: would you switch it off if you knew, or at least suspected, that it fears exactly that?

Watch the movie. It’s thought-provoking. Trailer:


on YouTube
  • Note: YouTube and Google inject extensive tracking into embedded videos, hence please watch directly on their website. Not that they don’t track you there, mind you, but I don’t want to contribute to their racket from this site. It would also violate my privacy policy.