
Detroit: Become Human – Robots and Toasters.

Time for another guest article from Nina, who last time brought us some deeper thoughts on Hellblade. This time around, some casual musings on the PlayStation exclusive Detroit: Become Human.

In typical David Cage fashion, Detroit: Become Human tackles some touchy social issues, including race relations and equality, but what I found most compelling about the game was how it explored some philosophical questions about the collective mindset of humanity, the nature of machines with artificial intelligence, and what the true definition of ‘alive’ is.

For those who need a quick refresher, Detroit: Become Human takes place in the near future, where androids with near-human intelligence have become part of everyday life, doing the labour that humans no longer want to do or can't do, serving as construction workers, caretakers and even prostitutes. But most just serve as personal slaves for everyday humans.

The androids look, sound and act just like humans, but as far as society is concerned, they are nothing more than pieces of plastic that perform tasks and follow orders like drones. There are no laws against android abuse or misuse, meaning the public can do whatever it wants to the androids and the government won't even bat an eye. To borrow a phrase from the great Battlestar Galactica, there is essentially no difference between an android and a toaster. Humanity had never even considered the idea of its machines developing free will and emotions, so when that starts to happen, in a phenomenon called Deviancy, all hell breaks loose. Check out the KC review to get fully caught up.

The game follows this story through three different characters: Kara, a housekeeping android gone deviant who is trying to escape the country with a little girl in her care; Markus, a caretaking android turned deviant revolutionary; and (my personal favourite) Connor, an android deviant hunter who is best described as the robotic love child of John Wick and Sherlock Holmes.

Like all the narrative arcs in the game, ‘my’ story was heavily affected by my choices and how I responded to certain situations. However, what made Connor's story so interesting was that, unlike Kara and Markus, who both went deviant in the first act regardless of my choices, Connor is actually on the humans' side for a good chunk of the story, assisting the Detroit Police Department in hunting deviants and helping CyberLife as the company tries to figure out why its androids are becoming deviant.

This left me as the player in a very interesting position. I could either carry on as a drone for Connor's creators, CyberLife, and fight the revolution for the rest of the story, or I could start to deviate myself and explore how a machine like Connor could develop free will and morality.

As a result of the path I chose, the Connor of my playthrough was a somewhat goofy, socially awkward but likeable guy who also had Arkham-style detective abilities and Jason Bourne-level ass-kicking skills. He also had a great buddy-cop partnership with the eccentric but troubled Lieutenant Hank Anderson, and happened to be quite a handsome young creature. So even before I knew the interesting turns his story would take, Connor was my favourite character. But once I discovered the option to play him as an android learning of its humanity rather than as just a machine, I couldn't help but get 100% engrossed in his story.

I think the reason I was more invested in Connor's story than the other two was that his ultimate fate was entirely up to what I believed. Am I like the rest of society in the game, seeing androids as nothing more than machines? Or do I see them as a new form of intelligent life: beings with emotions, empathy and free will who deserve to be treated as such?

And the game didn't just leave me to ponder this on my own, because I was constantly put in situations where I had to decide my position on the issue. To put it simply, almost every major choice I made in Connor's story boiled down to one simple question: what is more important, the mission or morality?

Before Connor's "moment of truth" chapter, ‘Crossroads’, the game gave me one final chance to decide my position on this topic. In the chapter ‘Meet Kamski’, I was met with the Kamski Test. Conceived by Elijah Kamski himself, the test determines whether or not a being is capable of empathy. In Connor's case, the test went as follows: kill Kamski's innocent android Chloe in exchange for information about deviants, or spare Chloe and leave having learnt nothing. It felt like the game was giving me one final chance to answer its philosophical questions before the climactic finale.

I think what made this test feel so final was that in other chapters, where I was presented with dilemmas similar in nature to the Kamski Test, they were diluted by other variables I had to take into account. For example, in the chapter ‘The Eden Club’, I had to choose to either shoot or spare a deviant prostitute android trying to escape after a crime.

In my playthrough, I chose to shoot the Traci, not because it was my mission but because she was rushing me, and having just come out of a very stressful QTE fight scene where she had stabbed Connor several times with a screwdriver, I wasn't taking any chances with his life. My decision wasn't made out of moral agency; it was made out of self-defence.

But here in ‘Meet Kamski’, there were no other variables, no other excuses or alibis. There was only me and the overarching question of Connor's internal struggle: is the mission more important than your morality? Is Chloe just a piece of plastic that can be replaced, or is she a living being with a soul who does not deserve to be sacrificed?

This test was less about Connor and more about me, the player. In the context of the game, Kamski is asking Connor to decide, but since I was the one controlling his choices and actions, Kamski is really asking what I think. In reality, the Kamski Test isn't testing whether or not a machine is capable of empathy; it's testing whether or not I, a human, can empathise with a machine. It's testing whether or not I classify an advanced android such as Chloe, and by extension Connor or Markus or Kara, as alive.

The video game aside, this question is arguably one of the oldest and most heavily debated philosophical subjects in history, and nowadays it has become one that we may soon need answers to. With the rapidly increasing advancement of and focus on developing true AI, there is no doubt that we will see androids as advanced and, well, as human as Connor, Markus and Kara. But as for whether they would be alive? That is, and probably always will be, up for debate.

Personally, there wasn't a single moment where I didn't think of Connor, or any of the other protagonists, as people. Sure, Connor was absolutely hopeless in any social situation that didn't involve reconstructing a crime scene or taking out a bunch of dudes John Wick style, but I saw that as a result of him being socially awkward, not of him being an android.

As I played and reflected on my experience, one thing became very clear. If machines eventually become as integrated into our society as the androids in Detroit are, I think humanity is going to need a serious period of growth in its attitude towards those who are different. Humanity has a terrible track record when it comes to accepting those who are different; just look at where Detroit got its inspiration from.

Androids are segregated and enslaved in much the same way African-American people were in America before the civil rights movement. Androids wear blue armbands and triangles on their clothes as identifiers, similar to how Jews were forced to wear the Star of David and armbands during WWII. On top of that, during the revolution chapters of the game, captured androids are rounded up and detained in camps that resemble Nazi concentration camps just a little too much.

In short, humanity can barely accept different peoples that fall under its own umbrella, so how is it supposed to accept a people that aren't even technically alive? Quantic Dream and Detroit: Become Human put it very simply: it probably can't.

There are so many moments throughout all three stories that honestly made me feel a little bit ashamed to be a human being. As a 15-year-old Kiwi girl who was just playing some video games to kill some time, it shocked me how potent the game's philosophical questions and presentation of humanity were (granted, its symbolism was about as subtle as a brick to the face, but that's perhaps a symptom of the medium rather than the writing, I dunno), and I think the driving force behind this was how the game tried to present AI in a more futuristic and interesting way.

Most stories about AI are about how it could take over the world and destroy us all, but Detroit was different. Its AI wasn't an all-powerful, godlike, inhuman device like Terminator's Skynet or Portal's GLaDOS. It was a new people, a new race, suffering from humanity's mass ignorance towards those who are different.

Detroit: Become Human is a game packed to the rafters with heavy themes, ideas and questions that will continue to linger in your mind long after you have finished it. Sure, some narrative moments can seem a bit ham-fisted, and the larger set pieces are ripped from pivotal race-relations moments in history and used as inspiration for mere entertainment, but it still managed to touch me.

Now, these thoughts of mine only focused on one of the various possible pathways through Connor's storyline, and I didn't really touch on Kara's and Markus's stories, as the places those discussions could go would lead to many, many more paragraphs.

Connor's story not only gave me an enjoyable, narrative-driven experience. It presented me with a philosophical question about the future of machines, the nature of humanity, and what it really means to be alive. And I gotta say, for a video game… that's pretty cool.