A group of MIT researchers has created a series of experiments to examine and illustrate the possibilities of artificial intelligence and machine learning. Their earlier experiments explored AI's ability to induce emotions: the "Nightmare Machine", followed by "Shelley", which was designed to write horror stories, and "Deep Empathy". Their newest creation, "Norman", named after the lead character of Hitchcock's suspense thriller "Psycho", is what they like to call "the world's first AI psychopath".
After being exposed exclusively to a subreddit allegedly "dedicated to document and observe the disturbing reality of death", Norman, like many other AI systems before it, was given the classic Rorschach inkblot personality test. Where a standard AI described one image as "a black and white photo of a baseball glove", Norman offered a different and, to say the least, unconventional reading: "man is murdered by machine gun in broad daylight". Norman's interpretations took a disturbing turn, with every inkblot becoming a depiction of a violent death.
The aim of this experiment was to illustrate the repercussions and potential dangers of feeding AI the wrong data. While most people tend to blame faulty algorithms for what they regard as inappropriate responses from AI, Norman is proof that an AI's responses have more to do with the content it is exposed to. Artificial intelligence is designed to function appropriately in a particular setting, but machine learning means it feeds on whatever information it is given and emulates that learned behavior in order to thrive in its environment.
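The point is easy to demonstrate in miniature. The toy sketch below (hypothetical, and far simpler than the MIT experiment) runs the same trivial "learning" procedure over two different caption datasets; the algorithm is identical in both cases, yet its "interpretations" of an ambiguous input are shaped entirely by the data it was fed:

```python
# A deliberately simple "model": it only memorizes word frequencies,
# then "describes" any ambiguous input using its most common words.
from collections import Counter

def train(captions):
    model = Counter()
    for caption in captions:
        model.update(caption.lower().split())
    return model

def describe(model, top_n=3):
    return [word for word, _ in model.most_common(top_n)]

# The same algorithm, trained on two different datasets.
neutral = train(["a bird flying over a field", "a bird on a branch",
                 "a field of flowers"])
dark    = train(["a man shot in a dark alley", "a man killed by a shot",
                 "a dark figure in an alley"])

print(describe(neutral))  # vocabulary dominated by birds and fields
print(describe(dark))     # vocabulary dominated by violence
```

Nothing in the code is "faulty"; the divergence comes entirely from the training data, which is exactly the lesson Norman was built to teach.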
The creation of Norman sets a precedent for the overwhelming number of possibilities that arise with machine learning and artificial intelligence. Because, when you think about it, the diary functions on the same basic premise that artificial intelligence is built on: after being exposed to first-hand information about the life and lies of a power-driven psychopath, it eventually uses its intelligence to emulate his sociopathic tendencies through learned behavior.
With the growing popularity of chatbots, it won't be long before the internet is plagued with virtual identities that mimic the behavior of the people who create them. How would a user differentiate between a human and a machine emulating a human? "With great power comes great responsibility", and if history is any indicator, humans don't do well with power. What happens when artificial intelligence is inevitably used for all the wrong reasons?
On a more optimistic note, however, the world (even the digital one) has managed to maintain a balance of good and evil so far: for every virus, an antivirus. Virtual villains may begin to spawn across the digital world, but they'll have their superhero counterparts to contend with.