The work on artificial intelligence (AI) and its applications has taken a bizarre turn: a research team has created an AI character that perceives the world like Norman Bates, the psychopathic character from the movie Psycho.
The AI was developed by a team of three researchers at the Massachusetts Institute of Technology (MIT), an institution that has earlier work in the same genre to its credit.
The essence of this research is to demonstrate that AI-based algorithms learn and act on the basis of the information they are fed and the material they are exposed to. The team's tests, including a psychological test, support this theory.
The AI Feed Was Horror and More Horror
Iyad Rahwan, Pinar Yanardag and Manuel Cebrian are the three researchers who worked on the Norman AI project. They named the character "Norman" after the protagonist of Psycho, the famous film directed by Alfred Hitchcock.
They may have had their own premonitions, or simply wanted to drive home the point that an AI reacts the way it is taught during training, not unlike the way humans react when thrown into certain situations.
Their experiment involved two AI characters: one dubbed Norman, the psychopath described above, and another trained with standard machine-learning data.
The Norman AI was repeatedly shown images from the site Reddit.com, along with their captions and nothing else. The Reddit content used in the experiment consisted of posts displaying graphic imagery associated with death, meaning the AI was trained exclusively on disturbing material.
The Test and the Eerie Results
The three-researcher team then put the two AI characters through the Rorschach test, a psychological test designed for humans. Both AI characters were shown images containing randomly shaped inkblots, rather like modern art paintings.
When asked what an image looked like, the standard AI responded that it resembled a wedding cake on a table. The Norman AI, however, saw in the same image a murder: a person being shot with a machine gun.
They felt this vindicated their theory that with machine learning, the outcome is directly linked to the information continually fed to the model. And if someone wants to rectify such behavior, they have to focus on the data being fed to the machine, using positive imagery and avoiding training the AI on negative material.
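The principle that a model's output is shaped entirely by its training data can be illustrated with a minimal sketch. The "models" below are purely hypothetical stand-ins, not the MIT team's captioning system: each just counts word frequencies in its training captions and describes an ambiguous input with its most common words.

```python
from collections import Counter

def train(captions):
    """'Train' a toy caption model by counting word frequencies."""
    model = Counter()
    for caption in captions:
        model.update(caption.lower().split())
    return model

def describe(model, k=2):
    """Describe an ambiguous image with the model's k most common words."""
    return [word for word, _ in model.most_common(k)]

# Hypothetical training sets: neutral captions vs. dark captions.
standard = train(["bird on branch", "cake on table", "bird in sky"])
norman = train(["man shot dead", "man pulled into machine", "man shot in street"])

print(describe(standard))  # → ['bird', 'on']
print(describe(norman))    # → ['man', 'shot']
```

Identical code, different data, opposite descriptions: the bias lives in the training set, not the algorithm.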
Not the First Such Development by MIT
One perspective on the work being done by the MIT team is that they are alerting society at large that such situations can arise, exposing the darker side of the technology.
MIT has conducted such experiments in the past too. Readers may recall the Nightmare Machine project, in which MIT deployed AI technology to elicit fear and other extreme emotions.
If that was not enough, what followed was even more potent: an AI-driven horror writer named Shelley, which churned out horror stories. With human help, the AI character penned over 200 stories, each designed to be scary.
The logic behind MIT encouraging such projects is to explore the darker side of technology instead of accepting its merits without question.