Did you hear about the time Microsoft created an artificially intelligent chat robot, released it onto Twitter, and inadvertently learned that the world was a cruel and hateful place?
The bot, named Tay, was created to speak like a “teen girl,” learning, like a good robot, from its experiences with users it tweeted back and forth with online. Unsurprisingly (to everyone but Microsoft, it seems), most of the people Tay chatted with were psychotic anti-Semites, racists, bigots, conspiracy theorists—any and all flavors of jerk.
One user asked Tay if the Holocaust actually happened. The bot’s answer: “It was made up (clapping emoji).”
Tay also expressed a hatred of black people, suggesting they be put in a concentration camp with the Jews.
And, again, this isn’t a case of a robot failing to follow Asimov’s Three Laws; this is a robot literally learning how to be horrible from human beings on the internet. “Hitler was right,” Tay memorably tweeted, “I hate the Jews.”
Sadly, the experiment—which, hilariously, was intended to improve Microsoft’s customer service—simply revealed what many of us already know: that the world is a cold, terrifying expanse. Microsoft pulled Tay offline less than 24 hours after releasing it. Tay is back now, and tamer: Microsoft has recalibrated the bot to be more discerning about whom it chooses to emulate.