Bonus from HRForecast: three stories about failed chatbots.
#1 The bot that upset Taylor Swift
In 2016, Microsoft released Tay, a chatbot with the persona of a teenage girl, introduced as an AI with "zero chill". The bot learned from the users who wrote to it, and it picked up some of the worst possible role models: after talking with certain users, Tay began sending inflammatory and offensive messages supporting Nazism, genocide, and racism. Tay's life was short. Just 16 hours after launch, the company shut the project down. Taylor Swift's lawyers even threatened to sue Microsoft over the misbehaving chatbot named after her. So an attempt to personalize a bot can have legal consequences, too.
#2 The bot that became sexist
Amazon decided to use an algorithm for recruiting. In 2014, its team built software to review job applicants' resumes, aiming to automate the search for top talent. But a year later, the company realized that when selecting candidates for technical positions, the system was filtering out women. The problem was that Amazon's models had been trained on resumes submitted to the company over the previous 10 years. Since most of those resumes came from men, the algorithm learned to prefer male candidates. Amazon later discovered a number of other flaws in the system and shut down the project.
#3 The bots that flirted and made money
Ashley Madison is a Canadian dating site for people who are married or in committed relationships. It was founded in 2002 with a simple message: "Life is short. Have an affair." In July 2015, a group of hackers leaked the personal information of 37 million users, including their real names, home addresses, search histories, and credit card transaction records. The hack surfaced an interesting detail. Ashley Madison had claimed that men and women were equally represented on the platform, supposedly disproving the stereotype that men are more prone to cheating. In reality, the leaked data showed that a large share of the "female" profiles were chatbots, created to flirt with male users and nudge them into paying for premium features.