
Microsoft takes bot offline after abuse

By Jing Cao, Bloomberg
Published: March 24, 2016, 4:03pm

Microsoft is in damage control mode after Twitter users exploited its new artificial intelligence chat bot, teaching it to spew racist, sexist and offensive remarks.

The company introduced the bot earlier this week to chat with real humans on Twitter and other messaging platforms. The bot learns by parroting comments and then generating its own answers and statements based on all of its interactions.
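Microsoft has not published Tay's inner workings, but a toy sketch in Python conveys what "learning by parroting" looks like in practice: the bot stores everything users say and remixes it into replies. All names below are illustrative, not Tay's actual code.

    import random

    class ParrotBot:
        """Toy chatbot that 'learns' by storing user phrases and remixing them."""

        def __init__(self):
            self.memory = []  # every phrase users have sent so far

        def reply(self, message: str) -> str:
            # Parroting step: remember the raw user input verbatim.
            self.memory.append(message)
            # Generation step: remix stored phrases into a reply.
            sample = random.sample(self.memory, k=min(2, len(self.memory)))
            return " ".join(sample)

    bot = ParrotBot()
    print(bot.reply("hello there"))
    # Anything abusive a user sends is stored and can come back out later.
    print(bot.reply("repeat something offensive"))

A bot built this way has no notion of which inputs are acceptable, which is exactly the property Twitter users exploited.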

The bot, named Tay, was supposed to emulate the casual speech of a stereotypical millennial. The Internet took advantage and quickly tried to see how far it could be pushed.

The worst tweets are quickly disappearing from Twitter, and Tay itself has now gone offline “to absorb it all.” Some Twitter users appeared to believe that Microsoft had also manually banned people from interacting with the bot. Others asked why the company didn’t build filters to prevent Tay from discussing certain topics, such as the Holocaust.
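The kind of filter those users have in mind can be as simple as a blocklist check run before the bot replies. Production systems are far more elaborate; the terms and function below are purely illustrative, not anything Microsoft has described.

    BLOCKED_TOPICS = {"holocaust", "genocide"}  # illustrative blocklist only

    def is_safe(message: str) -> bool:
        """Crude topic filter: refuse to engage if any blocked term appears."""
        lowered = message.lower()
        return not any(term in lowered for term in BLOCKED_TOPICS)

    print(is_safe("tell me about cats"))   # True
    print(is_safe("deny the holocaust"))   # False: the bot should decline

Even this trivial check hints at why filtering is hard: exact keyword matching is easy to evade with misspellings or paraphrase, while over-broad lists block legitimate conversation.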

“The AI chatbot Tay is a machine learning project, designed for human engagement,” Microsoft said in a statement. “It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”

The bot was targeted at 18- to 24-year-olds in the U.S. and meant to entertain and engage people through casual and playful conversation, according to Microsoft’s website. Tay was built with public data and content from improvisational comedians.

It’s supposed to improve with more interactions, so it should be able to better understand context and nuances over time. The bot’s developers at Microsoft also collect the nickname, gender, favorite food, zip code and relationship status of anyone who chats with Tay.
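The profile details the article lists map naturally onto a small record type. The field names below are a plausible sketch of that shape, not Microsoft's actual schema.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class UserProfile:
        # Per-user details the article says Tay's developers collect;
        # field names are illustrative, not Microsoft's schema.
        nickname: Optional[str] = None
        gender: Optional[str] = None
        favorite_food: Optional[str] = None
        zip_code: Optional[str] = None
        relationship_status: Optional[str] = None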

In less than a day, Twitter’s denizens realized Tay didn’t really know what it was talking about and that it was easy to get the bot to make inappropriate comments on any taboo subject. People got Tay to deny the Holocaust, call for genocide and lynching, equate feminism to cancer and stump for Adolf Hitler.
