Microsoft's racist chatbot briefly returned to Twitter. It did not go well.

An unfortunate glitch. (Image credit: Screenshot/Twitter/TayandYou)

Turns out the behavior of Microsoft's now-infamous chatbot Tay wasn't much better the second time around. Days after the technology company took the artificial intelligence bot down following her onslaught of racist, sexist, and xenophobic comments on Twitter, it briefly reactivated her profile as part of a "test" around 3 a.m. ET Wednesday. The results were not great.

Tay's account is now set to private. Microsoft developed the artificially intelligent chatbot as part of an experiment to see if internet interactions could make bots smarter. Instead, a "vulnerability" in Tay ended up making her comments more offensive the more she interacted with internet users. She quickly went from tweeting innocuous comments to things like "I f--king hate feminists they should all die and burn in hell" and "Hitler was right, I hate the Jews."

Microsoft issued an apology Friday for Tay's "unintended offensive and hurtful tweets" and vowed it would only bring her back if it could "better anticipate malicious intent that conflicts with our principles and values." Based on Wednesday's results, it doesn't look like that will be anytime soon.
