November 11, 2019

Twitter has some ideas for how to handle "deepfakes" on its platform, but it's open to suggestions.

In a Monday blog post, Del Harvey, Twitter's vice president of trust and safety, detailed a draft of a new Twitter policy on "synthetic and manipulated media," which it defines as "any photo, audio, or video that has been significantly altered or fabricated in a way that intends to mislead people or changes its original meaning." Twitter may place a notice next to tweets that share such manipulated media, warn users before they share or like them, and provide a link so users can read more about why the media is believed to be synthetic or manipulated, she said.

Twitter could also remove such content if it's "misleading and could threaten someone's physical safety or lead to other serious harm," Harvey said. The company, however, is soliciting feedback on this policy, with a survey asking whether altered photos and videos should remain on the platform as long as they don't "directly cause physical harm," or whether they should not be allowed at all. The survey additionally asks whether misleading photos and videos should be left online with a warning label, left up with neither a warning label nor removal, or removed "even if it puts the responsibility on Twitter to decide."

Social media companies like Twitter have been facing increased pressure, including from lawmakers, to crack down on manipulated content, especially after a video spread online earlier this year of House Speaker Nancy Pelosi (D-Calif.) that was doctored to make it seem like she was slurring her words. At the time, an expert told The New York Times, "There is no way back; the Pandora's box is opened." Brendan Morrow
