Twitter Says It May Warn Users About Deepfakes--But Won't Remove Them
upstart writes in with a submission, via IRC, for SoyCow1337.
Twitter says it may warn users about deepfakes--but won't remove them:
The news: Twitter has drafted a deepfake policy that would warn users about synthetic or manipulated media, but not remove it. Specifically, it says it would place a notice next to tweets that contain deepfakes, warn people before they share or like tweets that include deepfakes, or add a link to a news story or Twitter Moment explaining that it isn't real. Twitter has said it may remove deepfakes that could threaten someone's physical safety or lead to serious harm. People have until November 27 to give Twitter feedback on the proposals.
[...] A real threat?: The most notorious political deepfakes so far either have not been deepfakes at all (see the Nancy Pelosi video released in May) or have been created by people warning about deepfakes, rather than by bad actors.
[...] The real problem: There is no denying that deepfakes pose a significant new threat. But so far, they're mostly a threat to women, particularly famous actors and musicians. A recent report found that 96% of deepfakes are porn, virtually always created without the consent of the person depicted. Such videos already violate Twitter's existing rules and would be removed.
Read more of this story at SoylentNews.