Twitter announced today a new policy that it claims will offer more transparency around which hateful tweets on its platform have been subject to enforcement action. Typically, when tweets violate Twitter’s policies, one of the actions the company can take is to limit those tweets’ reach — something it calls “visibility filtering.” In these scenarios, the tweets remain online but become less discoverable, as they’re excluded from areas like search results, trends, recommended notifications, the For You and Following timelines, and more.
Instead, if users want to see the tweet, they have to visit the author’s profile directly.
The tweet may also be downranked in replies when such enforcement takes place, and ads won’t run against the content, Twitter’s guidelines state.
Historically, the wider public would not necessarily know if a tweet had been moderated in this way. Now, Twitter says that will change.
The company plans to “soon” begin adding visible labels to tweets that have been identified as potentially violating its policies and have had their visibility limited as a result. It did not say exactly when the system would be fully rolled out across its network.
In addition, not all tweets that have had their visibility reduced will be labeled, the company noted.
It’s starting only with tweets that violate its Hateful Conduct policy and says it will expand the feature to other policy areas in the “coming months.”
“This change is designed to result in enforcement actions that are more proportional and transparent for everyone on our platform,” a blog post authored by “Twitter Safety” stated. The post additionally touted Twitter’s enforcement philosophy, calling it “Freedom of Speech, not Freedom of Reach.”
If a tweet is labeled, the user themselves won’t be shadowbanned or removed from the network — the company notes the policy actions will occur at “a tweet level only and will not affect a user’s account.”
Twitter also explains that users whose tweets were labeled will be able to submit feedback if they think their tweet was incorrectly flagged, but says they may not receive a response to that feedback, nor does it guarantee the tweet’s reach will be restored.
This is likely due to the deep cuts Twitter has made to its Trust & Safety teams and to the company as a whole. Twitter may also be relying heavily on automation to make its labeling decisions, though it’s unclear to what extent the system will be automated. (Twitter no longer replies to press inquiries, so blog posts and tweets from the company or its new owner, Elon Musk, are the only official word on matters like this.) Automation, of course, could mean Twitter gets things wrong — something it admits in a Twitter thread about the changes. There, the company also says it plans to allow authors to appeal its decisions at some point “in the future.”
Again, no hard deadline or ballpark timeframe was provided.
The launch of the new policy follows Twitter’s earlier decisions under Musk to allow controversial figures, including Trump and neo-Nazis, to rejoin the network. In one incident, Musk brought back the artist formerly known as Kanye West, who then tweeted a swastika and was resuspended.
The new policy announced today may reflect Twitter’s attempt to balance two opposing forces. On the one hand, Musk is a free-speech proponent who railed against Twitter’s allegedly less-than-transparent moderation policies in the years before he took control of the company. He even went so far as to publicly share internal documents and communications, aka the Twitter Files, in an attempt to expose how Twitter’s past moderation decisions had been made.
The results weren’t as astounding as he had hoped. What the files largely revealed was a company making complex and nuanced decisions, often in real time, about borderline content and high-profile figures.
Visibility filtering was, in fact, one of the topics the Twitter Files covered. Musk had hoped to show that Twitter’s past filtering of tweets was politically biased, but the report didn’t include any information about how many accounts or tweets were de-amplified, or about the politics of those affected, so no conclusions could be drawn.
On the other side of things, Twitter’s advertisers have been fleeing the network since Musk’s takeover, and its brand safety measures haven’t been able to restore their trust. The company may hope that labeling tweets that have been downranked will help marketers feel more comfortable that their ads aren’t running directly alongside hate speech. But advertisers have plenty of other reasons to be concerned about Twitter.
Since Musk’s acquisition, the network has been chaotic, with constantly changing policies and features, including a now pay-for-reach version of Twitter Blue and, over the past few days, changes to how news outlets are labeled, leading generally reliable newsrooms like PBS, NPR, CBC, and others to leave the platform entirely.
Twitter to label tweets that get downranked for violating its hate speech policy by Sarah Perez originally published on TechCrunch