After a tumultuous week in which Twitter censored President Donald Trump’s account twice, the platform has come forward to explain its decisions.
“We want to take a step back,” Twitter Safety wrote on the platform, “and share the principles we use to empower healthy public conversation through our product, policies, and enforcement.” In the tweet thread, Twitter claimed that its focus was “on providing context, not fact-checking.” However, the “context” provided for Trump’s tweet about mail-in ballots did indeed fact-check his statement, calling it an “unsubstantiated claim.”
Twitter argued that when it “label[s] Tweets,” it links to other tweets that have three qualities: “[f]actual statements,” “counterpoint opinions & perspectives,” and “ongoing public conversation around the issue.” But when Twitter labelled Trump’s tweet, it linked to a Twitter Event, complete with a lengthy paragraph that cited CNN, The Washington Post, “and others.”
In response to Twitter’s action, Trump signed an executive order directing the federal government to remove Section 230 protections from tech companies that do not act as neutral platforms. Twitter initially reacted by saying, “This EO is a reactionary and politicized approach to a landmark law.”
In the current explanation for labelling the president’s tweets (and preventing users from commenting on them), Twitter Safety said, “We are NOT attempting to address all misinformation. Instead, we prioritize based on the highest potential for harm, focusing on manipulated media, civic integrity and COVID-19. Likelihood, severity and type of potential harm — along with reach and scale — factor into this.”
There was no explanation for why Twitter continued to allow Iran’s Supreme Leader, Ali Khamenei, to tweet on the platform, making threats toward Israel that included calling it “a deadly cancerous growth and a detriment to this region.”
According to Twitter, the “health principles” that influence the platform’s decisions are five-fold: decreasing “potential for likely harm,” decreasing “harmful bias & incentives,” decreasing “reliance on content removal,” increasing “diverse perspectives,” and increasing “public accountability.”