In May 2018, Twitter announced an algorithm tweak that effectively shadow bans anyone its “machine” determines to be a “troll,” using “behavioral signals” to decide whether a user is distorting or detracting from public conversation. On May 15, 2018, Twitter’s vice president of trust and safety, Del Harvey, and director of product management, David Gasca, laid out this “New Approach” on the Twitter blog, framing the new algorithm as aimed at addressing the problem of “trolls.”
Today, we use policies, human review processes, and machine learning to help us determine how Tweets are organized and presented in communal places like conversations and search. Now, we’re tackling issues of behaviors that distort and detract from the public conversation in those areas by integrating new behavioral signals into how Tweets are presented. By using new tools to address this conduct from a behavioral perspective, we’re able to improve the health of the conversation, and everyone’s experience on Twitter, without waiting for people who use Twitter to report potential issues to us.
There are many new signals we’re taking in, most of which are not visible externally. Just a few examples include if an account has not confirmed their email address, if the same person signs up for multiple accounts simultaneously, accounts that repeatedly Tweet and mention accounts that don’t follow them, or behavior that might indicate a coordinated attack. We’re also looking at how accounts are connected to those that violate our rules and how they interact with each other.
These signals will now be considered in how we organize and present content in communal areas like conversation and search. Because this content doesn’t violate our policies, it will remain on Twitter, and will be available if you click on “Show more replies” or choose to see everything in your search setting. The result is that people contributing to the healthy conversation will be more visible in conversations and search.
In other words, Twitter went from denying that it shadow bans people to announcing that it not only does so, but is tweaking its algorithms to kick the practice into high gear. Twitter has made itself the arbiter of what is “healthy” in a conversation and of who is and who isn’t a “troll,” censoring and hiding a user’s content if he “interacts” with someone its “machine” has already determined to be someone that behaves badly.
First off, there is a huge difference between a “troll” and someone who simply disagrees and wants to debate a topic. A troll does nothing but attack others, is incapable of being civil, never addresses the topic being discussed, and consistently attempts to sow discord among discussion participants. What Twitter is doing is setting up its platform to allow only conversations it agrees with, while hiding anything it doesn’t like by labeling it “unhealthy.”
Twitter CEO Jack Dorsey took to the platform to claim “Our ultimate goal is to encourage more free and open conversation.” He doesn’t even appear to understand the irony of claiming they want more free and open conversation while defending the practice of censoring that “free and open conversation.” Dorsey also claims they are doing this to “reduce the ability to game and skew our systems.”
The reaction to his statement on Twitter shows exactly what people think of his attempts to defend outright censorship, with one user responding to him, “Better to encourage hearing dissenting voices, which is supposed to encourage individual free thinking instead of ‘group think.’”
Another user highlights how conservatives are already targeted by Twitter, stating “It only works if it is applied fairly but any program is only as good as those who do the programming. So far, 2 months in, your system is very biased toward conservatives, GOP and Trump supporters. You won’t even remove a page that shows child-porn.”
That is the problem right there in a nutshell: Project Veritas has already published multiple undercover videos in which Twitter employees admitted to “teaching” their machines to label users who talk about God, guns, and America as “bots.”
In fact, Dorsey admitted on tape that Twitter is so liberal that its conservative employees ‘Don’t Feel Safe’!
Sometimes Twitter takes punitive action without notifying anyone, including the subject. The practice of “shadowbanning” involves quietly hiding tweets in certain regions, with some effort put into preventing the target from realizing his posts have been deleted or made invisible to many other users.
In a similar vein, Twitter has been known to use automated filters that will render tweets containing “abusive” language invisible. The user is given no indication that anything is wrong – it looks like he’s successfully tweeted a message, but no one else ever sees it. After the Paris terror attack, media organizations noticed that automated filters were blocking images and keywords deemed “sensitive,” including gruesome photos and keywords thought to be used by ISIS supporters.
Such tools are supposed to cut down on harassment and abusive tweets, with the understanding that review by human administrators is impossible – Twitter processes around 500 million messages per day. However, the potential for tweaking these automatic filters to expand the definition of “abusive language” and turn them into tools of ideological oppression is clear.
Free-speech advocates have worried that Twitter’s decision to work with certain activist groups to crack down on “harassment” can give those groups an unhealthy degree of influence over the platform, as they file a high volume of dubious harassment charges to silence people they don’t like. The results can be disturbingly similar to the “safe space” and crybully censorship sweeping college campuses, in which the definition of harassment is slowly expanded to include much more than vile slurs, clear-cut attempts at intimidation, and violent threats. Complaining about bullies turns out to be a very effective means of bullying people.
Some Twitter censorship campaigns have been conducted by bypassing human administrators and deliberately abusing automated response systems. A few years ago, there was a rash of incidents known as “spam-flagging,” in which organized mobs of Twitter users knocked targeted individuals off the service by marking a large number of their messages as “spam,” or unsolicited advertising. Twitter had introduced a “Block and Report Spam” feature to crack down on bot programs that were spewing ad messages into users’ timelines. It didn’t take long for activist mobs – usually left-wing mobs targeting conservatives – to realize they could use this reporting feature to lock the accounts of their enemies.
Twitter, along with many other popular social media platforms, has been criticized for being too willing to cooperate with government demands for censorship.
The French government’s desire to suppress certain material after the Paris terror attack, as mentioned above, is one example, but more authoritarian governments have even more aggressive censorship demands. One controversial example was Twitter agreeing to Pakistan’s demands to suppress “blasphemous” content in summer 2014.
That’s a very heavy censorship hand for a company whose CEO declared, just five years ago, “We’re the free speech wing of the free speech party.” It’s possible to have robust free speech while policing the most obviously abusive, obscene, or threatening language, but Twitter is looking more and more like a campus “safe space,” with the attendant abuses… and even less willingness on the part of administrators to justify their actions.
Twitter’s censorship of conservatives and alternative media has included suppressing pro-life speech while allowing – at times almost advocating – violence against women. Alternative media icon Alex Jones was permanently banned from the Twitter platform in September 2018.
Twitter is also greatly interfering with the 2020 election process. In May 2020, Twitter added “fact check” labels to tweets from President Trump warning voters about the potential for fraud in mail-in ballots. The decision came just days after a USPS mail carrier in West Virginia was charged with fraud for tampering with vote-by-mail requests.
During the Minneapolis riots, Twitter again censored one of the President’s tweets, falsely accusing him of “glorifying violence” over a tweet that linked rioting to gun violence. The month after, Twitter put a “public interest notice” on one of Trump’s tweets warning that violent rioters outside the White House would be met with force, accusing him of “threatening harm against an identifiable group.”
In July, Twitter took down a meme retweeted by the President in response to a copyright claim by the New York Times, despite the fact that the image had been substantially edited. In August 2020, Twitter took down a Fox & Friends interview with the President, accusing Trump of spreading “misinformation” about COVID-19. Also in August, Twitter hid another of the President’s tweets as “misinformation” after he tweeted that mail drop boxes are a “voter security disaster” that allow people to vote multiple times. The President also warned that the mailboxes are not sanitized to prevent the spread of the Chinese virus.
Jim Hanson, former member of the U.S. Army Special Forces, discusses why he believes that Twitter is biased against Republicans and conservatives. He says,
They implemented what they call a quality filter that allows a liberal mob to attack conservative accounts using some of the tools Twitter built them – including these mass block lists where liberals have gathered hundreds of thousands of conservative accounts, and Twitter built them a tool where, with one push of a button, they can block those accounts. And Twitter counts that as a black mark against the account. They’re using this as a kind of heckler’s veto, and it’s affecting a lot of conservatives. It’s affecting Republican politicians… Their accounts are not being seen as much as their Democrat opponents’.
Terms of Service Explicitly Allow Pedophiles to Discuss ‘Attraction Towards Minors’ on Their Platform
Social media giant Twitter quietly amended its terms of service in November 2019 to permit “discussions related to… attraction towards minors” on its platform.
“Discussions related to child sexual exploitation as a phenomenon or attraction towards minors are permitted, provided they don’t promote or glorify child sexual exploitation in any way,” reads Twitter’s terms of service.
Twitter also noted that they would allow for nude depictions of children on their platform in certain instances.
Twitter’s New Fact-Checking Tool:
Chronological History of Events Related to Twitter