Rep. Stansbury asked what Twitter has done and is doing to combat hate speech on its platform. Navaroli correctly declined to address current policies since she has not been at the company for some time. However, she then explained that rather than simply balancing free speech against safety, they sought a different approach: “Instead of asking just free speech versus safety, to say free speech for whom and public safety for whom. So whose free expression are we protecting at the expense of whose safety, and whose safety are we willing to allow to go to the winds so that people can speak freely.”
The Twitter Files have revealed or confirmed three important truths about social media and the deep state. First, the entire concept of “content moderation” is a euphemism for censorship by social media companies that falsely claim to be neutral and unbiased. To the extent that they exercise a virtual monopoly on public discourse in the digital era, we should stop thinking of them as private companies that can “do whatever they want,” as libertarians are fond of saying. The companies’ content moderation policies are at best a flimsy justification for banning or blocking whatever their executives do not like. At worst, they provide cover for a policy of pervasive government censorship.
Nathan Jacobson, published at Mind Matters (January 1, 2023).
The term censorship conjures up images of piles of burning books or dissidents locked away in the remotest Siberia. We can take heart that minority voices are not in chains in the United States. Nevertheless, we must not kid ourselves. We live under a state of highly sophisticated and ubiquitous suppression of disfavored voices. The gatekeepers like the Trust Project and Google are making judgments about who is and is not trustworthy with good intentions and in the name of noble ideals.
Now that grandparents routinely use services like Facebook to connect with their kids and grandkids, they are potentially exposed to the Internet’s panoply of jerks, racists, creeps, criminals, and bullies. They won’t continue to log on if they find their family photos sandwiched between a gruesome Russian highway accident and a hardcore porn video. … So companies like Facebook and Twitter rely on an army of workers employed to soak up the worst of humanity in order to protect the rest of us. And there are legions of them — a vast, invisible pool of human labor. Hemanshu Nigam, the former chief security officer of MySpace who now runs online safety consultancy SSP Blue, estimates that the number of content moderators scrubbing the world’s social media sites, mobile apps, and cloud storage services runs to “well over 100,000” — that is, about twice the total head count of Google and nearly 14 times that of Facebook.