In a bombshell new article titled “Twitter Artificial Intelligence,” author Kris Ruby discusses the implications of using artificial intelligence (AI) to manage and censor content on Twitter. The article sheds light on the challenges and potential biases resulting from the platform’s reliance on AI-driven algorithms for content moderation.
Ruby explains that Twitter, like many social media platforms, uses AI algorithms to filter, prioritize, and censor content due to the sheer volume of user-generated posts. The biases in AI-driven censorship may stem from the development process, including biased training data and human influence during the AI training phase.
“Internal Twitter documents I obtained provide a window into the scope of the platform’s reliance on AI in the realm of political content moderation, meaning what phrases or entities were deemed misinformation and were later flagged for manual review. The documents show that between Sept 13, 2021-Nov 12, 2022, when political unrest was taking place, Twitter was flagging a host of phrases including the American flag emoji, MyPillow, Trump, and election fraud,” says Ruby.
She emphasizes that although AI plays a crucial role in content moderation, it is essential to strike a balance between AI-driven censorship and human oversight to maintain the democratic values of free speech and open discourse. To address these concerns, the article suggests that social media platforms like Twitter should invest in refining their AI systems, increasing transparency, and incorporating diverse human oversight to counterbalance the limitations of AI technology.
In the article below, we’ll use Ruby’s insight (along with what we’ve learned from the Twitter Files and our recently published concerns about AI) to take a long, hard look at the impact of AI in content moderation, the potential for corruption in algorithmic censorship, and the significance of machine learning for free speech and democracy. We’ll also examine solutions to AI-driven censorship, along with potential paths forward that balance the moderation dilemma against our fundamental right to free speech and the vital need for open discourse.
The Inception of AI-driven Censorship on Twitter
As social media platforms like Twitter become increasingly influential in shaping public discourse, the role of AI in content moderation has become a hot-button issue. According to the article by Ruby Media Group, Twitter’s AI algorithms have been designed to identify and censor content that goes against their platform’s rules and regulations. While the objective of these algorithms is to maintain a safe and respectful environment, they have also given rise to concerns about censorship bias.
At the core of this issue is the outsourced AI learning process, which relies on human trainers to teach the algorithms how to recognize and flag content. The trainers use a set of guidelines provided by Twitter, but the company itself admits that these guidelines are “subjective.” As a result, the AI may inadvertently learn biases from the trainers, leading to a biased censorship landscape online.
This situation becomes even more complex when considering the global nature of social media platforms. Trainers from different countries and cultures may have their own biases, further entrenching the problem. As we continue to explore the ramifications of AI-driven censorship on Twitter, it is crucial to examine the root causes of this bias and seek ways to create a more inclusive online environment for all users.
Unraveling the Bias in AI Learning Processes
The biases in AI-driven censorship on platforms like Twitter can be traced back to the learning processes of the AI algorithms themselves. According to the article, these algorithms rely on human trainers to teach them how to recognize and flag content that goes against the platform’s rules and regulations. However, the inherent subjectivity of these guidelines means that trainers may inadvertently pass on their own biases to the AI systems.
A key aspect of the AI learning process is the use of training data, which consists of a large number of examples that the AI system can learn from. In the case of Twitter’s content moderation algorithms, this data includes examples of tweets that have been flagged for violating the platform’s rules. The human trainers evaluate these examples and decide whether or not they should be censored, based on the guidelines provided by Twitter.
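To make those mechanics concrete, here is a minimal, purely illustrative sketch of the kind of supervised pipeline described above: human reviewers label example tweets, a model learns from those labels, and new posts are scored for manual review. The tweets, labels, and review threshold below are invented for illustration and do not represent Twitter’s actual data or moderation system.

```python
# Illustrative only: a toy moderation classifier trained on human-labeled tweets.
# All data is hypothetical; this is not Twitter's pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical reviewer decisions: 1 = "violates the rules", 0 = "allowed".
tweets = [
    "Lovely weather in the park today",
    "Click this link for a guaranteed miracle cure!!!",
    "Here are the official results from last night's vote",
    "Spam spam spam buy now buy now",
]
labels = [0, 1, 0, 1]

# The model learns whatever patterns (and biases) exist in those labels.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tweets, labels)

# New posts are scored; anything above a review threshold gets flagged.
new_posts = ["Were last night's results accurate?", "Buy this miracle cure now"]
scores = model.predict_proba(new_posts)[:, 1]
for post, score in zip(new_posts, scores):
    status = "FLAG" if score > 0.5 else "ok"
    print(f"{score:.2f} {status:4} {post}")
```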
Unfortunately, the guidelines themselves can be vague or open to interpretation, which leaves room for trainers’ personal biases to influence their decisions. As the Ruby Media Group article highlights, the outsourcing of AI training can exacerbate this issue, since trainers from different countries and cultures may have their own biases. For example, trainers might have different opinions on what constitutes offensive language or hate speech, depending on their cultural backgrounds, personal experiences, or political affiliations.
Furthermore, since the AI system learns from these decisions, it may end up adopting the same biases, leading to skewed censorship on the platform. As a result, certain types of content or viewpoints might be disproportionately censored, while others remain unaddressed. This poses a significant challenge for social media platforms that aim to promote healthy discourse and uphold free speech, as they must find a way to balance these ideals with the need to maintain a safe and respectful online environment.
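One way to see how directly labeling decisions flow into the model is to train the same toy classifier on the same tweets, but with labels from two hypothetical reviewer pools, and compare how each scores an identical new post. Again, every tweet and label here is invented for illustration.

```python
# Illustrative only: identical texts, two hypothetical label sets, two different models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "Protest planned downtown this weekend",
    "Our candidate will expose the election fraud",
    "Trying a new sourdough recipe tonight",
    "The government is hiding the real numbers",
]
labels_pool_a = [1, 1, 0, 1]  # pool A flags most political claims
labels_pool_b = [0, 0, 0, 1]  # pool B flags only the last post

new_post = ["They are covering up the real election numbers"]
for name, labels in (("pool A", labels_pool_a), ("pool B", labels_pool_b)):
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(tweets, labels)
    score = model.predict_proba(new_post)[0, 1]
    print(f"model trained on {name}: flag probability {score:.2f}")
```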
The Perils of Outsourced AI Training Data
Outsourcing AI training data is a common practice for tech companies like Twitter, as it allows them to access a vast pool of resources and expertise. However, as Ruby’s article points out, this approach also comes with its own set of challenges, particularly when it comes to ensuring the neutrality and fairness of the AI algorithms.
One major concern is the potential for cultural biases to be introduced by human trainers from different countries and backgrounds. As they evaluate content according to the platform’s guidelines, they may apply their own cultural norms and values, which could lead to inconsistent or biased censorship. For example, a trainer from a country with strict laws against blasphemy might be more likely to flag content that critiques religion, while a trainer from a more secular country might not consider the same content to be offensive. As AI systems learn from these decisions, they may end up adopting these cultural biases, resulting in censorship that disproportionately targets certain groups or perspectives.
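A platform that wanted to detect this kind of reviewer-level inconsistency could start by measuring inter-annotator agreement. The snippet below is a minimal sketch using Cohen’s kappa on invented decisions from two hypothetical reviewer pools; low agreement would signal that the same guidelines are being read very differently.

```python
# Illustrative only: measuring agreement between two hypothetical reviewer pools.
from sklearn.metrics import cohen_kappa_score

# Decisions on the same ten tweets (1 = flag, 0 = allow), invented for illustration.
pool_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
pool_b = [0, 0, 1, 0, 0, 0, 1, 0, 0, 1]

kappa = cohen_kappa_score(pool_a, pool_b)
print(f"Cohen's kappa between pools: {kappa:.2f}")
# Values near 1.0 mean consistent decisions; values near 0 (or below) mean the
# pools are effectively applying different rules to the same content.
```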
Another issue highlighted by the Ruby Media Group article is the potential for political biases to influence AI training. In some cases, human trainers may have their own political affiliations or agendas, which could impact their decisions when evaluating content. As a result, AI algorithms might be more likely to censor content that goes against the trainers’ political beliefs, leading to an unbalanced representation of ideas on the platform.
Ultimately, the outsourced AI training data process can inadvertently perpetuate biases and further entrench the problems associated with censorship. To address these issues, social media platforms must implement more rigorous oversight and quality control measures, as well as strive for greater transparency in their AI learning processes. By acknowledging and addressing the potential biases in AI-driven censorship, platforms like Twitter can work towards creating a more inclusive and diverse online environment.
The Real-world Consequences of Biased Censorship Algorithms
We’ve shown how biases in AI-driven censorship on platforms like Twitter can have far-reaching implications. By inadvertently perpetuating cultural, political, and personal biases, these algorithms can skew the online landscape and stifle healthy discourse. In this part, we will delve into the real-world consequences of biased censorship algorithms.
1 | Suppression of marginalized voices: The biases present in AI-driven censorship can disproportionately affect marginalized groups and minority perspectives. By censoring content that falls outside of the mainstream or challenges the status quo, these algorithms can contribute to the silencing of important discussions about social issues, inequality, and injustice.
2 | Erosion of trust in social media platforms: As users become aware of the biases present in AI-driven censorship, they may lose trust in the platform’s ability to maintain a fair and open environment. This could lead to decreased engagement, loss of users, and a decline in the platform’s overall credibility as a source of information and discourse.
3 | Strengthening of echo chambers: Biased censorship algorithms can contribute to the formation of online echo chambers, where users are only exposed to content that aligns with their existing beliefs and perspectives. By reinforcing these echo chambers, these algorithms can make it more difficult for users to encounter diverse viewpoints and engage in constructive dialogue.
4 | Manipulation of public opinion: The biases present in AI-driven censorship can be exploited by bad actors seeking to manipulate public opinion. By understanding how the algorithms function and what content is more likely to be censored, these individuals or groups can craft messages that avoid detection and spread disinformation or propaganda.
5 | Silencing political dissent: Biased censorship algorithms can also be used to suppress political dissent and maintain the status quo. When algorithms disproportionately censor content critical of certain political ideologies or governments, they inadvertently contribute to the suppression of political opposition, stifling democratic values and the free exchange of ideas. This can result in an online environment that fails to reflect the full spectrum of political opinions and prevents users from engaging in robust, open debate.
To mitigate the real-world consequences of biased censorship algorithms, social media platforms must commit to greater transparency and accountability in their AI learning processes. This includes implementing rigorous oversight and quality control measures, as well as engaging with diverse stakeholders to ensure that their algorithms are as unbiased and fair as possible. By addressing these concerns, platforms like Twitter can foster a more inclusive and balanced online environment that supports the free exchange of ideas and the democratic values they seek to uphold.
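As one concrete (and deliberately simplified) illustration of what such oversight could look like, the sketch below computes flag rates per content group from an invented evaluation set. The group names and decisions are hypothetical, and a real audit would use production data and proper statistical testing.

```python
# Illustrative only: a toy audit comparing flag rates across content groups.
from collections import defaultdict

# (group, was_flagged) pairs from a hypothetical evaluation set.
records = [
    ("viewpoint_a", True), ("viewpoint_a", True), ("viewpoint_a", False),
    ("viewpoint_b", False), ("viewpoint_b", False), ("viewpoint_b", True),
]

counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
for group, flagged in records:
    counts[group][0] += int(flagged)
    counts[group][1] += 1

for group, (flagged, total) in sorted(counts.items()):
    print(f"{group}: flag rate {flagged / total:.0%} ({flagged}/{total})")
# A persistent gap between groups is a prompt to re-examine labels and guidelines,
# not proof of bias on its own.
```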
Tune in for Part 2, where we’ll discuss echo chambers, the hidden impact of algorithmic moderation, and some solutions to AI censorship.