Algorithmic Censorship – the future of censorship?

Artificial intelligence is being used more and more in the world of business, and we are nowhere near the end of that trend; really, it's just the beginning. AI is quickly starting to replace humans in tasks where it can react to data faster and more efficiently than most human brains can.

With that said, it should come as no surprise that AI is being used to aid in the censorship of social media. Platforms typically turn to AI because of its ability to move quickly and take down offensive or flagged posts before they attract heavy publicity.

In my research, I found that many popular social media platforms are applying AI in order to censor their users' posts. In this blog post, I will address Facebook's, YouTube's, and Twitter's use of AI in content censorship.

Facebook uses AI more than one may think, from the neural networks behind the facial recognition software that tags friends in photographs posted on the site, and beyond. Taking that a step further for the purpose of censorship, the company adopted AI in its Facebook Messenger feature aimed at suicide prevention. As reported by Emerj.com, Facebook product manager Vanessa Callison-Burch stated, “the AI tool was configured using data from anonymous historical Facebook posts and Facebook live videos with an underlying layer of pattern recognition to predict when some may be expressing thoughts of suicide or self harm.” From there, the system will ‘red flag’ the post or Facebook Live broadcast using a ‘trigger value’ and send it to a human in-house reviewer, who makes the final call on whether first responders will be contacted. Below is a video that explains the software Facebook has in place.
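To make the mechanics concrete, here is a minimal sketch of how a ‘trigger value’ might route a post to a human reviewer. The threshold, phrase list, and function names are my own illustrative assumptions, not Facebook's actual implementation; a real system would use a classifier trained on historical posts rather than a phrase lookup.

```python
# Hedged sketch of the "trigger value" idea: score a post, and if the score
# crosses the threshold, red-flag it for a human reviewer. All names here
# (score_post, TRIGGER_VALUE, review_queue) are illustrative assumptions.

TRIGGER_VALUE = 0.8  # assumed threshold above which a post is red-flagged

review_queue = []  # posts awaiting a human in-house reviewer

def score_post(text: str) -> float:
    """Stand-in for the trained pattern-recognition model; returns a
    self-harm risk score in [0, 1]. A real model would be trained on
    historical posts, not a hard-coded phrase list."""
    concerning = ("end it all", "no reason to go on", "can't do this anymore")
    return max(0.9 if phrase in text.lower() else 0.0 for phrase in concerning)

def process_post(post_id: str, text: str) -> None:
    score = score_post(text)
    if score >= TRIGGER_VALUE:
        # Red flag: the human reviewer makes the final call on whether to
        # contact first responders; the AI never decides that on its own.
        review_queue.append({"post": post_id, "score": score})

process_post("p1", "I just can't do this anymore")
print(review_queue)  # -> [{'post': 'p1', 'score': 0.9}]
```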

In addition to this, Facebook also monitors offensive material more closely by analyzing language, images, and video content that violates the platform's terms of service. Under a system termed Facebook Rosetta, a multi-layer model gives each identified word a recognition score that determines how offensive the message may be. This lets the platform monitor the millions of pieces of content uploaded to the site each day, and these non-human bots, with their human-like judgments, can take much of the heavy workload off the company's human moderators.
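The word-level scoring described above might look something like the following sketch. The lexicon, scores, and threshold are invented placeholders standing in for a trained multi-layer model; nothing here is Rosetta's actual code.

```python
# Illustrative sketch of per-word offensiveness scoring. The dictionary
# lookup stands in for a trained model; the words, scores, and threshold
# are placeholders invented for this example.

OFFENSIVE_LEXICON = {"slur_a": 0.95, "insult_b": 0.6}  # placeholder scores
THRESHOLD = 0.8  # assumed cutoff for flagging a message

def message_is_flagged(message: str) -> bool:
    """Score each recognized word and flag the message if any single
    word's offensiveness score crosses the threshold."""
    scores = [OFFENSIVE_LEXICON.get(word, 0.0) for word in message.lower().split()]
    return any(score >= THRESHOLD for score in scores)

print(message_is_flagged("you are a slur_a"))    # True: one word scores 0.95
print(message_is_flagged("you are a insult_b"))  # False: 0.6 is below 0.8
```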

Moving on to our next platform: YouTube has been working to take down sensitive content using AI. Although the site has terms and conditions targeting particular content, terrorism-related material tends to slip through the cracks at times. The platform is going to greater lengths through a multi-pronged approach to tackling the spread of controversial, extremist content. Using machine learning, tools are in place that flag negative ‘violent extremism videos’ and send them off for verification by a human team of reviewers. According to Emerj.com, “The AI was able to take down 83 percent of violent extremist videos in September 2017 before its team has reviewed each upload.” That is a huge step in the right direction toward a solution for social media censorship. It is difficult for social media platforms to censor every bit of content their users post, but with the use of AI, it becomes a little simpler. As recorded by Samuel Gibbs for The Guardian, a YouTube spokesperson also stated, “Over 75% of the videos we’ve removed for violent extremism over the past month were taken down before receiving a single human flag.” This artificial intelligence technology can target illicit content faster than any human because it is constantly learning and improving over time. With the volume of content posted to the site every day, 400 hours of content every minute to be exact, there is no denying it would be difficult for a group of humans to manage. Given that YouTube is among the largest platforms for video hosting services, it is important that extremist and terrorist-related content not end up on the platform, where it could ultimately disturb the site's visitors.
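Those two statistics suggest a triage pipeline: high-confidence detections come down automatically, while borderline ones wait for the human team. Here is a rough sketch under that assumption; the thresholds, field names, and scores are hypothetical, not YouTube's actual system.

```python
# Hedged sketch of an ML triage pipeline: remove high-confidence detections
# before any human flag, queue borderline ones for human review. Thresholds
# and field names are assumptions made for this example.

AUTO_REMOVE = 0.95    # assumed confidence above which removal is automatic
SEND_TO_REVIEW = 0.6  # assumed confidence above which humans double-check

def triage(upload: dict) -> str:
    score = upload["extremism_score"]  # produced by the ML flagging model
    if score >= AUTO_REMOVE:
        return "removed_before_human_flag"
    if score >= SEND_TO_REVIEW:
        return "queued_for_human_review"
    return "published"

uploads = [
    {"id": "vid1", "extremism_score": 0.97},
    {"id": "vid2", "extremism_score": 0.72},
    {"id": "vid3", "extremism_score": 0.10},
]
print([(u["id"], triage(u)) for u in uploads])
```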


Finally, we will address what Twitter is doing to improve its user experience by censoring the content its various users post. Twitter's focus in censorship is combating hate speech posted to the site. Teaming up with IBM Watson, Twitter is using Watson's best practices to quickly pick up on abuse patterns and put a stop to negative behaviors before they spread. IBM Watson's industry solutions are described as a “suite of enterprise-ready AI services, applications, and tooling.” The site plans to rein in those who deliver hate speech on Twitter by allowing only the tweeter's direct followers to see their tweets or, in extreme cases, by blocking them from the site entirely. Twitter also uses ‘algorithmic sorting’ to keep users on the site longer, by making it more likely that the content in their news feed will not offend or upset them. The sorting algorithms are based on who you follow and what you tweet. The platform also monitors how a particular user's tweets behave, and the account can get flagged if the reactions a tweet receives suggest abuse. Overall, Twitter's algorithmic censorship is key to developing a positive experience for each individual user, by identifying the accounts that need to be flagged.
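As a rough illustration of that sorting idea, here is a toy ranking function that pushes tweets predicted to upset a given viewer toward the bottom of the timeline. The scoring model and data fields are placeholders of my own, not Twitter's or Watson's.

```python
# Hedged sketch of "algorithmic sorting": rank a timeline so tweets predicted
# to upset this viewer sink to the bottom. predicted_upset is a stand-in for
# a model trained on who the viewer follows and what they tweet.

def predicted_upset(viewer: dict, tweet: dict) -> float:
    """Placeholder model: higher means more likely to offend this viewer."""
    return 0.9 if tweet["topic"] in viewer["muted_topics"] else 0.1

def sort_timeline(viewer: dict, tweets: list) -> list:
    # Ascending by predicted upset, so the least upsetting tweets come first.
    return sorted(tweets, key=lambda t: predicted_upset(viewer, t))

viewer = {"muted_topics": {"politics"}}
tweets = [{"id": 1, "topic": "politics"}, {"id": 2, "topic": "sports"}]
print(sort_timeline(viewer, tweets))  # the sports tweet ranks first
```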

There are positives and negatives to this proposed solution of algorithmic censorship that is on the rise. As I mentioned at the outset, it is a major positive that non-human bots, thanks to machine learning tools, can work more quickly and efficiently than humans can. Offensive content can be taken down or flagged by an AI tool before it blows up in the public eye. Another benefit of this type of censorship is that the platforms have tools in place to keep offensive content from being recommended to people on YouTube, and to spare you from scrolling down your news feed on Twitter or Facebook and seeing illicit content that upsets you.

Of course, there's a downside to every positive. The same speed that makes algorithms useful becomes a problem when the AI technology is not backed by humans. AI might be smart, but platforms vary in what they find appropriate, and they can at times be too cautious in their monitoring, shutting down important conversations. In one particular case, Tumblr blocked LGBT+ content after marking it as NSFW. Although this technology may promote a safer online community, it can also sweep away learning opportunities that get taken down by algorithmic censors. Here's a video of a concerned and frustrated YouTuber who is experiencing how Google has begun to censor his channel.
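A toy example makes that over-blocking risk easy to see: as the flagging threshold drops, genuinely harmful content gets caught, but so do legitimate conversations. The scores and titles below are invented for the illustration.

```python
# Toy illustration of over-blocking: with the threshold set too low, benign
# educational posts get swept up with genuinely harmful ones. The scores and
# labels are invented for this example.

posts = [
    ("harmful propaganda clip", 0.92),
    ("history lecture on extremism", 0.55),       # discusses, doesn't promote
    ("LGBT+ support group announcement", 0.40),   # wrongly scored by the model
]

for threshold in (0.9, 0.5, 0.3):
    blocked = [title for title, score in posts if score >= threshold]
    print(f"threshold={threshold}: blocked {blocked}")
# Lowering the threshold catches more bad content but starts silencing
# exactly the legitimate conversations described above.
```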
