Technology is constantly changing in the world we live in today. Social media corporations have started to use A.I. bots, rather than humans, to scan content on a daily basis. Although this seems innovative and time-saving, is it the best way to tackle offensive speech on social media platforms? Bots can find keywords, phrases, and images and instantly decide whether content follows or violates a platform's guidelines. But the technology can slip up from time to time and flag the wrong content. Throughout this post, I will look at how A.I. bots have transformed censorship on various social platforms, the effects they have had on everyday users, and the kinks that still need to be worked out.
A.I. can process information far faster than a human can. Algorithms are put in place to spot content that doesn't follow community guidelines. This has become a form of censorship that has many people questioning how reliable these bots actually are.
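At their simplest, these systems work like a lookup: scan the words in a post and check them against a list of banned terms. Here is a minimal sketch of that idea in Python; the blocklist and function names are my own invention for illustration, not any platform's real filter.

```python
import re

# Hypothetical blocklist; real platforms maintain far larger, curated lists.
BLOCKED_TERMS = {"badword", "meanword"}

def violates_guidelines(post: str) -> bool:
    """Return True if the post contains any blocked term (case-insensitive)."""
    words = re.findall(r"[a-z']+", post.lower())
    return any(word in BLOCKED_TERMS for word in words)

print(violates_guidelines("This post contains badword."))   # True
print(violates_guidelines("A perfectly ordinary post."))    # False
```

Even this toy version hints at the problem: it matches words with no sense of context, which is exactly how a statue can get flagged the same way as genuinely offensive content.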
One widely reported example involved a photo of the statue of Neptune in Bologna, Italy, which Facebook's filters rejected. Facebook elaborated that the photo was not approved because it violated their guidelines: the image was deemed offensive because it showed a fully naked body with the focus on certain body parts. Elisa Barbari, who posted the photo, quickly took to Facebook to voice her displeasure, updating her cover photo to say, "Yes to Neptune, no to Censorship." While the A.I. judged the image offensive, I believe it wasn't. The statue is a historical landmark and a work of art; it shouldn't have been treated as something offensive.
Additionally, members of the LGBTQ community noticed a lack of search results for the terms #gay and #bisexual on Twitter, which left them feeling as though they were being censored.
Twitter quickly acknowledged that this was the result of a technical error on the platform. The terms were being categorized as sensitive material, which is why searches for "gay" or "bisexual" returned few or no results. Twitter maintains several lists that mark content as explicit, and an error had allowed those words to be flagged as sensitive. Twitter confirmed that the lists were outdated, removed the terms from them, and then further evaluated and updated its algorithms.
A.I. can work on its own to flag content, but it can also spot posts that may require a human reviewer to step in. Facebook has designed A.I. to recognize repeated expressions of suicidal thoughts on its platform.
The algorithms use pattern recognition to determine which posts need to be flagged, and A.I. can also detect warning signals in both live and pre-recorded videos. Flagged posts are prioritized in order of importance so human reviewers can see which instances need attention first. From there, the situation can be assessed and a decision made about whether first responders need to be contacted. As a result, in 2017 around 100 calls were placed to emergency responders in a month's time.
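The prioritization step described above is essentially a priority queue: each flagged post gets a risk score from the model, and reviewers pull the highest-scoring posts first. This sketch shows the idea with Python's standard `heapq`; the post IDs and scores are made up, and nothing here reflects Facebook's actual internal system.

```python
import heapq

def triage(flagged_posts):
    """Yield (post_id, score) pairs from highest risk score to lowest."""
    # heapq is a min-heap, so negate scores to pop the largest first.
    heap = [(-score, post_id) for post_id, score in flagged_posts]
    heapq.heapify(heap)
    while heap:
        neg_score, post_id = heapq.heappop(heap)
        yield post_id, -neg_score

# Hypothetical flagged posts with model-assigned risk scores.
posts = [("post_a", 0.35), ("post_b", 0.92), ("post_c", 0.6)]
for post_id, score in triage(posts):
    print(post_id, score)  # post_b first, then post_c, then post_a
```

The design choice matters for the suicide-prevention use case: when minutes count, a reviewer should never be working through a chronological backlog while the most urgent post waits at the bottom.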
A.I. has also been found to be biased because of the way it learns to spot content. Bots are essentially trained by watching humans complete tasks and seeing the results of those tasks. The human's role in training is what introduces the bias: the bot absorbs what it learns from the human and repeats those actions accordingly. How a bot reacts to content is based on how it was trained and on the data it can collect. This raises the question of how effectively these bots work if they are biased toward a certain type of content.
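This inheritance of bias can be shown with a toy example: a "bot" that learns its flagging decisions purely from human-labeled examples will reproduce whatever patterns those labels encode, including over-zealous ones. The training data below is entirely invented for illustration.

```python
from collections import Counter

# Hypothetical human-labeled training data: (post text, was it flagged?).
# Note the humans flagged harmless statue photos; the bot never questions that.
training = [
    ("statue photo", True),
    ("statue close-up", True),
    ("vacation photo", False),
    ("food photo", False),
]

# "Training": count how often each word appears in human-flagged posts.
flag_counts = Counter()
for text, flagged in training:
    if flagged:
        flag_counts.update(text.split())

def bot_flags(post: str) -> bool:
    """Flag any post containing a word the human labelers ever flagged."""
    return any(flag_counts[word] > 0 for word in post.split())

print(bot_flags("statue of Neptune"))  # True: bias inherited from the labels
print(bot_flags("food review"))        # False
```

The bot isn't malicious; it is faithfully repeating its trainers. That is the crux of the bias problem: the algorithm can only be as fair as the examples it learned from.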
I believe A.I. is working in some instances on social media, but it still has a long way to go and improvements to make. A.I. is a far more efficient way to scan and evaluate content on a regular basis, but the algorithms need modifications so they flag the content that should actually be censored. They also need to be constantly updated on the latest trends and news so they can tell the right things to flag from the wrong ones. Keeping human reviewers in the loop is another crucial piece that still needs to be maintained. There are still slip-ups from time to time, but A.I. is constantly improving and evolving and will continue to be a regularly used tool for social media monitoring in the years to come.