Policing Content: Whose Job Is It Anyway?

In recent days, with major content and data breaches at Facebook, the US Senate has taken a tough view of the major social media companies. The question is whether enough is being done to regulate and police content on social media platforms. With more than 2 billion active users between them, the key players in this field are Facebook, Twitter, Amazon, Google, Instagram and a few others. It is up to these large companies to spend more and focus harder on content evaluation so that disruptive, harmful and criminal content stays away from their communities. This content can include things like hate speech against minorities, nudity, child pornography, animal poaching, and the sale and promotion of drugs and non-prescription medication. All of this falls into the category of content that is a ‘no no’ with most communities.

We now have very effective AI tools that can monitor, track and remove harmful and damaging content of this kind. Content that threatens harm to others or promotes criminal activity can be instantly removed by these AI tools, and the user blocked or their account disabled. Beheading videos and snuff videos have also been removed by AI tools. Facebook has pledged to add another 20,000 content evaluators around the world who will cull and disable this type of harmful content and propaganda. They have also decided to get more local on this issue: they hired freelance content evaluators in Myanmar before the country's general elections to counter hate speech, false news and harmful propaganda. Facebook has always maintained that every country or region has its own social, economic and linguistic quirks, so the locals of that country are best placed to remove harmful content. After all, hate speech for one person could be just free speech for another. They are sorting out these issues by going local. In keeping with their policy of improving and better defining their community standards, Facebook has come out with a new set of guidelines on what counts as harmful content.

They have also improved their appeal system. Now, if your content is removed, you can appeal and ask for the reason for its removal. This is a good thing, as it gives writers, short filmmakers, web series makers and bloggers enough air to breathe and be creative with their stories. These guidelines act as their own censorship board, and as a creator of content, you have the right to appeal.

Amazon has similar policies to keep its community safe and feeling protected. While it allows some sexual wellness products to be sold from its stores, it removes material that describes sexual acts explicitly or shows nude pictures or child pornography. Content threatening someone else, or of a libellous nature, is a ‘no no’, and members of the community have the option to report content if they feel it is against the website's guidelines. In that case, the content will be re-examined and removed if it is found to violate them. Use of abusive language and profanity is also a ‘no no’, although you can get away with a few swear words like f**k, b***h, b*****d, b***s, d**k, p***k, a*****e etc. I have used them extensively in my travel stories and no one has reported them yet, so I feel it is OK with my current community. See, that's the thing. There are so many versions and visions of what the right community standard is, and that's where the confusion starts, as opinions on this are diverse and shaped by cultural biases.

On the other hand, Google is strict about account hijacking, impersonation, and posting other people's private information like bank details, phone numbers or credit card numbers. It will, however, let pass some types of nude images that have artistic value. It will penalise you if you distribute pornography or use its services for child trafficking or other human trafficking activity. It could also report you to the police and permanently ban you from its services. Action will also be taken against you if you spam, threaten or bully other members of the community. In short, what is generally unacceptable social behaviour is also unacceptable on social media platforms. The real issue is the policing and the actual removal of all the content that is harmful to others in the community. A post or a piece of content can be graphically pornographic to me but merely sensual in someone else's eyes.

AI tools are getting smarter at this and are being used to rid the internet of negativity, hate, crime, bullying, harm and fake propaganda. Efforts are being made to set higher standards and clearer guidelines so that people can handle issues of harmful content and clean up their own communities, a kind of self-censorship of sorts. On Facebook, if someone reports your content, you can send a direct message to that person and, with their help, resolve the issue. I think we need more of that. Content must be removed by the communities themselves if it is deemed harmful to others. The more self-policing, the safer the neighbourhood.
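The automated culling described above can be sketched, in deliberately toy form, as a keyword filter that flags posts for human review. To be clear, this is not any platform's actual system: real moderation pipelines use machine-learning classifiers, image hashing and context-aware models. The blocklist terms and function names below are purely illustrative assumptions.

```python
# Toy content-flagging sketch (illustrative only, not a real platform's system).
# Posts matching a blocklist are queued for human review rather than auto-deleted,
# mirroring the "machines flag, humans decide" workflow described above.

BLOCKED_TERMS = {"hate speech", "drugs for sale"}  # hypothetical blocklist

def flag_post(text: str) -> bool:
    """Return True if the post should be queued for human review."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def moderate(posts: list[str]) -> list[str]:
    """Return only the posts that pass the automated filter."""
    return [p for p in posts if not flag_post(p)]
```

Even in this crude form, the design choice is visible: the filter errs on the side of flagging for review, and the final call rests with a human evaluator, which is exactly where local, culturally aware moderators matter.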

But exceptions should be made for creative thinkers, writers, artists and filmmakers. If the content is showcased tastefully, promotes something meaningful, and serves the betterment of the community as a whole, it should be allowed and viewed with leniency, so that free thought and creative expression can flourish within communities and new ideas always have a place to germinate.
