Each social media platform has acceptable use policies or community guidelines which prohibit objectionable content, including hate speech. These rules attempt to regulate such content by authorising its removal. Yet because social media platforms rely on a “peer review system”, in which other users report or flag objectionable content, removal halts the perpetuation of such content rather than preventing its publication. When content is reported, it is reviewed by the platform
and, if necessary, removed. This approach is proving inadequate: platforms cannot (nor should they) actively analyse the postings of every user. The delay between the publication of inappropriate content, its reporting by another user, the review and the ultimate deletion means that the harm has often been done before the content is removed. Consider, for example, Donald Trump’s infamous Facebook video in which he proposed barring Muslims from entering the USA, which remained on Facebook despite its violation of Facebook’s user policies.
The issue here, however, is not one of censorship. Revenue on social media is derived from advertising, not from account registration, so site traffic, and the ability to drive traffic, matter both to advertisers and to the platforms’ revenue. Technology enables companies to target specific demographics with adverts that “follow” users based on their information, preferences and search strings, with the result that brands inadvertently appear alongside questionable content. Facebook and YouTube have recently been criticised for failing to prevent campaigns by, for example, Nissan, L’Oréal and Sainsbury’s from appearing alongside videos amounting to hate speech. Apart from the reputational risk to brands, this has the unintended effect of the platform deriving revenue from, and assisting, extremists in their publication of hate speech.
While traditional broadcasting is universally subject to strict regulation, social media platforms are not, nor arguably can or should they be. User-generated content changes the rules of the game, and the early debates on regulation have moved on. Yet a balance is required. The UK proposals suggest that social media platforms should be held to a high public interest and safety standard, and should attract liability for failing to remove content propagating hate speech expeditiously. The EU’s Code of Conduct requires social media companies to review complaints within 24 hours and remove content where necessary, although there is no penalty for failure to do so. The German Justice Ministry has proposed that social media companies publish quarterly reports on complaints, with fines of up to €50 million for companies that fail to comply and fines of up to €5 million for the employees personally tasked with handling complaints who fail to do so.
Although far from passing constitutional muster, the Draft Prevention and Combating of Hate Crimes and Hate Speech Bill aims to prevent hate speech in South Africa, criminalising the intentional communication (including electronic communication) of hatred, threats, abuse or incitement to harm or violence on the basis of 17 protected grounds. The Bill’s overbroad ambit, which extends to the “making available” of such communication, is sufficient to attract liability for social media platforms (and every other conceivable communication network and provider), but it will require considerable refinement to become useful legislation. Notwithstanding the challenges of defining hate speech and balancing constitutional rights and freedoms, growing calls for platform liability articulate the responsibility of social media platforms to protect users from such content. At the very least, these reforms suggest that the platforms should certainly not be profiting from a failure to do so.