Convergence and New Media: The profitable business of hate speech

While social media is a common method of communication, it is also, for good reason, largely unregulated. The negative consequence, however, is that sensitive, illegal or objectionable content is also posted on such platforms, which have become unwitting vehicles for the dissemination of abuse and propaganda. In May 2017 the UK House of Commons Home Affairs Committee published a report entitled “Hate crime: abuse, hate and extremism online”, which criticised social media giants YouTube, Facebook and Twitter for their failure to address hate speech appropriately. With the growing monetisation of social media through advertising revenue, there is also potential for both the platforms and extremists to profit from the publication of hate speech online.

17 May 2017 | Technology, Media & Telecommunications Alert

Each social media platform has acceptable use policies or community guidelines that prohibit objectionable content, including hate speech. These rules attempt to regulate such content by authorising its removal. Yet, because social media platforms rely on a “peer review system” in which other users report or flag objectionable content, removal merely halts the perpetuation of that content rather than preventing its publication. When such content is reported, it is reviewed by the platform and, if necessary, removed. This approach is proving inadequate: the platform cannot (nor should it) actively analyse the postings of all its users, and the delay between publication of the inappropriate content, reporting by another user, review and ultimate deletion means that the harm has often been done before the content is removed. Consider, for example, Donald Trump’s infamous Facebook video proposing that Muslims be barred from entering the USA, which remained on Facebook despite violating Facebook’s user policies.

The issue here, however, is not one of censorship. Revenue on social media is derived from advertising, not from account registration, so site traffic, and driving that traffic, matters both to advertisers and to social media companies. Technology enables companies to target specific demographics with adverts that “follow” users based on their information, preferences and search strings, with the result that brands can inadvertently appear alongside questionable content. Facebook and YouTube have recently been criticised for failing to prevent campaigns by, for example, Nissan, L’Oréal and Sainsbury’s from appearing alongside videos amounting to hate speech. Apart from the reputational risk to brands, this has the unintended effect of the platform deriving revenue from, and assisting extremists in, the publication of hate speech.

While traditional broadcasting is universally subject to strict regulation, social media platforms are not, nor arguably can or should they be. User-generated content changes the rules of the game, and the debate has moved on from its early days. Yet a balance is required. The UK proposals suggest that social media platforms need to meet a high public interest and safety standard and should attract liability for failing to remove content propagating hate speech expeditiously. The EU’s Code of Conduct requires social media companies to review complaints within 24 hours and remove content where necessary, although there is no penalty for failure to do so. The German Justice Ministry has proposed that social media companies publish quarterly reports on complaints, with fines of up to €50 million for companies that fail to comply and fines of up to €5 million for the employees personally tasked with handling complaints who fail to do so.

In South Africa, the Draft Prevention and Combating of Hate Crimes and Hate Speech Bill, although far from passing constitutional muster, aims to prevent hate speech and criminalises the intentional communication (including electronic communication) of hatred, threats, abuse or incitement to harm or violence based on 17 protected grounds. The Bill’s overbroad ambit, which extends to the “making available” of such communication, is sufficient to attract liability for social media platforms (and every other conceivable communication network and provider), but it will require considerable refinement to become useful legislation. Notwithstanding the challenges of defining hate speech and balancing constitutional rights and freedoms, growing calls for the liability of social media platforms articulate their responsibility to protect users from such content. At the very least, these reforms suggest that platforms should certainly not be profiting from a failure to do so.

Copyright © 2024 Cliffe Dekker Hofmeyr. All rights reserved.