Content moderation offers little actual safety on Big Social Media
""Meta's announcement to end third-party fact-checking raises significant questions about the essence of content moderation: Is it a protective measure or a censoring tool?""
""The very structure of content moderation seems more like a facade—an obligatory system that distracts from the profits and motivations behind social media platforms.""
In a discussion led by Jess Brough, the ineffectiveness of content moderation on social media platforms comes under scrutiny, particularly in light of Meta's decision to end third-party fact-checking. The move has reignited debate over whether content moderation genuinely protects users or merely functions as censorship. Brough argues that the existing system fails to safeguard users while also distracting from the profit-driven motivations of social media companies, and that the legal frameworks underpinning content moderation are tied to platform profits rather than user safety.
Read at New Scientist