An investigative group reported that Facebook approved explicit and violent ads targeting the Rohingya, raising alarm over the social media platform’s negligence in moderating content.
Global Witness said it submitted eight advertisements to Facebook, all of which met Facebook’s own criteria for hate speech. Despite their disturbing and offensive nature, the group reported, the platform approved the ads for publication.
Meta, Facebook’s parent company, defines hate speech as a “direct attack against people—rather than concepts or institutions—on the basis of what we call protected characteristics: race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity, and serious disease.”
In its 2021 Fourth Quarter Community Standards Enforcement Report, Meta said the platform had been taking action on harmful content more quickly through artificial intelligence technologies.
“Our product design teams play an important role in our effort to reduce harmful content. By carefully designing our social media products, we can promote greater safety while providing people with context, control, and the room to share their voice,” wrote Meta Vice President of Integrity Guy Rosen.
In 2021, Rohingya refugees sued Facebook, alleging that the platform’s failure to curb the spread of hate speech promoted genocide against the Rohingya.
Global Witness urged Facebook to make its integrity and security systems public to ensure that people in all countries are protected from online hate speech and violence.
© Fourth Estate. All Rights Reserved.
This material may not be published, broadcast, rewritten or redistributed.