On Tuesday, Facebook announced that users who violate its most serious policies will be barred from livestreaming for a set period – such as 30 days – starting from their first offense, as part of its effort to limit the use of the feature to cause harm or spread hate.
Users who break specific rules will be banned from using Facebook Live, and the new policy will cover rules on dangerous individuals and organizations.
Guy Rosen, Facebook's Vice President of Integrity, said in a press release that the company intends to extend these restrictions to other areas over the coming weeks, beginning by preventing the same people from creating ads on Facebook.
The announcement follows the March 2019 mosque attacks in Christchurch, New Zealand, in which 51 people were killed. The shooter, an alleged white supremacist from Australia, used Facebook Live to stream the attack, and the video was re-shared widely around the world.
It remains unclear whether these new measures would actually have prevented the shooter from livestreaming, and Facebook offered no comment on the question.
Facebook’s press release came just hours before New Zealand Prime Minister Jacinda Ardern was due to meet French President Emmanuel Macron to discuss efforts to stop the spread of extremist content online.
Facebook also announced that it will partner with three universities – the University of California, Berkeley, the University of Maryland, and Cornell University – on research to improve technology for analyzing images and videos of terrorist content, detecting manipulated media, and distinguishing people who unwittingly share such content from those who deliberately manipulate videos.
Facebook introduced these policy changes as its role in the spread of hate speech has come under intense scrutiny. Within weeks of the New Zealand attacks, John Edwards, the country's Privacy Commissioner, accused Facebook in since-deleted tweets of being morally bankrupt pathological liars who facilitate the undermining of democratic institutions.