FACEBOOK IS HIRING A SMALL ARMY TO BLOCK MURDER AND SUICIDE VIDEOS

This eye-opening article from Vanity Fair examines some hefty social issues Facebook will have to tackle as it cooks up new ways to garner user-generated content. Can it hire a large enough army to effectively remove the offending content? The bigger question may be: are Facebook's mental health benefits adequate to support the employees hired to review such a large volume of violent content?

Can thousands of content moderators solve Facebook’s live-video problem?

This article has been re-shared from its original source, VanityFair.com.

A year after announcing the launch of its live-streaming feature, Facebook has offered a tacit admission that Facebook Live has a problem. While publishers continue to use it to broadcast original content, Facebook Live has also recently been used to broadcast suicides and violent crimes. Last month, a man in Cleveland led police on a nationwide manhunt after uploading to Facebook a video of the killing of an elderly man.

On Wednesday, Facebook C.E.O. Mark Zuckerberg announced a solution of sorts: hiring 3,000 more people to sift through videos and keep the worst ones off Facebook. “Over the last few weeks, we’ve seen people hurting themselves and others on Facebook—either live or in video posted later,” Zuckerberg wrote in an online post. “It’s heartbreaking, and I’ve been reflecting on how we can do better for our community.” The hires will expand Facebook’s community operations team, which currently has 4,500 people, by two-thirds, and the company plans to build new tools so Facebook users can help flag and remove offensive videos. Zuckerberg’s announcement comes just ahead of the company’s first-quarter earnings call, during which investors will almost certainly ask Facebook officials what the company is doing to eliminate bad content from its platform and prevent future P.R. disasters.

Policing violent or hateful content has become a central concern for the tech industry as the Internet ecosystem evolves and consolidates, reducing the number of primary Internet gatekeepers to a handful of giant public companies. Twitter has begun using artificial intelligence to identify accounts engaged in abusive behavior, and Google parent company Alphabet recently released an A.P.I. that uses machine learning to determine whether online comments constitute harassment. In the meantime, however, an estimated 100,000 content moderators like the ones Facebook plans to hire are already working for Web sites and apps around the world. As Wired reports, most make very little money and often receive no benefits for watching and removing brutal videos and disturbing images from the Web. Earlier this year, two Microsoft employees sued the company, alleging that Microsoft was negligent in providing mental-health care to employees on its Online Safety Team, which reviewed and removed graphic content.