After a gunman live-streamed his attack on New Zealand mosques on Facebook back in March, the social media giant is taking serious steps to prevent such incidents in the future.
Facebook’s automatic detection systems were supposed to flag the terrorist content and stop the live stream immediately, but they failed to do so. Facebook’s public policy director Neil Potts explained that the video’s first-person perspective was the reason it did not trigger the company’s AI systems.
“This was a first-person shooter video, one where we have someone using a GoPro helmet with a camera focused from their perspective of shooting. It was a type of video we had not seen before,” he told British lawmakers.
To gather training data for detecting similar videos, Facebook will supply free body cameras to the U.K. Metropolitan Police. In return, the Met will provide Facebook with bodycam footage captured during its firearms training sessions, which will serve as training data. “From October this year, the Met will provide Facebook with video footage of training by its Firearms Command, from the perspective of the officers, to help the company develop technology that identifies when someone is live streaming footage of a firearms attack,” reads a blog post from the UK Metropolitan Police.
“The technology Facebook is seeking to create could help identify firearms attacks in their early stages and potentially assist police across the world in their response to such incidents,” says Neil Basu, the Met’s Assistant Commissioner for Specialist Operations.
Facebook also noted that the footage will help its systems avoid mistakenly flagging scenes from movies and video games as real-life violent content. The social media giant has reached out to U.S. law enforcement agencies as well, but no U.S. partnership has been confirmed so far.
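Neither Facebook nor the Met has published details of the detection model itself, but the basic idea of using labelled footage as training data can be sketched. Below is a minimal, purely illustrative example in Python with PyTorch and torchvision: a frame-level binary classifier fine-tuned to separate real first-person firearms footage from game or film footage. The folder layout, label names, and model choice are all assumptions for illustration, not anything Facebook has described.

```python
# Illustrative sketch only: a simple frame-level classifier,
# fine-tuned from an ImageNet-pretrained ResNet, that labels
# extracted video frames as "real" bodycam footage vs. "fictional"
# game/film footage. Paths and layout are hypothetical.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical layout: frames/train/real/ holds frames extracted from
# bodycam training footage; frames/train/fictional/ holds frames from
# games and films (the false positives the article mentions).
train_data = datasets.ImageFolder("frames/train", transform=transform)
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # real vs. fictional

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for frames, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(frames), labels)
        loss.backward()
        optimizer.step()
```

A per-frame classifier like this is the simplest possible proxy; a production system for live streams would presumably also exploit temporal and audio signals rather than judging individual frames in isolation.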
from Beebom https://ift.tt/2V8OWTC