
In response to the recent Christchurch terrorist attack, Facebook and other social media outlets are taking steps to prevent such shootings from being broadcast through their live-streaming features.
Facebook is getting the Metropolitan Police involved by using officers' first-person body cameras to train its AI algorithm to detect terrorist violence. The cameras will be worn in the force's firearms training centers, giving Facebook's artificial intelligence the data it needs to detect violent behavior and alert officers more quickly.
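As a rough illustration of what "training the AI" could involve, here is a minimal, hypothetical sketch in Python using PyTorch. Facebook has not published its system, so the model choice, labels, and data below are all assumptions; the point is only that a pretrained image model can be fine-tuned on newly collected first-person footage to flag violent frames.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical sketch: a binary "violent incident" frame classifier,
# fine-tuned from a pretrained image backbone. The real system's
# architecture and labels are not public; this only illustrates the
# general approach of adapting an existing model with new footage.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # classes: benign, violent

criterion = nn.CrossEntropyLoss()
# Only the new classification head is updated here; the pretrained
# backbone's weights are left as-is by this optimizer.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)

def train_step(frames: torch.Tensor, labels: torch.Tensor) -> float:
    """Run one training update on a batch of 224x224 RGB frames."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(frames), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Placeholder random batch standing in for labeled body-camera frames.
frames = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
print(train_step(frames, labels))
```

A production system would of course operate on live video streams rather than single frames, but the core idea, improving detection by feeding the model better first-person training data, is the same.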
The Christchurch terror attack was the turning point at which social media outlets decided their systems needed to change: 51 people were killed, and the footage was viewed over 4,000 times before being removed. The delay occurred because the AI had not seen enough first-person footage of violent shootings to detect such content quickly and reliably. With this training in place, Facebook should be able to remove the videos automatically and help officers locate and respond to attacks. It should also diminish the glorification of such acts, since shooters often want to be recognized and to have people talk about their videos.
Overall, Facebook, Instagram, and other social media sites are taking steps in the right direction to improve their safety features, and the initiative will begin in October. Personally, I think this is a great idea because it will no longer allow viewers to amplify the streams, regardless of how horrifying they find them. I chose this article because I think it is relevant to the current state of our country and to the role the media plays.
Violent acts certainly should be kept off the internet; I'm glad to see platforms finally taking the initiative. What other technologies do you think these platforms might invest in?