Reasons Behind the Failure of Facebook's Artificial Intelligence


New Zealand is still grieving the deaths of the fifty people killed in a live-streamed mass shooting.

A man in New Zealand live-streamed his attack on Facebook. On Friday, he attacked one mosque and then another shortly afterwards while prayers were being held inside. In the aftermath, Facebook, Google, and Twitter found themselves scrambling to stop the footage from spreading online, but as soon as one copy was removed, another popped up elsewhere.

This incident raises many questions, from what we share on social media platforms to whether artificial intelligence can be trusted to flag such videos. The first user report came 12 minutes after the live stream had ended, roughly half an hour after the attack.

The social network has been heavily criticized for its AI's failure to detect the live-streamed shootings, which have been described as an act of "terrorism".

Guy Rosen, vice-president of product management, said that for AI to recognize something, it has to be trained: the system must be fed examples of, say, nudity or terrorism so that it can learn what they look like. But because events like these are thankfully very rare, gathering enough training material is difficult.
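The training problem Rosen describes is, in machine-learning terms, extreme class imbalance: when violating content is vanishingly rare, a system that flags nothing at all still looks highly accurate. A minimal, hypothetical illustration (the numbers are made up and this is not Facebook's system):

```python
# Toy illustration of class imbalance: with very few positive examples,
# a "classifier" that never flags anything scores near-perfect accuracy
# while catching zero violations. All figures here are invented.

items = list(range(10_000))
labels = [1 if i < 5 else 0 for i in items]  # 5 violating items out of 10,000

predictions = [0 for _ in items]  # the do-nothing classifier: never flag

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
caught = sum(p == 1 and y == 1 for p, y in zip(predictions, labels))

print(f"accuracy: {accuracy:.2%}")  # 99.95%
print(f"violations caught: {caught}")  # 0
```

This is why accuracy alone is a poor yardstick for rare-event detection, and why such systems need many genuine positive examples to learn from.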

As Rosen put it, AI is an integral part of the fight against terrorist content on social media platforms, and while it continues to improve, it is never going to be perfect.

Since the shooting in Christchurch, Facebook, Twitter, and Google have had to answer multiple questions about how to stop the spread of such videos. Initially the video had reached fewer than 200 users, but it had reached an audience of approximately 4,000 before it was taken down.

Facebook says it aims to improve its matching technology, figure out how to get user reports faster, and work further with the Global Internet Forum to Counter Terrorism.
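"Matching technology" generally means fingerprinting known violating media so that re-uploads can be caught automatically. Exact file hashes break as soon as a video is re-encoded, so systems typically use perceptual hashes that tolerate small pixel changes. A toy sketch of the idea, assuming a tiny grayscale "frame" as input (the hash, threshold, and data are illustrative, not Facebook's actual implementation):

```python
# Hypothetical sketch of perceptual-hash matching for re-uploaded media.
# A frame hashes to one bit per pixel (above/below the mean brightness);
# near-duplicates are detected by Hamming distance, so a re-encoded copy
# with slightly shifted pixel values still matches.

def average_hash(frame):
    """Toy perceptual hash: 1 bit per pixel, set if above the frame's mean."""
    flat = [p for row in frame for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def is_known_copy(frame, banned_hashes, max_distance=2):
    h = average_hash(frame)
    return any(hamming(h, b) <= max_distance for b in banned_hashes)

# A flagged "frame" (grayscale pixel values) and two candidate uploads.
original = [[10, 200], [220, 30]]
reencoded = [[12, 198], [221, 28]]   # small pixel drift from re-encoding
unrelated = [[200, 10], [30, 220]]   # a different image entirely

banned = [average_hash(original)]
print(is_known_copy(reencoded, banned))  # True: survives re-encoding
print(is_known_copy(unrelated, banned))  # False
```

In practice, adversarial edits such as cropping, mirroring, or re-recording a screen defeat simple hashes, which is part of why copies of the video kept resurfacing.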
