Facebook to Use AI in Content Moderating
Earlier this week, Facebook agreed to pay $80 million to content moderators whose jobs entailed reviewing harmful and offensive material and removing it from the platform. Many staff members, exposed to disturbing scenes of rape, mass killings, racism, and executions, have reported suffering trauma, including post-traumatic stress disorder.
Facebook has now said it will use artificial intelligence, rather than human moderators, to detect posts that violate its policies. So how will this shift affect human workers in the future? Can humans and artificial intelligence work together in harmonious balance?
Professor Deborah Richards from the Department of Computing at Macquarie University joined Sean on The Daily to discuss. She has participated in a number of projects on cyber security, ethics, and artificial intelligence for human behaviour change.