Researchers are developing artificial intelligence that could allow the advisory ‘quarantining’ of hate speech, similar to malware filters, giving users control over their exposure to hateful content.
Hate speech spread via social media could be tackled using the same ‘quarantine’ approach used to combat malicious software.
The definition of hate speech varies between nations, laws and platforms, and simply blocking keywords has proven ineffective: graphic descriptions of violence, for example, need not contain slurs or overtly hateful language.
Consequently, hate speech is difficult to detect automatically. It often has to be reported only after the intended ‘psychological harm’ has been inflicted, and large numbers of human moderators are then required to judge each case.
An engineer and a linguist from the University of Cambridge, UK, have published a proposal in Ethics and Information Technology that harnesses cybersecurity techniques to give control to those targeted, without resorting to censorship.
Cambridge language and machine learning experts are currently using databases of threats and violent language to build algorithms that score the likelihood that an online message contains a form of hate speech.
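The quarantine idea can be illustrated with a short sketch. Everything here is hypothetical: the scoring function below is a trivial keyword heuristic standing in for the trained algorithms the researchers describe, and the names (`hate_speech_score`, `deliver`, `FLAGGED_TERMS`) are illustrative, not part of any published system. The key point is the mechanism: a message scoring above a threshold is held back with a warning, like a malware alert, rather than silently deleted.

```python
# Sketch of advisory quarantining: score a message, and above a
# threshold hold it back with a severity warning instead of blocking it.
# The scoring model here is a toy stand-in, NOT the Cambridge algorithm.

from dataclasses import dataclass

# Hypothetical word list standing in for a model trained on databases
# of threats and violent language.
FLAGGED_TERMS = {"threat", "attack", "hate"}

def hate_speech_score(message: str) -> float:
    """Return a 0..1 likelihood-style score (toy keyword heuristic)."""
    words = [w.strip(".,!?") for w in message.lower().split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in FLAGGED_TERMS)
    return min(1.0, 5 * hits / len(words))

@dataclass
class Delivery:
    quarantined: bool
    score: float
    preview: str  # what the recipient sees before deciding

def deliver(message: str, sender: str, threshold: float = 0.5) -> Delivery:
    """Quarantine rather than censor: above the threshold the recipient
    sees only the sender and a severity score, then chooses whether
    to read or discard the message."""
    score = hate_speech_score(message)
    if score >= threshold:
        preview = f"Quarantined message from {sender} (severity {score:.2f})"
        return Delivery(True, score, preview)
    return Delivery(False, score, message)
```

As with antivirus software, the decision stays with the user: a quarantined message is not destroyed, only held until the recipient opts in.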
“Hate speech is a form of intentional online harm, like malware, and can therefore be handled by means of quarantining,” said co-author and linguist Dr Stefanie Ullmann. “In fact, a lot of hate speech is actually generated by software such as Twitter bots.”
“Companies like Facebook, Twitter and Google generally respond reactively to hate speech,” said co-author and engineer Dr Marcus Tomalin. “This may be okay for those who don’t encounter it often. For others it’s too little, too late.”
“Many women and people from minority groups in the public eye receive anonymous hate speech for daring to have an online presence. We are seeing this deter people from entering or continuing in public life, often those from groups in need of greater representation,” he said.