The terror attack in Christchurch, New Zealand, that killed 50 Muslim worshipers joins a growing list of attacks in which the internet was used to spread the killer’s propaganda.
Had social networks and internet service providers adequately monitored the suspect’s postings, an arrest might have been made and the massacre averted. For example, if 8chan or Twitter had an adequate number of screeners or algorithms capable of identifying that his manifesto explicitly stated his plan to perpetrate terror, either of them could have immediately notified authorities. 8chan said it is responding to law enforcement and always complies with US law. Twitter encouraged users not to share videos of the attack, and asked users to report any videos or images from the attack.
The internet, while so important to modern life, can also be used to help perpetrate the greatest evils. Congress can help address the problem, and it’s time for lawmakers to act.
While the First Amendment protects free speech, true threats — like terrorist statements that express serious intent to commit violent acts against individuals and groups — are not constitutionally protected.
However, social platforms and ISPs have little legal incentive to regulate such communications on their platforms, since they very rarely bear responsibility for what users post or say. That’s because Section 230 of the Communications Decency Act of 1996 says companies that provide internet services, as well as platforms and websites, are immune from all but a few types of lawsuits, such as those involving child pornography and human trafficking.
Congress should draft a new law establishing standards that require social media companies and ISPs to take down terrorist content they’re aware of. Congress already has authority to act against organizations that knowingly provide resources to foreign terrorist organizations. A new law should prohibit similar coordination between digital companies and domestic terrorist groups. Passing detailed regulations would encourage companies to improve their monitoring technology and algorithms so that they are at least capable of recognizing true threats and incitements posted on their sites. Moreover, legislators should add penalties for companies that fail to do so; doing so would be in keeping with Supreme Court decisions on the First Amendment.
Digital companies capable of editing out nudity and copyright infringements should be required to do the same for terrorists’ threats. A year ago, Congress passed the Fight Online Sex Trafficking Act, thereby ending immunity from liability for web services that sex traffickers use to do business. So, too, should legislators enact a provision that makes social media companies and ISPs liable if they retain terrorists’ threatening posts.
This would bring US laws more in line with those of other democracies, such as France, where internet service providers are required to remove terrorist speech upon adequate notice from government authorities.
Federal standards need to be set in this area. Social media companies would then have a stronger incentive to police their websites for terrorist content and to act more quickly when they become aware of it.