Censorship could push online extremists into the darker recesses of the web.
A British government Task Force on Tackling Radicalisation and Extremism, set up in the wake of the killing of Drummer Lee Rigby, has just released a series of recommendations that it hopes will help counter the tide of extremism in the UK.
Amongst these recommendations are plans to restrict access to extremist content online at the ISP level. The report states:
“Extremist propaganda is too widely available, particularly online, and has a direct impact on radicalising individuals…”
While noting that 18,000 items of online terrorist propaganda have already been removed, the report argues that the government can, and should, do more.
But existing research into online radicalisation seems to contradict the bold assertions made in the government report.
According to a report by the International Centre for the Study of Radicalisation (ICSR), “…the systematic, large-scale deployment of negative measures would be impractical, and even counterproductive: it would generate significant (and primarily political) costs whilst contributing little to the fight against violent extremism.”
Similarly, a report by the US-based Bipartisan Policy Center states that “…the filtering of Internet content is impractical in a free and open society…”.
Filtering internet traffic, whether by domain name, full web page address or keyword, tends to over-filter, catching websites and content that are not of concern. Over-blocking, in turn, can result in legal challenges and create political controversy if certain communities or groups feel they are being unfairly targeted. Filtering also slows internet speeds, and becomes highly expensive if human review teams are employed to assess affected content.
Furthermore, either a government agency or a team within each ISP has to maintain a list of the sites and content deemed worthy of being blocked.
In either case, there will be demand for such a list, or the criteria behind it, to be made transparent, and that can lead to a public debate about censored content. Such a debate, in turn, could give unwanted exposure to otherwise fringe organisations or sites, which gain kudos from their banned status. It could also push online extremists into the darker recesses of the web, where monitoring and countering online extremism is much more difficult.
Proponents of internet censorship also assume that there is a specific corner of the internet occupied by extremists and, thus, a specific number of static extremist websites that need to be shut down. In truth, extremists use social media, blogs, instant-messaging applications and video-sharing sites to share their messages.
These web tools also offer more interactivity and are more effective as means through which extremist content can be shared. Policing or removing content that is embedded in privately owned platforms, such as Twitter, Facebook or Reddit, is highly complex and labour-intensive to the point of being practically futile, since these platforms rely on and encourage user-generated content.
The presence of extremist content online offers an opportunity to engage with those who may be influenced by extremist narratives and to promote counter-narratives. It also offers the opportunity to monitor conversational trends and tactics which, in turn, inform counter-extremism efforts. In a liberal society that cherishes freedom of speech, restricting demand rather than cutting supply is the only way forward; the alternative mimics totalitarian states that stifle all dissent.
Moving forward, we, at Quilliam, hope the government places more emphasis on supporting the work of counter-extremism practitioners rather than censoring the internet. We also hope that Britain, as a society, is not afraid to challenge and deconstruct extremist narratives and prefers to engage in the debate rather than shut it down.