(MENAFN- The Conversation) In the wake of an explosion in London on September 15, public attention has once again turned to how extremists use the internet.
Racists, extremists and many hate groups have used the internet for decades, shifting from text-only discussion forums to elaborate websites and social media accounts.
Our research has examined various online communities populated by radical and extremist groups. And two of us were on the team that created an open-source database helping scholars better understand the criminal behaviors of jihadi, far-right and far-left extremists. Analysis of that data demonstrates that an online presence appears to help hate groups stay active over time. (One of the oldest far-right group forums, Stormfront, has been online in some form since the 1990s.)
But recent efforts to deny these groups online platforms will not kick hate groups, nor hate speech, off the web. In fact, some scholars theorize that attempts to shut down hate speech online may cause a backlash, worsening the problem and making hate groups more attractive to marginalized and stigmatized people, groups and movements.
Fighting an impossible battle
Like regular individuals and corporations, extremist groups use social media and the internet. But there have been few concerted efforts to eliminate their presence from online spaces. For years, Cloudflare, a company that provides technical services and protection against certain cyberattacks, has maintained service for far-right groups and jihadists, withstanding public pressure to drop them.
The company refused to act until a few days after the violence in Charlottesville. As outrage built around the events and groups involved, pressure mounted on companies providing internet services to the Daily Stormer, a major hate site whose members helped organize the demonstrations that turned fatal. As other service providers dropped the site, Cloudflare's CEO, Matthew Prince, announced that he 'woke up … in a bad mood and decided to kick them off the internet.'
It may seem like a good first step to limit hate groups' online activity – thereby keeping potential supporters from learning about them and deciding to participate. And a company's decision may demonstrate to other customers its willingness to take hard stances against hate speech.
But that decision can cause problems: Prince criticized his own role, saying no one should have the power 'to decide who should and shouldn't be able to be online.' And he made clear that the move was not intended to set a precedent.
Further, as a sheer practical matter, the distributed global nature of the internet means no group can be kept offline entirely. All manner of extremist groups have online operations – and despite repeated efforts to remove them, they are still able to recruit people to far-right groups and the jihadist movement. Even the Daily Stormer itself has managed to remain online after being booted from the mainstream internet, in part by moving to the dark web.
Efforts to knock extremists offline may also have counterproductive results, helping the targeted groups recruit and radicalize new members. The fact that their websites have been taken down can become a point of pride for those who are blocked or removed. For instance, Twitter users affiliated with IS who were blocked or banned at one point are often able to return under new accounts and use their experience as a demonstration of their commitment.
When a particular site is under fire, people who hold similar beliefs may be drawn to support the group, finding themselves motivated by a perceived opportunity to express views that are opposed by socially powerful companies or organizations. In fact, researchers have found that some extremist groups actively seek out harsh penalties from criminal justice agencies and governments, in an effort to exploit perceived overreactions for a public relations advantage that also aids their recruitment efforts.
Relations between tech companies and police
Internet companies' decisions about online expression also affect the difficult relationship between the technology industry and law enforcement. There are, for example, established working relationships between web hosting providers and police investigating child pornography or other crimes. But policies and practices vary widely and can depend on the circumstances of the crime or the nature of the police request.
For example, Apple resisted an FBI demand to help retrieve information from an iPhone used by a man who shot 14 people in San Bernardino, California, in 2015. The company said it wanted to avoid setting a precedent that could put its customers at risk of intrusive or unfair investigations. And Apple has since strengthened encryption for data stored on its devices.
All of this suggests the tech industry, law enforcement and policymakers must develop a more measured and coordinated approach to the removal of extremist and terrorist content online. Tech companies may intend to create a safer and more inclusive environment for users – but they may actually encourage radicalization, while simultaneously setting precedents for removing content in the face of public outcry, regardless of legal or moral obligations.
To date, these concerns have arisen suddenly and briefly only in the wake of specific events, like 9/11 or Charlottesville. And while opponents may shut down one or more hate sites, the site will likely pop back up elsewhere, maybe even stronger. The only way to really eliminate this kind of online content is to address the ideas behind it, not merely its presence on the web.