Source: Donna Anderson

The EDRi (European Digital Rights) website posted a leaked document from the Clean IT project that shows the European group is veering a bit off course from its original aim of establishing voluntary self-regulatory measures to protect the Internet from terrorists. Instead of identifying specific problems to be solved, the Clean IT project has become “little more than a protection racket.”

The Clean IT project is funded by the Prevention of and Fight against Crime Programme of the European Commission, and supported by Germany, Spain, the United Kingdom, the Netherlands and Belgium. Based on the belief that partnerships between public and private organizations can be more effective than government involvement, the project's main objective is “to develop a non-legislative ‘framework’ that consists of general principles and best practices.”

EDRi states that the initial meetings of the project members were directionless and ill-informed discussions about doing “something” to solve unidentified online “terrorist” problems and that they were mainly attended by filtering companies who saw them as a business opportunity. In the end, says EDRi, “Their work has paid off, with numerous proposals for filtering by companies and governments, proposals for liability in case sufficiently intrusive filtering is not used, and calls for increased funding by governments of new filtering technologies.”

In other words, we can’t specifically define the problem, which makes it even harder to come up with a solution. These guys over here say filters will take care of it, so let’s just filter the entire Internet and call it a day.

In an April 2012 letter from the Clean IT Project Manager to the Bits of Freedom blog, the coordinator reiterates that the goal of the project is to first identify problems and then enter into an open discussion with the private and public sectors and cooperate to come up with solutions. “This project will only present solutions when there is consensus between public and private parties about both the problem and the solution.”

The group proposes that Internet companies use stricter terms of service agreements to ban unwelcome activity, but advises that these “should not be very detailed”. It cites the Microsoft Code of Conduct as an example, which includes the line, “You will not upload, post, transmit, transfer, distribute or facilitate distribution of any content which depicts nudity of any sort including full or partial human nudity or nudity in non-human forms such as cartoons, fantasy art or manga.”

Under that agreement a picture of Donald Duck wouldn’t be allowed because the poor little guy is never wearing any pants. But who’s going to ban a picture of Donald Duck? The statement is just ambiguous enough to allow the Powers That Be the option of censoring whenever they feel like censoring.

In other words, says EDRi, “If Donald Duck is displeasing to the police, they would welcome, but don’t explicitly demand, ISPs banning his behavior in their terms of service.” And, as you’ll see below, one of the recommendations in the Clean IT initiative states, “Governments should use the helpfulness of ISPs as a criterion for awarding public contracts.”

The Clean IT Project calls for binding agreements from Internet companies to carry out surveillance, to block and to filter. It also wants to create a network of trusted online informants, and it is even calling for stricter legislation from member states, even though the project's original intention was to cooperate on a public and private level and keep the government out of it.

EDRi says the document was distributed to participants on a “need to know” basis and they’re sharing it because they believe citizens need to know what’s being proposed. The key measures include:

  • Removal of any legislation preventing filtering/surveillance of employees’ Internet connections
  • Law enforcement authorities should be able to have content removed “without following the more labor-intensive and formal procedures for ‘notice and action’”
  • “Knowingly” providing links to “terrorist content” (the draft does not refer to content which has been ruled to be illegal by a court, but undefined “terrorist content” in general) will be an offense “just like” the terrorist
  • Legal underpinning of “real name” rules to prevent anonymous use of online services
  • ISPs to be held liable for not making “reasonable” efforts to use technological surveillance to identify (undefined) “terrorist” use of the Internet
  • Companies providing end-user filtering systems and their customers should be liable for failing to report “illegal” activity identified by the filter
  • Customers should also be held liable for “knowingly” sending a report of content which is not illegal
  • Governments should use the helpfulness of ISPs as a criterion for awarding public contracts
  • Blocking or “warning” systems should be implemented by social media platforms – somehow it will be both illegal to provide (undefined) “Internet services” to “terrorist persons” and legal to knowingly provide access to illegal content, while “warning” the end-user that they are accessing illegal content
  • The anonymity of individuals reporting (possibly) illegal content must be preserved… yet their IP address must be logged to permit them to be prosecuted if it is suspected that they are reporting legal content deliberately and to permit reliable informants’ reports to be processed more quickly
  • Companies should implement upload filters to monitor uploaded content to make sure that content that is removed – or content that is similar to what is removed – is not re-uploaded
  • Flagging/report button systems must be implemented
  • Users must be provided a way to flag/report terrorism and radicalizing content
  • Providers of chat boxes, e-mail services, messaging systems, social networks, retailing sites, voice over Internet protocol and web forums must have flagging systems
  • Hosted websites must have an easily visible abuse reporting email address or contact form

In a separate section of the Clean IT initiative, titled “Government Policies,” there are several points for discussion, including:

  1. Governments must have intelligence agencies monitor terrorist use of the Internet, but only monitor specific threats, not primarily the population as a whole and all Internet use;
  2. Governments must have clear policies on intelligence gathering and when to take action against terrorist or radicalizing content on the Internet;
  3. Governments must have specialized police officers ‘patrol’ on social media;

EDRi says, “Unsurprisingly, in discussions with both law enforcement agencies and industry about Clean IT, the word that appears with most frequency is ‘incompetence’,” but they don’t say what’s being referred to. The incompetence of the Clean IT Project members at identifying real problems and solutions? Or is the Clean IT group assuming it needs to step in with filters and surveillance equipment because Internet businesses are too incompetent to take care of their own security?
