European regulators and tech giants Google, Microsoft, Twitter, and Facebook have struck an agreement to combat terrorism via a code of conduct that calls for the prompt removal of hate speech, the European Commission announced Tuesday.
Under the code of conduct, the tech companies agreed to review valid notifications of hate speech within 24 hours and, if necessary, remove the content or disable access to it. The speedy response is designed to prevent the material from going viral.
"The recent terror attacks have reminded us of the urgent need to address illegal online hate speech. Social media is unfortunately one of the tools that terrorist groups use to radicalise young people and racists use to spread violence and hatred. This agreement is an important step forward to ensure that the internet remains a place of free and democratic expression, where European values and laws are respected," Vĕra Jourová, EU Commissioner for Justice, Consumers, and Gender Equality, said in a statement.
The tech companies aim not only to remove terrorist propaganda but also to promote "independent counter-narratives, new ideas and initiatives," or what some people may consider a form of independent propaganda.
However, the European Digital Rights (EDRi) organization and Access Now opposed the code of conduct initiative, issuing a joint statement expressing their concerns and their displeasure at not having a seat at the table when the code of conduct was being formalized.
"Faced with this lamentable outcome, and with no possibility to provide meaningful input into this process," the Commission has left us with no other choice but to withdraw from the discussion," Estelle Massé, EU policy analyst at Access Now, was quoted as saying in the statement.
Twitter held that the line between freedom of speech and hate speech is clear. "We remain committed to letting the Tweets flow," Karen White, Twitter's head of public policy for Europe, was quoted as saying in the Commission's statement. "However, there is a clear distinction between freedom of expression and conduct that incites violence and hate."
The agreement also calls for the tech giants to continue developing staff training and internal procedures to ensure that the majority of valid notifications of hate speech are reviewed. The companies will also strengthen their partnerships with organizations that alert them to terrorist or hate-related content posted on their respective social media or content sites.
All of the companies already have rules in place, such as terms of service or community guidelines, that prohibit the "incitement to violence and hateful conduct," according to the EU announcement.
John Frank, vice president for EU Government Affairs at Microsoft, noted in a statement that his company recently announced "additional steps to specifically prohibit the posting of terrorist content."
Users of the tech companies' services, organizations that monitor terrorists, and law enforcement agencies in the member states that notice violations of the rules or community guidelines must use the relevant company's online notification system to report the suspected violation. The company will then review the flagged content against its rules, its community guidelines, and, if applicable, any national laws falling under the EU's Council Framework Decision.
Under the Council Framework Decision, hate speech is defined as:
- public incitement to violence or hatred directed against a group of persons or a member of such a group defined on the basis of race, colour, descent, religion or belief, or national or ethnic origin;
- the above-mentioned offence when carried out by the public dissemination or distribution of tracts, pictures or other material;
- publicly condoning, denying or grossly trivialising crimes of genocide, crimes against humanity and war crimes as defined in the Statute of the International Criminal Court (Articles 6, 7 and 8) and crimes defined in Article 6 of the Charter of the International Military Tribunal, when the conduct is carried out in a manner likely to incite violence or hatred against such a group or a member of such a group.
The tech giants will also work with law enforcement agencies in the member states, helping them learn to recognize hate speech online and to notify the companies properly so that the content can be removed or disabled.
Germany already had agreements with Facebook, Twitter, and Google to remove hate speech from their sites within 24 hours of notification, according to a report in The Verge; the European Commission code of conduct broadens this approach to the other member states.
Interestingly, the code of conduct announcement comes after a recent decision by Twitter to block US intelligence agencies from accessing Dataminr, a service that analyzes Twitter's data feeds to detect events such as political unrest and terrorist attacks in real time and sells those alerts to clients. Twitter said it took the step to protect users' privacy against inquiries from intelligence agencies.