The fog of war: Revising the existing framework surrounding AI & counterterrorism policy in the E.U.
According to a 2014 report by the Office of the United Nations High Commissioner for Human Rights (OHCHR), mass digital surveillance programs leveraging machine learning capabilities have proliferated globally. Through these programs, governments are rapidly amassing data on citizens' locations, movements, purchases, and social media activity. As a result, there is growing concern that the capabilities of artificial intelligence (AI) can be used to undermine individual privacy and basic freedoms under the guise of counterterrorism and security policy. This is troubling given that leading AI experts have warned governments about the vulnerabilities of machine learning, which include exhibiting and magnifying intrinsic human biases, susceptibility to adversarial manipulation, and data limitations. Without adequate safeguards, this technology can infringe on basic human rights such as the right to privacy and the freedoms of expression and association.
Creative Destruction: Opportunities presented by AI
Predictive AI is increasingly used by governments to mitigate human bias in decision making and to minimize intrusion into the lives of citizens. As terrorist organizations increasingly digitize their communications, AI can be used to identify red flags in these interactions, such as members' degree of radicalization and patterns of terrorist movement. Intelligence agencies and security services in Europe are already leveraging AI to realize the predictive value of data: law enforcement agencies in Germany, for example, use it to build predictive tools such as social network analyses of urban gangs and citywide alert systems. As access to cheap computing power drives down the cost of developing AI, technology companies are quickly building software that allows intelligence and security services to inspect communications metadata and probe Internet connection records. Sophisticated AI strategies can therefore save E.U. members significant time and money by predicting behaviour across a range of areas, a far better alternative to crude profiling.
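To make the social network analysis mentioned above concrete, the sketch below computes a basic centrality score over a toy contact network. The edge list and node names are invented for illustration; real analyses use far richer data and more sophisticated measures, but the principle of surfacing the most connected individuals is the same.

```python
from collections import defaultdict

# Invented edge list standing in for observed contacts between individuals.
edges = [
    ("a", "b"), ("a", "c"), ("a", "d"),
    ("b", "c"), ("d", "e"),
]

def degree_centrality(edges):
    """Degree centrality: a node's connection count, normalized by n - 1."""
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    n = len(degree)
    return {node: d / (n - 1) for node, d in degree.items()}

central = degree_centrality(edges)
# "a" has the most contacts, so it receives the highest centrality score.
print(max(central, key=central.get))  # a
```

Even this minimal measure illustrates why such tools attract regulatory attention: a single pass over relational data singles out specific individuals for scrutiny.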
Growing Stagnancy: The existing European regulatory landscape
The E.U.'s General Data Protection Regulation (GDPR), which came into force in May 2018, is ineffective in regulating the use of AI in counterterrorism efforts. The GDPR focused primarily on protecting consumer data by harmonizing the regulatory environment for international businesses in Europe. It required businesses to adopt data protection policies, conduct data protection impact assessments, and meet data processing standards.
However, the GDPR failed to hold governments equally accountable, as it provided statutory exemptions for the data collection practices of government agencies. Nor does the GDPR framework provide an adequate form of redress to citizens who have been unfairly targeted by these technologies. This gap is also reflected in the patchwork of national legislation. For example, the Investigatory Powers Tribunal (IPT), the body responsible for providing redress over the actions of government agencies in the U.K., has been criticized as limited in scope and inaccessible to the general public. In this way, the IPT epitomizes the current AI and counterterrorism landscape: it trades transparency for operational security, much to the detriment of citizens' liberty. Civil society groups consequently argue that widespread digital surveillance could have a "chilling effect" on public engagement in sensitive political issues, civic activities, and the expression of dissent.
Charting a Course Forward: Building a robust and resilient regulatory framework
Given the fragmented nature of the existing landscape, the E.U. needs to develop a more effective framework surrounding AI and counterterrorism policy. A major incentive for European policymakers is that the well-regulated use of new AI capabilities can both enhance states' abilities to protect citizens' rights and freedoms, such as freedom from unfair discrimination, and engender transparency and accountability. If deployed ethically, AI can be a powerful tool in combatting terrorism: it can analyze higher volumes of data and generate more accurate predictions, helping countries allocate their limited counterterrorism resources more efficiently.
A rigorous framework surrounding AI and counterterrorism policy should rest on three pillars: centralizing analysis capabilities under one regulatory regime, creating technical safeguards, and defining quantitative measures. Any proposed framework in the E.U. should require sovereign states to centralize their intelligence analysis capabilities and source data, gathered by government agencies, under the same regulatory regime. This centralization would make these activities easier to regulate and could improve the agencies' ability to mobilize resources against threats. Similarly, rigorous technical safeguards, such as audit and access records, can prevent abuse of power by ensuring that only relevant and specific data is shared between intelligence and law enforcement agencies.
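One way to implement the audit and access records described above is a tamper-evident log, in which each access entry is hash-chained to its predecessor so that retroactive edits become detectable. The sketch below is a minimal illustration; the field names, agency identifiers, and chaining scheme are assumptions for the example, not a prescribed standard.

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only access log; each entry is hash-chained to the previous one."""

    def __init__(self):
        self.entries = []

    def record(self, agency, dataset, purpose, timestamp=None):
        # Link this entry to the hash of the previous entry (or a zero seed).
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "agency": agency,
            "dataset": dataset,
            "purpose": purpose,
            "timestamp": timestamp if timestamp is not None else time.time(),
            "prev_hash": prev_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self.entries.append(entry)

    def verify(self):
        """Recompute the chain; returns False if any entry was altered."""
        prev_hash = "0" * 64
        for entry in self.entries:
            if entry["prev_hash"] != prev_hash:
                return False
            payload = {k: v for k, v in entry.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(payload, sort_keys=True).encode()
            ).hexdigest()
            if digest != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

log = AuditLog()
log.record("police_unit_7", "travel_records", "specific-threat inquiry")
log.record("intel_service", "comms_metadata", "court-authorized warrant")
print(log.verify())  # True for an unmodified log
```

The design choice matters for the policy argument: because tampering with any entry breaks the chain, such a log supports external oversight without exposing operational details beyond who accessed which dataset and why.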
Governments should also release quantitative measures of the performance of AI-based models in order to build public trust. These measures can address concerns about the discriminatory behaviour of machine learning tools. In the U.K., for example, individuals were advised that they would be subject to in-depth "stop and search" procedures based on a model that integrates information such as suspicious travel patterns and unusual payment methods. Travellers reported feeling more comfortable with the screening procedure once these criteria were disclosed, and the disclosure also helped alleviate concerns over racial and ethnic profiling. Ultimately, the goal of the program is to reduce the total number of people subjected to searches at London's Heathrow Airport.
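The quantitative measures discussed above could include, at minimum, per-group selection rates and false-positive rates for a screening model. The sketch below computes both from a handful of invented screening records; the data and group labels are purely illustrative.

```python
# Hypothetical screening records: (group, flagged_by_model, actual_threat).
# All data below is invented for illustration, not real screening outcomes.
records = [
    ("A", True, False), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", False, False), ("B", False, False),
    ("A", False, False), ("B", True, False), ("A", False, False),
]

def per_group_rates(records):
    """Selection rate and false-positive rate for each demographic group."""
    stats = {}
    for group, flagged, threat in records:
        s = stats.setdefault(group, {"n": 0, "flagged": 0, "fp": 0, "neg": 0})
        s["n"] += 1
        s["flagged"] += int(flagged)
        if not threat:
            s["neg"] += 1           # person posed no actual threat
            s["fp"] += int(flagged) # ...but was flagged anyway
    return {
        g: {
            "selection_rate": s["flagged"] / s["n"],
            "false_positive_rate": s["fp"] / s["neg"] if s["neg"] else 0.0,
        }
        for g, s in stats.items()
    }

for group, rates in sorted(per_group_rates(records).items()):
    print(group, rates)
```

A persistent gap in false-positive rates between groups is exactly the kind of disclosure that would let the public, and oversight bodies, scrutinize a screening model without seeing its internals.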
Dealing with Uncertainty
The deployment of AI for counterterrorism measures poses a wide array of ethical, legal, and regulatory challenges. The existing framework is inherently flawed, as it fails to curtail the ability of countries to engage in the indiscriminate targeting of citizens. However, these challenges should not dissuade efforts by the E.U. to harmonize the overall standards governing machine learning and encourage the development of effective counterterrorism tools.
An effective regulatory framework for the deployment of AI in counterterrorism measures must be rooted in transparency, accountability, and proportionality. The E.U. can seek inspiration from innovative companies such as Jigsaw, which developed machine learning software that disrupts online radicalization and propaganda by redirecting users of video-sharing sites who are highly susceptible to terrorist propaganda toward videos espousing a credible counter-narrative. The E.U. should promote the development of a broad set of technological tools and a rigorous regulatory framework, with input from civil society groups and the private sector, to ensure that AI is deployed in a manner consistent with these principles.