
When the Intelligence Gets Artificial: Exploring the Complicated Honeymoon between Intelligence Agencies and AI

BY MARIANO VARESANO

16/12/2023

In a world where Artificial Intelligence (AI) seems to be taking hold in every field of human life, the work of intelligence agencies could not remain an exception for long. However, the relationship between AI and Intel agencies, while full of potential, is also replete with problems and obstacles. This article aims to shed light on these complexities.

When looking at the work of Intel agencies over the last three decades, it is evident that the nature of this activity has changed drastically. One of the most evident changes has to do with the evolution of technology and its implications, as synthesized by Amy Zegart, an expert in this field, with the expression “the Five Mores”. The first “more” is “more threats”: with the reduced importance of geography brought about by cyberspace and its relatively low entry barriers, more nefarious actors can now reach more distant and higher-stakes targets. Then there is “more data”, and significantly so: the amount of data on Earth is doubling every 24 months. The so-called “data smog” is indeed one of the main issues for Intel agencies, whose work is to seek and select useful data in a steadily growing ocean of information. This massive amount of data moves at ever higher speed, requiring increasingly rapid decision-making: this is the third “more”. The last two “mores” concern the increased number of actors, both among the decision-makers who need intelligence (now extended to big companies, especially in the tech sector) and among the competitors in the information-gathering process (e.g. in the field of Open Source Intelligence, OSInt in short). This new information environment eventually requires new instruments to deal with it. This is where AI tools might assist Intel agencies, especially with the more repetitive tasks: “The less time that human analysts spend counting trucks on a bridge, the more time they will have to figure out what those trucks are doing and why”, Zegart adds.


To be sure, the inclusion of automated tasks in the work of intelligence does not come without problems and ethical risks: for example, machines carry the same biases as the datasets they are trained on, leading to potential discrimination against less represented groups of people. The Edward Snowden case, moreover, should be a reminder of the latent risk of mass surveillance by governments through technological means. Despite the relevance of this issue, it is not the focus of the current article. I will instead focus on the various limitations that make the integration of AI tools into intelligence work more complicated. My main reference will be the U.S. intelligence community, not because it is the only place where these problems arise (the GCHQ in the UK and the Shin Bet in Israel have already included AI-driven tasks in their Intel activities, just to mention two other examples), but because the U.S. still maintains a technological edge over the rest of the world, making it an example that might potentially be extended to other cases in the future, at least in the Western world. The limitations I will explore are of four different kinds: ideological, legal, cultural, and physical.


The ideological barrier is the most delicate and complicated. It starts from one crucial assumption: as Moran, Burton, and Christou note in their recent work, the “U.S. intelligence community […] cannot develop AI projects strictly in-house, in the same way they may develop say a listening device or a secret camera. It simply does not possess the expertise.” This makes the inclusion of experts from civil society necessary, and here comes the ideological problem: oftentimes, top AI programmers and developers are embedded in an anti-government and techno-libertarian set of ideas, which jeopardizes the secrecy of information required by a collaboration with an Intel agency. There are precedents: the CIA hacker Joshua Schulte stands accused of stealing 34 terabytes of sensitive data and passing them to WikiLeaks after resigning from his job in 2016 over concerns about the U.S. government’s use of new technologies. To quote again the paper already mentioned, “Intelligence leaders find themselves in a catch-22 situation. If they do not employ these people on security grounds, then their AI projects will stall. But, by employing them, they risk […] leakers and whistleblowers”.


The legal barriers between AI and governments are specific to the Western world and consist of the new legislation surrounding the use of citizens’ data by the State and by private companies. The two most prominent examples are the AI Act in the European Union and the AI Bill of Rights in the U.S. These regulatory attempts generally aim to restrict access to data by third actors, with different levels of sensitivity and risk. While regulations on new technology are clearly needed and welcome, from a purely strategic point of view they represent a significant constraint on information-gathering by Intel agencies in comparison with the free exploitation of personal data carried out by non-democratic States, notably China.


The cultural limits have to do with a counter-intuitive resistance within the Intel community itself to the adoption of new technology. As explained, again, by Amy Zegart, “many of these problems stem from an [intelligence community] culture that is resistant to change, reliant on traditional tradecraft and […] averse to […] acquiring and adopting new technologies”.


The last, and perhaps least considered, limit is data storage. As Dawn Meyerriecks, a top CIA technologist, puts it, “We can’t just keep data forever and ever, kind of filling up our servers, right?”. Just to give a number, in 2017 an assault on a single Al Qaeda safe house in Afghanistan produced 40 terabytes of data. And the quantity of data keeps increasing: in the next two years, more data will be generated than in all of prior human history.


The inclusion of AI tools in the activities of Intel agencies is not a hypothetical prospect: it is already happening. The challenge is twofold: avoiding the dystopian risks of automated discrimination and mass surveillance, while at the same time overcoming the ideological, legal, cultural, and physical barriers that make the integration hard. While a solution to these problems is beyond the scope of this article, one observation is crucial: both challenges can only be dealt with if Intel agencies keep human reasoning, judgment, and responsibility at the center of their activity. The ideal scenario is one in which AI becomes one of the many tools humans have at their disposal to make more accurate decisions: human discretion is not to be replaced by machines in any field, anytime soon.
