What follows is the text of my presentation for a roundtable discussion on the use of assemblage thinking for International Relations at ISA in early April.
In this short presentation I want to demonstrate some of the qualities assemblage thinking brings with it, and I’ll attempt to do so by showing how it can develop the notion of epistemic communities. First, and most importantly, what I will call ‘cognitive assemblages’ builds on epistemic communities by emphasising the material means to produce, record, and distribute knowledge. I’ll focus on this aspect and try to show what it means for understanding knowledge production in world politics. Second, since this is a roundtable, I’ll raise some open questions that I think assemblage thinking highlights about the nature of agency. Third and finally, I want to raise another open question about how to develop assemblage theory, and ask whether it remains parasitic on other discourses.
Throughout this, I’ll follow recent work on the concept and take ‘epistemic communities’ to mean more than simply a group of scientists. Instead the term invokes any group that seeks to construct and transmit knowledge, and to influence politics (though not necessarily policy) via its expertise. The value of this move is that it recognises the necessity of constructing knowledge in all areas of international politics – the process of producing knowledge isn’t limited to highly technical areas, but is instead utterly ubiquitous.
1 / Materiality
Constructivism has, of course, emphasised this more general process as well, highlighting the ways in which identities, norms, interests, and knowledge are a matter of psychological ideas and social forces. In Emanuel Adler’s exemplary words, knowledge for IR “means not only information that people carry in their heads, but also, and primarily, the intersubjective background or context of expectations, dispositions, and language that gives meaning to material reality”. Knowledge here is both mental, inside the head, and social, distributed via communication. The problem with this formulation is that decades of research in science and technology studies, and in cognitive science, have shown it to be a partial view of the nature of knowledge. Instead, knowledge is composed of a heterogeneous set of materials, only a small portion of which are in fact identifiably ‘social’ or ‘in our heads’. It’s precisely this heterogeneity – and more specifically, the materiality of knowledge – that assemblage thinking focuses our attention on.
Knowledge is inseparable from measuring instruments, from data collection tools, from computer models and physical models, from archives, from databases, and from all the material means we use to communicate research findings. In a rather persuasive article, Bruno Latour argues that what separates pre-scientific minds from scientific minds has nothing to do with a change inside our heads. There was no sudden advance in brainpower that made 17th century humans more scientific than 15th century humans, and as philosophy of science has shown, there’s no clear scientific method that we simply started to follow. Instead, Latour argues, the shift was in the production and circulation of various new technologies which enabled our rather limited cognitive abilities to become more regimented and to see at a glance a much wider array of facts and theories. The printing press is the most obvious example here, but the production of rationalised geometrical perspectives and new means of circulating knowledge also contributed to the processes of standardisation, comparison, and categorisation that are essential to the scientific project. Therefore, what changed between the pre-scientific and the scientific was the materiality of knowledge, not our minds. And it’s assemblage thinking which focuses our attention on this aspect, emphasising that any social formation is always a collection of material and immaterial elements.
In this sense, questions about the divide between the material and the ideational can be recognised as false problems. The ideational is always material, and the constructivist is also a materialist.
2 / Economics and Climate Science
So what does this sharper focus on the materiality of knowledge get us?
I won’t go into generalities, but let me briefly outline two recent examples – one from economics and one from climate science – where I believe thinking in terms of cognitive assemblages can assist in explaining events.
The first case has to do with the transformation in the 1970s of UK macroeconometric modelling from a Keynesian framework to a monetarist framework. Peter Kenway’s research shows that in the 1960s and early 1970s, the UK economic modelling scene was dominated by a particular Keynesian model which formed a paradigm for both research and government policy. With the crisis of stagflation in the 1970s though, the levers of government control over the economy weakened. The problem here was that the government response was to some degree hamstrung by the computer models they used to forecast the economy and test out policy options. It wasn’t until the late 1970s that a properly monetarist model was developed and capable of being put into use. As Kenway’s narrative shows, the innovations of this model were then quickly adopted by government largely because it included new variables that were modifiable by policy.
The significant part here is that while individual economists were generating answers to the question of why stagflation was happening, and what could be done about it, it wasn’t until these theories were implemented in computer models that the UK government could see and appraise the effects of monetarist policy proposals. Until then, the UK government remained largely bound to Keynesian mechanisms of government intervention, despite the failures of Keynesianism at the time. An explanation of the shift in government policy that focused only on the epistemic communities promoting monetarism would be incapable of fully explaining the timing of the policy shift, and the delays in that shift despite the problems of stagflation.
The second example I want to briefly outline is of climate modelling. Since the earth’s climate system is far too complex for any mind – or even a collection of minds – to think about, all of our knowledge about it comes from computer modelling. Consequently, our knowledge of the effects of policy decisions is held in machines as well.
In the past two decades, one of the dominant trends in climate modelling has been a shift from the global to the local – increasingly modelling finer resolutions, and increasingly integrating elements of the geophysical system that are relevant to local areas – things like rivers, soil, and biological species. The consequence of this technological development in computing power is that local and long-term adaptation policies become viable. If one wants to know how to adapt rather than mitigate climate change, one needs to have an image of how climate change will affect the relevant area – and these images all come from computer models.
So while one can find statements from epistemic communities about the value of adaptation policies as early as the 1970s, it’s only in the past decade that the UK government has been able to seriously start making preparations for local and long-term adaptation. As with macroeconometric modelling, a focus on the materiality of knowledge helps in explaining the timing and shape of various policies.
From these two brief examples, I think we can draw out at least some initial conclusions. In the first case, while individuals continue to develop their fields, the technology employed by these cognitive assemblages has a momentum and stability to it that a purely social analysis of epistemic communities misses. Keynesian computer models continued on during a crisis of Keynesianism; and today we arguably see neoliberal computer models continuing on during a crisis of neoliberalism. The material aspect of knowledge here invokes a certain path dependency that limits options.
In the second example, we see technology producing new political options rather than restricting them. The rise of seemingly viable adaptation policies stems not just from the desire for these policies, but also from technology making these policies possible in the first place.
In both cases, what is significant is not only the representational aspect of the models – whether they are true or not. Just as important are the affordances they offer to various political actors. New monetarist models proposed a way for the UK government to intervene in the economy and stop stagflation. New regional climate models provide the basis for intervening in the Earth system and adapting to climate change. The materiality of cognitive assemblages is significant for what it makes possible.
3 / Questions
From this point, I want to conclude by raising a couple of questions that I think assemblage theory opens up and highlights as critical.
The first question has to do with agency. While this is somewhat lost in the English translation, in Gilles Deleuze and Félix Guattari’s original French work, the term ‘assemblage’ carries a strong connotation of agency as well. Their point – and I think they’re correct here – is that what is acting in any given situation is the entire assemblage. Agency becomes distributed in a complex way. This point is particularly significant as materialised cognition becomes increasingly ubiquitous. To give just one example, what does it mean when a surveillance algorithm mistakenly targets an innocent individual? Who is responsible? The individuals who carry out the arrest? The institution? The programmers of the algorithm? The company which sold the software? On a causal level, agency has to be attributed to the entire assemblage here – yet for political and ethical reasons this remains unsatisfying.
So a first open question that assemblage theory raises is this: how must our notions of agency and responsibility be transformed in order to take this reality into account?
Lastly, I want to raise a second open question – having to do with what it means to study assemblages.
In Deleuze’s original formulation, he insists on the singular nature of assemblages. To speak of a general concept of assemblages is already to alter this original argument, which stemmed from a critique of representational thought. If all assemblages are singular, then the question must be raised of how to draw out generalities from them. How to represent what Deleuze believes to be non-representable? The risk, on the one hand, is that one attempts to fully respect the singular nature of each assemblage. Here it seems to me that one falls into a sort of Latourian methodology which believes that pure description is both possible and desirable. On the other hand, there’s also the risk that in the attempt to produce a general concept of assemblages, one empties the idea of assemblage out so much that it becomes epistemically derivative. Here one runs into empty claims about respecting becoming over being, the heterogeneous nature of every assemblage, and the ethical imperative to deterritorialise. While these points are arguably valid, the problem that arises is that assemblage thinking risks becoming a mere redescription of already well-defined phenomena. It becomes parasitic on other discourses – a problem which I think Manuel DeLanda’s work sometimes falls into.
So the final question here is how to study assemblages. How to chart a path between singular narratives and empty generalities, and demonstrate the added explanatory value of this concept?
Mai’a K. Davis Cross, “Rethinking Epistemic Communities Twenty Years Later,” Review of International Studies 39, no. 1 (2013): 137–160.
Emanuel Adler, “Communities of Practice in International Relations,” in Communitarian International Relations: The Epistemic Foundations of International Relations (London: Routledge, 2005), 4.
Bruno Latour, “Visualization and Cognition: Drawing Things Together,” in Knowledge and Society: Studies in the Sociology of Culture Past and Present, ed. H. Kuklick (Greenwich, CT: Jai Press, 1986), 1–32, http://www.bruno-latour.fr/sites/default/files/21-drawing-things-together-gb.pdf.
Peter Kenway, From Keynesianism to Monetarism: The Evolution of UK Macroeconometric Models (London: Routledge, 1994).
Ibid., 39.