What We Talked About At ISA: Cognitive Assemblages


What follows is the text of my presentation for a roundtable discussion on the use of assemblage thinking for International Relations at ISA in early April.


In this short presentation I want to demonstrate some of the qualities assemblage thinking brings with it, and I’ll attempt to do so by showing how it can develop the notion of epistemic communities. First, and most importantly, what I will call ‘cognitive assemblages’ builds on epistemic communities by emphasising the material means to produce, record, and distribute knowledge. I’ll focus on this aspect and try to show what it means for understanding knowledge production in world politics. Second, since this is a roundtable, I’ll raise some open questions that I think assemblage thinking highlights about the nature of agency. Third and finally, I want to raise another open question about how to develop assemblage theory, and ask whether it remains parasitic on other discourses.

Throughout this, I’ll follow recent work on the concept and take ‘epistemic communities’ to mean more than simply a group of scientists.[1] Instead the term covers any group that seeks to construct and transmit knowledge, and to influence politics (though not necessarily policy) via its expertise. The value of this move is that it recognises the necessity of constructing knowledge in all areas of international politics – the process of producing knowledge isn’t limited to highly technical areas, but is utterly ubiquitous.

1 / Materiality

Constructivism has, of course, emphasised this more general process as well, highlighting the ways in which identities, norms, interests, and knowledge are a matter of psychological ideas and social forces. In Emanuel Adler’s exemplary words, knowledge for IR “means not only information that people carry in their heads, but also, and primarily, the intersubjective background or context of expectations, dispositions, and language that gives meaning to material reality”.[2] Knowledge here is both mental, inside the head, and social, distributed via communication. The problem with this formulation is that decades of research in science and technology studies, and in cognitive science, have shown it to be a partial view of the nature of knowledge. Instead, knowledge is composed of a heterogeneous set of materials, only a small portion of which are in fact identifiably ‘social’ or ‘in our heads’. It’s precisely this heterogeneity – and more specifically, the materiality of knowledge – that assemblage thinking focuses our attention on.

Knowledge is inseparable from measuring instruments, from data collection tools, from computer models and physical models, from archives, from databases, and from all the material means we use to communicate research findings. In a rather persuasive article, Bruno Latour argues that what separates pre-scientific minds from scientific minds has nothing to do with a change inside our heads.[3] There was no sudden advance in brainpower that made 17th century humans more scientific than 15th century humans, and as philosophy of science has shown, there’s no clear scientific method that we simply started to follow. Instead, Latour argues, the shift lay in the production and circulation of various new technologies which enabled our rather limited cognitive abilities to become more regimented and to see at a glance a much wider array of facts and theories. The printing press is the most obvious example here, but the production of rationalised geometrical perspectives and new means of circulating knowledge also contributed to the processes of standardisation, comparison, and categorisation that are essential to the scientific project. What changed between the pre-scientific and the scientific, therefore, was the materiality of knowledge, not our minds. And it’s assemblage thinking that focuses our attention on this aspect, emphasising that any social formation is always a collection of material and immaterial elements.

In this sense, questions about the divide between the material and the ideational can be recognised as false problems. The ideational is always material, and the constructivist is also a materialist.


2 / Economics and Climate Science

So what does this sharper focus on the materiality of knowledge get us?

I won’t go into generalities, but let me briefly outline two recent examples – one from economics and one from climate science – where I believe thinking in terms of cognitive assemblages can assist in explaining events.

The first case has to do with the transformation of UK macroeconometric modelling in the 1970s from a Keynesian framework to a monetarist framework.[4] Peter Kenway’s research shows that in the 1960s and early 1970s, the UK economic modelling scene was dominated by a particular Keynesian model which formed a paradigm for both research and government policy. With the crisis of stagflation in the 1970s, though, the levers of government control over the economy weakened. The problem was that the government response was to some degree hamstrung by the computer models it used to forecast the economy and test out policy options. It wasn’t until the late 1970s that a properly monetarist model was developed and put into use. As Kenway’s narrative shows, the innovations of this model were then quickly adopted by government, largely because it included new variables that were modifiable by policy.[5]

The significant part here is that while individual economists were generating answers to the question of why stagflation was happening and what could be done about it, it wasn’t until these theories were implemented in computer models that the UK government could see and appraise the effects of monetarist policy proposals. Until then, the UK government remained largely bound to Keynesian mechanisms of intervention, despite the failures of Keynesianism at the time. An explanation of the shift in government policy that focused only on the epistemic communities promoting monetarism would be incapable of fully explaining the timing of the policy shift, or its delay despite the problems of stagflation.
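To make the structural point concrete, here is a deliberately toy sketch in Python – entirely my own illustration, not Kenway’s models; the functions and coefficients are invented. The point is simply that a government can only simulate, and therefore appraise, interventions that its model exposes as adjustable variables.

# A toy illustration -- not Kenway's actual models; the equations and
# coefficients here are invented. Two stylised one-step forecasters
# that expose different policy levers.

def keynesian_step(output, gov_spending, multiplier=1.5):
    """Stylised Keynesian forecast: next-period output responds to
    fiscal policy; money growth does not appear as a lever at all."""
    return output + multiplier * gov_spending

def monetarist_step(expected_inflation, money_growth, beta=0.8):
    """Stylised monetarist forecast: inflation responds to money growth
    relative to expectations -- a variable policy can now act upon."""
    return expected_inflation + beta * (money_growth - expected_inflation)

# With only the Keynesian model to hand, 'tighten money growth' cannot
# even be simulated: the lever is absent from the material apparatus,
# whatever individual economists may be arguing in print.
print(keynesian_step(output=100.0, gov_spending=10.0))              # 115.0
print(monetarist_step(expected_inflation=0.12, money_growth=0.05))  # ~0.064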

The second example I want to briefly outline is climate modelling. Since the earth’s climate system is far too complex for any mind – or even a collection of minds – to reason through unaided, all of our knowledge about it comes from computer modelling. Consequently, our knowledge of the effects of policy decisions is held in machines as well.

In the past two decades, one of the dominant trends in climate modelling has been a shift from the global to the local – modelling at increasingly fine resolutions, and increasingly integrating elements of the geophysical system that are relevant to local areas: things like rivers, soil, and biological species. The consequence of this development in computing power is that local and long-term adaptation policies become viable. If one wants to know how to adapt to climate change rather than mitigate it, one needs an image of how climate change will affect the relevant area – and these images all come from computer models.
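Again as a purely illustrative sketch – invented numbers, not a real climate model – the following shows why resolution matters for adaptation: a coarse grid averages away exactly the local detail that adaptation planning needs.

# Toy sketch (invented numbers, not a real climate model): coarse vs
# fine resolution over a single region.
import numpy as np

rng = np.random.default_rng(0)

# Fine 'regional' field: projected warming (deg C) on a 16x16 grid,
# with local variation standing in for rivers, soil, and coastline.
fine = 2.0 + 0.8 * rng.standard_normal((16, 16))

# Coarse 'global' view of the same region: a single grid cell (the mean).
coarse = fine.mean()

# A planner asking 'how much will my river basin warm?' gets a usable
# answer only from the fine grid; the coarse cell returns one number
# for every locality in the region.
basin = fine[4:8, 4:8].mean()
print(f"coarse-cell projection: {coarse:.2f} C (identical everywhere)")
print(f"river-basin projection: {basin:.2f} C (locally specific)")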

So while one can find statements from epistemic communities about the value of adaptation policies as early as the 1970s, it’s only in the past decade that the UK government has been able to seriously start making preparations for local and long-term adaptation. As with macroeconometric modelling, a focus on the materiality of knowledge helps in explaining the timing and shape of various policies.

From these two brief examples, I think we can draw at least some initial conclusions. In the first case, while individuals continue to develop their fields, the technology employed by these cognitive assemblages has a momentum and stability to it that a purely social analysis of epistemic communities misses. Keynesian computer models carried on through a crisis of Keynesianism; and today we arguably see neoliberal computer models carrying on through a crisis of neoliberalism. The material aspect of knowledge here introduces a certain path dependency that limits options.

In the second example, we see technology producing new political options rather than restricting them. The rise of seemingly viable adaptation policies stems not just from the desire for these policies, but also from technology making them possible in the first place.

In both cases, what is significant is not only the representational aspect of the models – whether they are true or not. Just as important are the affordances they offer to various political actors. New monetarist models offered the UK government a way to intervene in the economy and stop stagflation. New regional climate models provide the basis for intervening in the Earth system and adapting to climate change. The materiality of cognitive assemblages is significant for what it makes possible.

3 / Questions

From this point, I want to conclude by raising a couple of questions that I think assemblage theory opens up and highlights as critical.

The first question has to do with agency. While this is somewhat lost in the English translation, in Gilles Deleuze and Félix Guattari’s original French the term translated as ‘assemblage’ – agencement – carries a strong connotation of agency as well. Their point – and I think they’re correct here – is that what is acting in any given situation is the entire assemblage. Agency becomes distributed in a complex way. This point is particularly significant as materialised cognition becomes increasingly ubiquitous. To give just one example, what does it mean when a surveillance algorithm mistakenly targets an innocent individual? Who is responsible? The individuals who carry out the arrest? The institution? The programmers of the algorithm? The company which sold the software? On a causal level, agency has to be attributed to the entire assemblage here – yet for political and ethical reasons this remains unsatisfying.

So a first open question that assemblage theory raises is how must our notions of agency and responsibility be transformed in order to take into account this reality?

Lastly, I want to raise a second open question, this one having to do with what it means to study assemblages.

In his original formulation, Deleuze insists on the singular nature of assemblages. To speak of a general concept of assemblages is already to alter this original argument, which stemmed from a critique of representational thought. If all assemblages are singular, then the question arises of how to draw generalities out of them. How to represent what Deleuze believes to be non-representable? The risk, on the one hand, is that one attempts to fully respect the singular nature of each assemblage. Here it seems to me that one falls into a sort of Latourian methodology which believes that pure description is both possible and desirable. On the other hand, there’s the risk that in the attempt to produce a general concept of assemblages, one empties the idea of the assemblage out so much that it becomes epistemically derivative. Here one runs into empty claims about respecting becoming over being, the heterogeneous nature of every assemblage, and the ethical imperative to deterritorialise. While these points are arguably valid, the problem is that assemblage thinking risks becoming a mere redescription of already well-defined phenomena. It becomes parasitic on other discourses – a problem which I think Manuel DeLanda’s work sometimes falls into.

So the final question here is how to study assemblages? How to chart a path between singular narratives and empty generalities, and demonstrate the added explanatory value of this concept?

References

[1] Mai’a K. Davis Cross, “Rethinking Epistemic Communities Twenty Years Later,” Review of International Studies 39, no. 1 (2013): 137–160.

[2] Emanuel Adler, “Communities of Practice in International Relations,” in Communitarian International Relations: The Epistemic Foundations of International Relations (London: Routledge, 2005), 4.

[3] Bruno Latour, “Visualization and Cognition: Drawing Things Together,” in Knowledge and Society: Studies in the Sociology of Culture Past and Present, ed. H. Kuklick (Greenwich, CT: JAI Press, 1986), 1–32, http://www.bruno-latour.fr/sites/default/files/21-drawing-things-together-gb.pdf.

[4] Peter Kenway, From Keynesianism to Monetarism: The Evolution of UK Macroeconometric Models (London: Routledge, 1994).

[5] Ibid., 39.

12 thoughts on “What We Talked About At ISA: Cognitive Assemblages”

  1. Query: what does the conceptual prosthetic “assemblage” get us that we don’t get from “discourse”? Or from a Norbert Elias-style “figuration”? Much of your discussion here and elsewhere also puts me in mind of Andrew Pickering’s arguments about the “mangle of practice”; is that also “assemblage thinking”?


    • For what it’s worth, I think Foucault’s original ‘discourse’ work is very much a type of assemblage thinking. Deleuze agrees too, and his little book on Foucault is all about how Foucault’s work is assemblage thinking. That being said, ‘discourse’ in contemporary IR (almost?) always refers solely to linguistic and semiotic analysis – which is important, but I think has to be kept distinct from how material forces function. Haven’t read Pickering’s book yet (it’s on my list!), but from what I can gather he sounds very similar.

      One possible problem though (one I have with Latour and a lot of ANT) is the reduction of everything to a series of actants. It’s too baggy a concept, and I think there are important differences that get elided by just saying that objects, people, texts, etc., are all actants.



  2. I know it’s just a short intervention, but there seems to be a jump here from the cognitive/social having a material element to the material being primary. You say “The ideational is always material, and the constructivist is also a materialist.” And so far as that goes, sure – I’d think that was obvious (though I’m sure there are some who would object). But why doesn’t the cognitive/social drive the material? You say that it’s not as if people just got smarter between the 15th and 17th centuries – they got better technology – but that better technology had to come from people in a sense getting smarter. So, I guess I worry when the talk of assemblage starts to sound like it’s flipping the script on the position you’re criticising rather than really escaping from dichotomies like material/ideal, etc. And this relates to the questions you pose, well at least the first one.

    So a computer misidentifies someone as a terror suspect and someone is arrested falsely, which you suggest presents a new challenge to ethical/political attributions of responsibility – because the causal responsibility is distributed, and at least partly attributable to material but non-conscious objects or structures. But I wouldn’t think of this as a new problem of responsibility at all, but the same problem of responsibility returned. Causality has never been easily assignable in this way, and holding responsible is at its core a political act, one which may require someone to be accountable for what they cannot control (at least not completely). It seems the intellectual difficulty of these kinds of questions is reduced when we accept that the materiality (of a security/surveillance system, for example) is also defined by ideality/constructivist elements, in that its meaning and its agency are defined at least in part by what we think about them – do we assign culpability to the guy who writes the algorithm, or the politician, or the police officer? The ethical/political responsibility, and in some sense the causal responsibility (so far as we might accept that this too is a construction we have for ourselves – in the vein of Suganami here), are determined by how we think of them. This is not, despite protests I already anticipate, suggesting a distinction between ideal/material as most real, but a worry that you’ve overplayed the material moment and, possibly, remain tied to that dichotomy (which you of course may be quite comfortable with).


    • Joe, yes to your first point – I don’t think it does flip the script, but if this piece comes across as simply reversing the typical direction of determination (with materiality now determining sociality), that wasn’t my intention. The tension and the difficulty here is to maintain both (1) the distinction between technology/materiality and ideality/sociality, and (2) their necessary intermingling and reciprocal determination. I think much of ANT falls into the latter without recognising the former, whereas technological determinists and social constructivists tend to fall into the former without recognising the latter.

      Re: responsibility – agreed, the assigning of responsibility (and implicitly, agency) is and always has been a political act. What seems to me somewhat new is, first, the greater complexity involved in tracing out responsibility in large technological systems (and the subsequently greater reliance on technical expertise). Second, technological systems aren’t reducible to the intentions behind them and have their own capacity to act. But despite their capacity to act, it seems entirely meaningless to assign responsibility to these technical components. Agency and decision-making are increasingly distributed into automated systems (ones which we increasingly don’t understand) – so how does this affect notions of responsibility? I don’t have an answer, but I think it’s an important question. I’m not suggesting there’s any sort of epochal break here either; it’s just a matter of these issues coming to greater prominence.

      I don’t quite get your suggestion though – yes, we have intuitive (and sometimes reasoned) conceptions about the meaning and agency involved in a sociotechnical system. But it sounds circular to say: “how do we assign responsibility to large sociotechnical systems? By looking at how we assign responsibility to large sociotechnical systems.” (The same problem with experimental philosophy trying to draw moral conclusions by asking people about their moral intuitions.) It doesn’t answer the question, it just outsources it to sociological processes.

      And hopefully we can avoid the realism/instrumentalism debate here!

      (And just a general point about the distinction between materiality/ideality – borrowing from Ray Brassier, a distinction is not a dualism, and philosophy has always proceeded by creating distinctions. It’s not something to be shirked out of some misguided fear of deconstruction. So I’m quite happy to draw a distinction between the material and the ideal.)


      • I figured as much on not wanting to overplay one side over the other – short interventions are what they are.

        On the example – the difference of what I have in mind from experimental philosophy’s use of moral intuitions (which I think is rubbish) is that I am suggesting that we have to study how and why we understand responsibility in the way we do (making intuitions explicit and considering their underlying beliefs and material elements as well), while also considering the way that understanding falls short (for example because of expanding systems), and then still make a choice and take action – decide how to think about responsibility in response to the limitations we confront, which isn’t going to be complete or unpolitical, but rather tries to overcome the problems we face as best we can. But in this, I would still see great continuity with past social/ethical experience and would be cautious about identifying where the novelty is in our current condition. Are we really less able to understand the systems beyond our control? How do we know? Compared to what? etc. But that’s just my general scepticism about novelty – not because I think it’s impossible, rather that it very often gets oversold.


      • Ya, I agree with most of that – just where you emphasise continuity, I emphasise at least some discontinuity. (Which is pretty standard for our debates!) I think there are significant ways in which large sociotechnical systems are beyond our control, and also novelties emerging from automated and intelligent machines. I think diptherio’s example below of the Nuremberg trials is the closest parallel, but even here it’s easier to follow a chain of command than it is to follow a machine-learning algorithm.


      • Hey Nick – actually I think diptherio’s comment clarifies a distinction I was struggling to make. I agree that there are new challenges with larger and more automated systems insofar as we want to understand the causal responsibility in those systems. Where I see continuity is in the need to assign responsibility as a social and political act that cannot be traced back onto clear causal responsibility; the leap that assigning responsibility requires stays much the same. Nuremberg, and international criminal law generally, is a good example of this – assigning command responsibility doesn’t really do a very good job of tracing causal responsibility, and it is incredibly difficult to indict anyone for war crimes without making the leap over that gap, especially when we are trying leaders. It’s rare that responsibility can be traced back clearly, and I tend to think it never has been – even though the problem gets more difficult and complicated as the social world develops. I think the gap has to be maintained, and we have to recognise that when we assign responsibility and dispense punishment there is always ambiguity, excess, sacrifice, violence. The key political question for me is: if that is part of what assigning responsibility is about, then who is it who is sacrificed? So long as it’s the poor, the weak, the individual criminal or soldier, the taxpayer or the benefit scrounger, etc., and not those with power and control, we’ve got a problem of responsibility that is as much political as it is causal – and perhaps primarily a political problem.


  3. Re: responsibility in (largely-automated) assemblages. I think the answer, in a nutshell, is to make responsibility system-wide, i.e. make every individual involved in the assemblage accountable in some way. In your example, for instance, I would hold the arresting officers accountable for not confirming the algorithm’s prediction before executing the arrest; the institutional decision-makers for signing off on the policy of having people automatically arrested on the basis of an algorithmic output; and the programmers and the company for not making sure that the non-technical bureaucrats understood the limitations of the software. If accountability is legally concentrated in one, or a few, individuals (or worse, in no particular individual), abuses will occur and mistakes will proliferate on the “it’s not my problem”/”that’s above my pay-grade” principle.

    We might make a connection here to the Nuremberg trials and their rejection of the “I was ordered by a superior” defense made by soldiers involved in “war crimes” (all war is a crime, but I digress…). The basic idea seems to be this: the system is one thing, but when it comes down to it, we all have a responsibility to act humanely, regardless of the dictates of the system. I would have this principle applied across all aspects of social life, not just behavior during times of war.

    ISTM that as our society has become more technologically advanced, there has been a concomitant, and largely undiscussed, move towards the (attempted) mathematization of human behavior as a matter of course. This goes back at least to the dawn of the industrial revolution and the advent of mechanized time keeping, along with its industrial application in the form of time-card devices that can record workers’ comings and goings down to the second. However, as behavioral economics has shown, human beings are not readily susceptible to mathematical description (much less prediction). We need to recognize this fact and ensure that there are human checks at every level of the mathematized systems that we rely on in modern life (and that, of course, implies humans who are individually responsible for checking that the outcomes generated by the system are humane). In short, we need to build some “human flex” into our systems.

    Unfortunately, the overall social dynamic, at the moment, seems to be moving away both from human accountability for the outcomes of our social assemblages and from allowing for “flex” in our systems. In the corporate world, human checks are expensive, so mathematizing and automating are seen as the “efficient” things to do (High Frequency Trading is the most obvious example, as is Wells Fargo’s perverse accounting system). On the government side, President Obama seems determined to make life hell for anyone who dares point out the abuses or failings of our bureaucratic system, while the Justice Department declares HSBC ‘too big to jail’ and Wall Street con-artists (CEOs) claim ignorance about how their institutions operate while lining their own pockets, etc., etc.


    • diptherio, yes, pretty much agreed on all counts here. The difficulty, of course, is actually implementing such a measure of responsibility in a complex system – here’s where politics rises up again. But it seems to me like something needs to change as these sorts of complex systems become ever more pervasive and influential. (Actually, thinking about it now, a better example than the surveillance algorithm might be something like flash crashes. In most cases, it’s incredibly difficult to determine what actions set the trading algos off into a spiral. Where does responsibility lie in such cases?)


      • Maybe we need a simple rule: if we can’t figure out how to work human, personal accountability into an automated system, then we don’t use that system. HFT would probably have to be banned altogether on that account, which might not be such a bad thing. A lot of the old-timey open-outcry guys would probably love it.

        Maybe the solution to some of these problems that result from new technologies is simply not to use the technologies…a luddite’s suggestion, I know.

