Accelerationism Without Accelerationism

The second post in our forum on Nick Srnicek and Alex Williams’ Inventing the Future, from Steven Shaviro. Steven is the DeRoy Professor of English at Wayne State University. He blogs at The Pinocchio Theory.


The term accelerationism was coined by Benjamin Noys in 2010, in order to designate a political position that he rejected. In Noys’ account, accelerationism is the idea that things have to get worse before they can get better. The only way out of capitalism is the way through. The more abstract, violent, inhuman, contradictory, and destructive capitalism becomes, the closer it gets to tearing itself apart. Such a vision derives, ultimately, from the famous account of capitalism’s inherent dynamism in the Communist Manifesto. For Marx and Engels, capitalism is characterized by “constant revolutionising of production, uninterrupted disturbance of all social conditions, everlasting uncertainty and agitation… All that is solid melts into air, all that is holy is profaned.” Far from deploring such developments, Marx and Engels see them as necessary preconditions for the overthrow of capitalism itself.

The trouble with accelerationism, according to Noys, is that it celebrates “uncertainty and agitation” as revolutionary in their own right. It doesn’t have any vision of a future beyond disruption. In the 1970s, Deleuze and Guattari suggest that we need, not to withdraw from capitalism, but “to go still further… in the movement of the market, of decoding and deterritorialization.” At the same time, Jean-François Lyotard exults over capitalism’s “insane pulsions” and “mutant intensities.” By the 1990s, Nick Land ecstatically anticipates the dissolution of humanity, as the result of “an invasion from the future” by the “cyberpositively escalating technovirus” of finance capital. Today, transhumanists see Bitcoin, derivatives, algorithmic trading, and artificial intelligence as tools for destroying the social order altogether, and for freeing themselves from the limits of the State, of collectivity, and even of mortality and finitude. This is what happens when “creative destruction” – as Joseph Schumpeter calls it, in his right-wing appropriation of Marx – is valued in and of itself.

In 2013, responding to all these currents, Nick Srnicek and Alex Williams published their “#Accelerate: Manifesto for an Accelerationist Politics.” In this text, they seek to reclaim accelerationism as a genuine project for the left – one that can pick up the tools of capitalist modernity, and detourn them to liberatory ends. This is not a matter of celebrating disruption for its own sake; Srnicek and Williams emphatically reject Nick Land’s “myopic yet hypnotising belief that capitalist speed alone could generate a global transition towards unparalleled technological singularity.” Instead, Srnicek and Williams return to Marx’s own suggestion that…

Modelling Worlds: The Politics of Simulation

A guest post from Nathan Coombs, who is an incoming Leverhulme Early Career Research Fellow at the University of Edinburgh. He edits the Journal of Critical Globalisation Studies, and is the author of the forthcoming book, Politics of the Event: From Marxism to Contemporary French Theory (Edinburgh University Press, 2015). His current research interests are in financial algorithms and financial regulation. He can be contacted at n.coombs (at) fastmail.co.uk


 


Over the last decade, scholars have become increasingly interested in what we do when we make use of models and simulations. An emerging consensus – often legitimated through reference to Bruno Latour’s Actor-Network Theory – is that mathematical models and computer simulations are not passive tools but rather a material force in their own right. Agents may employ such technologies in order to achieve pre-determined ends, but the technologies themselves have an effectivity that exceeds their users’ intentions, setting in place path-dependencies that circumscribe the range of political and economic possibility.

This concern with the politics of technology cuts across multiple disciplines including Sociology, Communication Studies, International Relations, International Political Economy, and Management Studies. However, the Social Studies of Finance (SSF) has perhaps gone furthest in exploring the practical implications of modelling and simulation technologies. Applying Austinian and Barnesian notions of performativity, researchers in this field have sought to grasp the way in which economic models shape markets, and to dig into the mathematical and technical details that underpin this process.

Donald MacKenzie’s book An Engine, Not a Camera (2008) is exemplary of this approach, and a common point of reference for scholars in SSF and all the aforementioned disciplines. In his analysis of the development and uptake of the Black-Scholes option-pricing model in the 1970s, MacKenzie aims to show how the model’s employment of the efficient market hypothesis – where stock prices are considered to accurately reflect their risk – led to a period in which the pricing of options came to converge on the prices predicted by the model. The point of MacKenzie’s analysis is not to endorse the neoclassical economic assumptions codified in the model. Rather, it is to point out how models serve to socially facilitate evaluation practices in the face of complexity, uncertainty, and epistemological opacity. On this basis, a model can also contribute to financial instability when it is both widely employed and based on assumptions that are confounded by ‘real world’ contingencies.
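To give a sense of the object MacKenzie is discussing, the Black-Scholes formula itself is compact enough to sketch in a few lines. The snippet below is a minimal illustration of my own (the function and parameter names are not MacKenzie’s), pricing a European call option from the spot price, strike, risk-free rate, volatility, and time to maturity – the “engine” whose outputs, on MacKenzie’s account, traders increasingly treated as the benchmark for actual market prices.

# Minimal sketch of the Black-Scholes price for a European call option.
# Uses only the Python standard library; all names here are illustrative.
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal cumulative distribution function, via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(spot, strike, rate, vol, maturity):
    # d1 and d2 are the standard Black-Scholes auxiliary terms.
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * maturity) / (vol * sqrt(maturity))
    d2 = d1 - vol * sqrt(maturity)
    return spot * norm_cdf(d1) - strike * exp(-rate * maturity) * norm_cdf(d2)

# Example: a one-year at-the-money call on a 100-dollar stock,
# with a 5% risk-free rate and 20% volatility (roughly 10.45).
print(black_scholes_call(100.0, 100.0, 0.05, 0.20, 1.0))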


Who’s Afraid Of Economic Models?

A guest post by Nathan Coombs, a doctoral student in Politics and International Relations at Royal Holloway. Nathan’s work focuses on the relationship between metaphysics and political ideology. Nathan is the author of ‘The Political Theology of Red Toryism’, published in the Journal of Political Ideologies, 16(1), February 2011, as well as a number of other papers. He is also an Editor of the Journal of Critical Globalisation Studies, an open-access peer-reviewed academic journal which should be a stimulus to us all. Images by Pablo.


If there is a point of unity for strong (non-Keynesian) critics of neoclassical economics, it is their shared rejection of modelling. This is not to say that such authors shun all use of abstraction, idealisation, and quantification in favour of purely qualitative, empirical efforts at explanation. Rather, modelling is held out as a practice whereby mathematical attempts to grasp economic laws become unhinged from reality; where abstraction begets abstraction for its own sake. For example, in his famous methodological treatise, Economics and Reality, Tony Lawson firmly demarcates the form of abstraction he recommends for economics from the practice of modelling, stressing that “it seems vital that I indicate why the procedure to which I refer does not at all reduce to the activities of the ‘modelling’ project in question” (Lawson 1997, 227).

For different reasons converging on the same end, advocates of the most active strand of Marxian economics, working from the Temporal Single System Interpretation (TSSI), are equally averse to modelling, associating it with the simultaneous, equilibrium interpretations of Marx’s labour theory of value that they regard as diverging from the correct representation of the theory. Inside the pages of Andrew Kliman’s recent book, Reclaiming Marx’s “Capital”, the word ‘model’ occurs only negatively, in association with what he argues are flawed abstractions of the theory, from Okishio’s theorem through much twentieth-century political economy (Kliman 2007, 44, 48, 66, 101, 176). The idea that Marx’s Capital might itself be considered a theoretical model of the economy is out of the question.
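The distinction at stake can be sketched schematically (a gloss of my own, not Kliman’s notation). In a simultaneist reading, the vector of unit values v is determined by a system in which inputs and outputs are valued at the same moment:

v = vA + l

where A is the matrix of input coefficients and l the vector of living labour. In a temporal single-system reading, by contrast, inputs are valued at the (labour-time expression of the) prices actually paid for them when production begins, so that the values of one period depend on magnitudes inherited from the previous one:

v_{t+1} = p_t A_t + l_t

The first equation closes into exactly the kind of equilibrium model the TSSI authors reject; the second does not resolve into a simultaneous system at all.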

What explains this resistance to modelling for critics of the status quo in economics?
