A guest post from Nathan Coombs who is an incoming Leverhulme Early Career Research Fellow at the University of Edinburgh. He edits the Journal of Critical Globalisation Studies, and is the author of the forthcoming book, Politics of the Event: From Marxism to Contemporary French Theory (Edinburgh University Press, 2015). His current research interests are in financial algorithms and financial regulation. He can be contacted at n.coombs (at) fastmail.co.uk
Over the last decade, scholars have become increasingly interested in what we do when we make use of models and simulations. An emerging consensus – often legitimated through reference to Bruno Latour’s Actor-Network Theory – is that mathematical models and computer simulations are not passive tools but rather a material force in their own right. Agents may employ such technologies to achieve pre-determined ends, but the technologies themselves have an effectivity that exceeds their users’ intentions, and they set in place path-dependencies that circumscribe the range of political and economic possibility.
This concern with the politics of technology cuts across multiple disciplines including Sociology, Communication Studies, International Relations, International Political Economy, and Management Studies. However, the Social Studies of Finance (SSF) has perhaps gone furthest in exploring the practical implications of modelling and simulation technologies. Applying Austinian and Barnesian notions of performativity, researchers in this field have sought to grasp the way in which economic models shape markets, and to dig into the mathematical and technical details that underpin this process.
Donald MacKenzie’s book An Engine, Not a Camera (2008) is exemplary of this approach, and a common point of reference for scholars in SSF and all the aforementioned disciplines. In his analysis of the development and uptake of the Black-Scholes option-pricing model in the 1970s, MacKenzie aims to show how the model’s employment of the efficient market hypothesis – according to which stock prices accurately reflect their risk – led to a period in which the pricing of options came to match the model’s predictions. The point of MacKenzie’s analysis is not to endorse the neoclassical economic assumptions codified in the model. Rather, it is to point out how models socially facilitate evaluation practices in the face of complexity, uncertainty, and epistemological opacity. On this basis, a model can also contribute to financial instability when it is both widely employed and based on assumptions that are confounded by ‘real world’ contingencies.
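For readers unfamiliar with the model at the centre of MacKenzie’s story, the Black-Scholes price of a European call option can be sketched in a few lines. The function names and the illustrative parameter values below are my own, not drawn from MacKenzie’s text; the point is simply to show how compact the device that reshaped options markets actually is.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function, via erf."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S: float, K: float, r: float,
                       sigma: float, T: float) -> float:
    """Black-Scholes price of a European call option.

    S: current stock price; K: strike price; r: risk-free rate (annualised);
    sigma: volatility (annualised); T: time to expiry in years.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# An at-the-money call: one year to expiry, 5% rate, 20% volatility.
print(round(black_scholes_call(100, 100, 0.05, 0.2, 1.0), 2))  # → 10.45
```

The model’s performative force, on MacKenzie’s account, lay not in the formula’s complexity but in its portability: once traders could compute a ‘fair’ price from a handful of observable inputs, quoted prices drifted towards the model’s output.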
One of the broad implications of the approach pioneered by SSF is that there is value in a micro-constructivist orientation, grounded in in-depth empirical work of a predominantly qualitative nature. This is not meant as an alternative to using macro-level, longitudinal data to infer broad trends of critical import. Rather, the point is that opening up the micro-level as a site of investigation can offer more than merely descriptive insight. Credit rating agencies’ calculations, central bank models, trading algorithms, regulatory architectures – all these ostensibly technical ‘market devices’ constitute, in this view, ‘objects’ with a history, an intimate connection to social practice, and an implicit politics. Yet only when they are explored with a commitment to the empirical – avoiding, for instance, treating them as mere instantiations of the calculative logic of instrumental reason (or of neoliberal ideology) – do they promise to yield unexpected findings of both empirical and theoretical significance. Such findings may appear slight compared to those afforded by a macroeconomic perspective, but through a micro approach scholars can better elucidate the nuts and bolts of our social system. The upshot is that the SSF approach can form an important part of broader efforts to understand and question the logics of our contemporary world.
Let me use the debate over financial regulation as an example. It is quite common in the extant literature to assume that financial regulators become ensnared in a process of ‘regulatory capture’, wherein they end up representing the interests of financial institutions rather than those of the public (which they are officially entrusted to protect). Much of the controversy over the growth of high-frequency trading (HFT) and market fragmentation, for instance, is seen through the prism of collusion or complicity between regulatory bodies and HFT firms. By creating a market micro-structure favourable to the latter’s activities, regulators are said to undermine market stability, along with the chance of the ordinary ‘woman on the street’ having a fair shot in the stock market. Or so the argument goes.
In my own research on the next phase of Europe’s Markets in Financial Instruments Directive (MiFID II), I have seen a more complex picture emerging.
MiFID II proposes supervisory measures such as algorithmic registration, algorithmic testing procedures, and training protocols for those who operate algorithmic trading systems. Initial industry outcries over its ‘draconian’ measures seem to have given way to a more sanguine attitude. Much of the debate has focused on the definitional nuances of what constitutes algorithmic trading as opposed to HFT. And the compliance guidelines proposed by the European Securities and Markets Authority (ESMA) remain vague about what algorithmic registration, testing and training actually constitute in practice, with the details delegated to national competent authorities and the compliance departments of trading firms. Overall, it is a complex picture, full of ambiguities and demanding further empirical attention before it can cohere into a satisfying explanatory narrative.
Although none of this complexity leads me to discount the ‘regulatory capture’ hypothesis, I believe it does indicate the likely inadequacy of any explanatory framework taken off-the-shelf. Perhaps regulatory bodies are at a loss as to how to respond to the epistemological problems posed by regulating algorithmic technologies that interact dynamically with one another across markets. Perhaps there is a genuine political desire in Europe to rein in the excesses of financial markets, but uncertainty over how best to implement such a plan under current technological constraints, and in keeping with the legal structure through which European financial regulation is enforced. Perhaps regulators remain beholden to an image of how stock markets work that has not yet registered how different the predominantly algorithmic trading environment has become, and are proposing rules that may have teeth but no bite. The point is that it is impossible to know in advance of empirical analysis of all these technological, legal and practical details.
Certain critics of SSF and its methodologies – such as The Guardian’s Aditya Chakrabortty or the renowned historian of science Philip Mirowski – have argued that the field tends to get lost in the ‘cogs’ of its studies, and promotes an apolitical approach which de facto endorses its object of study rather than living up to the duty of academic autonomy and critique.
Mirowski, for example, claims that the “effect of the performativity idea on that literature is to have sociologists repeating and recapitulating economists’ own stories and never challenging their accounts. They never compare what they say they do with what they really do”.
In this view, the performativity thesis merely affirms the given frameworks of modelling practices, albeit through ever-higher levels of empirical detail. With their focus on technical mechanisms, cultural practices, and institutional structures, SSF scholars are charged with losing sight of the ideologies and interests at play.
While it would be churlish not to concede that the dangers these critics point to are real ones, it was our conviction at the Journal of Critical Globalisation Studies (JCGS) that scholars looking at modelling and simulation practices are making a valuable contribution to the study of our contemporary world. What is more, we set out convinced that there was no essential incompatibility between, say, critical International Political Economy and approaches more closely connected with SSF. We deliberately left the focus of our issue on ‘Modelling Worlds: The Politics of Simulation’ open to a diverse range of perspectives, methodologies, and disciplinary groundings in order to invite dialogue around the subject area. Hence, although articles about economic and financial models constitute the core of the issue, we also have an article about the artistic simulation of off-shore financial practices, and a critical theory piece taking issue with the claims to objectivity of ‘big data’ social media research – a diversity which we believe demonstrates our commitment to a broad conception of critical scholarship. Providing a site for interdisciplinary exchange has always been a prime objective of the JCGS, and it is something we will continue to pursue in future issues.