“Financial climate data needs to be shaped, it needs to be curated – but by who?”: An interview with Dr Julius Kob
Graphic rendering of a big city with high-rise buildings in vivid colours. In the centre, you see an illustration of the Earth pictured in the streets among the buildings.

Data can be understood as a representation of something which, in any kind of way, needs to be produced, needs to be corrected, needs to be shaped; it needs to be curated. All these kinds of things need to happen to it in order to make sense of it as a whole; what it should represent and what it ought to represent lies in the eye of the beholder – Julius Kob

Asli Ates (University of Sussex) and Robert Bergsvik (Wageningen University) interview our seminar speaker, Dr Julius Kob (Warwick Business School), about his recent work on financial climate data.

Robert and Asli: So, how did you become interested in data and knowledge infrastructures in finance?

Julius: My undergrad was a major in sociology and a minor in psychology at the University of Hamburg, and there I had heavy exposure to scholars such as Foucault and Deleuze. I started my undergrad in 2007, and one year in, the whole financial system seemed like it was imploding with the 2008 financial crisis. This sparked my interest in financial markets. However, in the German-speaking sociological literature, there wasn’t too much on financial markets. It felt very, very niche. I wrote my undergrad dissertation on the financialisation of the self without actually working with the concept of financialisation as it is known today; I just picked up on it from Paul Langley at the time. What really interested me in the end was one little case study on credit risk ratings, suggesting how big data, including social media data, represented this lavish land of data points for retail credit default risk calculation. To me, this seemed like an absolute nightmare. Absolutely terrible. But it led me to pursue a master’s thesis on the subject of big data in credit risk assessments from a Foucauldian perspective at the London School of Economics. So, I worked with perspectives on data even though I did not engage much with science and technology studies at the time. I was, however, introduced more to this in London.

Then I got to know Martha Poon, a great scholar. She’s done work on the FICO score, which is the original US credit default risk score. She happened to be a visiting scholar at the School at the time, and I sat down with her, and she talked to me for four hours. I think in those four hours, I may have spoken for 10 minutes or something. The rest was all her, and it was incredibly helpful and instructive. She told me that a lot of work was being done on big data and credit scores already. If it was a matter of getting the degree, then by all means, do it, but if you want to do something interesting, she said, I have this mental shelf of unhatched eggs of research ideas. One of these eggs was catastrophe risk modelling. I was fascinated, and so I took it. Martha and I then went to an event at Lloyd’s in 2014, where an open-source catastrophe modelling platform was released for the first time, financed by Lloyd’s and a lot of other big firms in the insurance industry.

I don’t know if you’ve ever been in the Lloyd’s building, but it’s this Bowellist architecture, looking like a refinery in the middle of the City of London. When you go downstairs, you go into the old library of Lloyd’s, an old wooden place inside this gigantic metal and glass building. It’s really weird. Martha pointed out to me which of the people in the crowd were commercial risk model vendors, who were the reinsurance and insurance people, and so forth. And we observed how they discussed the issues and needs around catastrophe modelling in finance. It was really fascinating. And I just chatted with random people afterwards over a beer, and then one of the vendors took us to their office. While they showed us around, I saw this poster with the title ‘Mortality Summit’. I thought, “Wow, that sounds gruesome!”, but obviously, it made sense because that’s the oldest field of insurance risk calculation, so an analytics provider in this field is, of course, engaged in this risk class. That’s where a lot of risk statistics actually come from: mortality tables. But they were primarily providing risk modelling for ‘natural’ catastrophes, which really interested me. So, I got into that and wrote my master’s thesis on it. This is what got me into science and technology studies, specifically catastrophe finance and modelling. And I stuck with it. And I’m quite glad that I did because, to this day, there are not too many people who’ve covered this, and I think it is very informative to the whole issue of climate-related finance as well, which I got more into in my postdoc work at Warwick.

Asli and Robert: A lot is going on in finance when it comes to sustainability. Can you help us understand how finance is responding to sustainability or climate change issues?

Julius: So the idea, coming especially from Mark Carney, the Bank for International Settlements, and Michael Bloomberg, is that capital markets enable economic activity in the real economy and that capital flows can, if applied accordingly, shift the world towards sustainable climate pathways – whatever that means in detail. So in the Paris Agreement, we have this as a fundamental pillar of how market societies imagine managing the climate crisis – I think we should be very careful with this promise, but that’s what drives climate-related private finance today.

My colleagues at Warwick Business School and I are primarily looking at the on-the-ground practices in the field. This is a relatively big, four-year project led by Katharina Dittrich, who comes at it through an organisational studies and practice-theory lens. We are doing primarily ethnographic field research at different financial institutions, financial service providers, investor initiatives, NGOs, and so forth. Katharina was already very interested in STS when she planned the project and then, I guess, reinforced this perspective with some social studies of finance and economic sociology lenses as well. Matthias Taeger, who joined the project last year, also comes from this perspective, with a strong additional focus on governance. And through these perspectives, what we see in the field is what we started to call a socio-material “Trinity”. Most of what people have been doing in climate-related finance over the past few years (we started our observations in 2020) tends to be concerned with the intersection of three different but very interconnected and interdependent things: frameworks, metrics and data.

Frameworks are all sorts of emerging guidelines around climate-related finance, such as new reporting frameworks on climate risks, carbon accounting, and climate target-setting protocols. They tend to be rather high-level – sort of policy- or industry-induced – but have a technical focus on how financial institutions are supposed to deal with climate-related risks and what they should do to ‘align’ their activities and portfolios with more ‘sustainable’ climate pathways – like the Task Force on Climate-related Financial Disclosures, TCFD, the Net-Zero Asset Owner Alliance, NZAOA, or the Glasgow Financial Alliance for Net Zero, GFANZ.

And then you have metrics. Coming from the Callon-inspired STS perspective, we can call these devices – any kind of model or metric or whatever. The TCFD established the weighted average carbon intensity, or WACI, metric to calculate and normalise carbon footprinting for portfolios. This is pretty much established by now and, I think, not very much questioned anymore, even though it probably should be. Or things like ‘Climate Value at Risk’ metrics, which is primarily an MSCI product [formerly Morgan Stanley Capital International, a financial services corporation] – even though it was Carbon Delta who created it. This points to a broader dynamic in the data and analytics industry in this field: a lot of small, specialised, niche companies get bought by bigger players. So, Moody’s, for example, has become a bigger player in the field too and has been buying up a lot of things, including RMS [Risk Management Solutions], a leader in catastrophe modelling, which is sort of being folded into the climate space now. I would say this market is probably not entirely consolidated yet, but it’s much more consolidated than, say, five years ago. This concentrates a lot of power around analytics and data at a smaller number of bigger players.
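[Editor’s note: the TCFD defines WACI as the portfolio-weighted average of each holding’s carbon intensity, i.e. its Scope 1 and 2 emissions per million dollars of revenue. A minimal sketch of the calculation, with invented company figures:]

```python
# Weighted average carbon intensity (WACI), as defined by the TCFD:
# the portfolio-weighted average of each holding's emissions per $M revenue.
# All figures below are invented for illustration.

portfolio = [
    # (name, investment value in $M, Scope 1+2 emissions in tCO2e, revenue in $M)
    ("Company A", 40.0, 120_000, 900.0),
    ("Company B", 35.0, 15_000, 450.0),
    ("Company C", 25.0, 600_000, 2_000.0),
]

total_value = sum(value for _, value, _, _ in portfolio)

# WACI = sum over holdings of (portfolio weight) x (emissions / revenue),
# expressed in tonnes of CO2-equivalent per $M of revenue.
waci = sum(
    (value / total_value) * (emissions / revenue)
    for _, value, emissions, revenue in portfolio
)

print(f"WACI: {waci:.1f} tCO2e per $M revenue")  # ~140 for these figures
```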

So, we have different metrics, we have frameworks, and then you have the data. Data can be a lot of things and includes generic financial and economic data, such as company revenue, capital expenses, ownership relationships, and so forth, but here we are primarily talking about company emissions data and also, by now, companies’ climate target and planning data. This data is both reported through frameworks and feeds into the metrics.

So, this is a kind of triangle of things. And the way we think in [our] project is that they are interdependent, and they influence and predetermine each other quite a lot. Finance kind of uses data essentially as a sensing thing, like a representation of what is out there in the world. And then they kind of make sense of that by putting it in a particular context and interpreting certain aspects of it by feeding it into different metrics. What kind of data gets reported and is out there, if it is available or not, is mainly determined by some of these frameworks. They also take part in determining what kind of metrics are considered useful or not and what data can or should be fed into them. What kind of metrics development is pushed forward is also partially very much dependent on the frameworks. A lot of stuff on the practical side of things happens in between these three things at the moment. To what degree this is actually helping to deal with the climate crisis is a different question with a rather depressing answer so far.

Robert: It sounds to me as if actors like MSCI are becoming intermediaries of governance processes.

Julius: I don’t know. Yes, in a way, you could say that, because this is a world where nothing, or very few things, seem to be settled. And again, knowledge production around these kinds of risks is sort of the primary way in which finance starts ingesting the problem of climate. And a lot of knowledge and information is primarily produced by these commercial providers. I don’t want to overplay data providers and analytics providers too much. But obviously, coming from this kind of perspective on [catastrophe] modelling and using this space – established already in the 90s – as an example, I would say that such analytics providers can enable markets in the first place. This certainly was the case for the catastrophe risk market as it is today, so that was really important. And it shaped a lot of things, including underwriting and capital management, and, as I argued in the SKAPE talk and in my PhD research, also at least partly how hazard-prone areas are maintained and disaster is co-produced. This very central position of these providers in the knowledge infrastructures around disaster risk means that they sit between public and private, company-owned information and data. So the knowledge they produce is always somewhat proprietary, and their models and metrics are, at least at their core, opaque.

And I think the same could become true for climate finance – whatever that is – at least in terms of how to make sense of it, because the idea is to implement climate knowledge in practice in financial organisations via the notion of risk. A lot of their practice is organised around analytics; that’s where their competitive edge comes from. It’s primarily (or often) produced by knowing something others don’t. There is information asymmetry, but it’s more about interpreting data in the right way or not – whatever the right way is. This is not at all the case yet in climate finance: most of this is used for reporting and climate target setting. But what does it mean even for this? And if climate ever becomes a KPI in investing, the knowledge produced around it will become a factor in competition.

Robert: Continuing the issue of data, there is now an increasing focus on making use of Earth Observation data to address climate risks. Based on your work on insurance, do you think this industry might make use of this data? And what can be the implications of this?

Julius: I’m pretty sure that they’re using it already. Insurance, especially reinsurance, has become this gigantic field of knowledge production around environmental risks, including the data they, and only they, have on the objects they insure. Actuarial practices are, after all, highly data-driven. There’s a tendency, especially for reinsurance shops, to buy smaller data-producing companies and integrate them into their internal data flows. They may also be propping up certain companies that sort of build up such surveillance systems in a way. But it’s not just about having the technical knowledge on how to process data. Insurance companies can also use this data to scale their business: if you invest in a particular company that can provide data on a particular sort of place, you can grow the company and then do the same thing for other regions. And, again, large service providers in this industry have been buying up start-ups specialising in satellite and other data for a while already.

Robert: So, you have a new market here, right?

Julius: Yes, exactly. I know a few people who come from the whole catastrophe insurance space who are either starting remote sensing companies or using the money that they made as venture capital to support the development of such companies. This starts, obviously, in very developed regions prone to hazards, such as California, where there are a lot of earthquakes. These companies collect information on soil conditions, liquefaction, etc., using affordable sensors. This information is then input for risk-based pricing. I think the danger here – and this is my question back to you – is this: given that data is not neutral but produced for specific purposes, to what extent is the data already shaped by the demands of the epistemic regime it is supposed to flow into? Specifically, in a risk knowledge-based system, on which market societies rely quite fundamentally, insurance is the place we turn to – governments too – when we want to know about places at risk, how much damage has been incurred, how much loss has occurred, etc., not to mention public-private insurance programmes. If (re)insurance actors are deployed – either just as advisors or organisations we want input from, or to help shape policy and governance directly – how much of this will already be embedded in financial risk management and market imperatives and rationales?

Robert: Those types of questions are exactly what I am trying to address and build an understanding of in my research. Because it becomes a particular way of seeing or knowing a problem and how to go about addressing it. 

Asli: Considering that your work engages with criticism of proprietary data practices in finance, what are your thoughts on the promise of open data practices, and do you think “open finance data” would be more beneficial for wider society?

Julius: It’s a great and big question. Because there is this kind of implication that public data is always good, right? And I would say, yes, but there is a ‘but’… Because the problem is that data is not just lying around, right? It’s not something you just pick up on the street, and then it’s all comprehensive or makes total sense and so forth…

Data can be understood as a representation of something which, in any kind of way, needs to be produced, needs to be corrected, needs to be shaped; it needs to be curated. All these kinds of things need to happen to it in order to make sense of it as a whole; what it should represent and what it ought to represent lies in the eye of the beholder. To use the example of greenhouse gas emissions, something we explore in the project at Warwick: you very rarely actually measure emissions with a sensor or the like; instead, they get calculated by, ideally, whoever emits them – that’s already quite an intervention that produces and shapes companies’ carbon emissions data. But in and of itself that is not very helpful, because how much counts as little or a lot of emissions? So even very granular emissions data needs to be set in relation to other emissions – this is carbon accounting at the end of the day. I’m not an accountant, but accountants know this very well: producing numbers or data, constructing accounts of them and then comparing them to other accounts. That’s the whole deal – making things comparable, having some kind of consistency across accounts and so forth. So data needs to be made and changed in order to fulfil a particular purpose. But who does all this producing, shaping, curating, etc. of data? Who creates accounts that are somewhat consistent and comparable?
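[Editor’s note: a toy illustration of ‘calculated rather than measured’ emissions. Under activity-based carbon accounting, as in the GHG Protocol, an emitter multiplies activity data by emission factors; the amounts and factors below are rough, invented-for-illustration values, not authoritative ones:]

```python
# Company emissions are typically calculated from activity data rather than
# sensed directly: emissions = activity amount x emission factor.
# Amounts and factors below are illustrative only.

activities = {
    # activity: (amount, unit, emission factor in kgCO2e per unit)
    "natural gas burned": (500_000, "kWh", 0.18),
    "grid electricity": (2_000_000, "kWh", 0.21),
    "diesel for fleet": (80_000, "litres", 2.68),
}

total_kg = sum(amount * factor for amount, _, factor in activities.values())
print(f"Calculated emissions: {total_kg / 1000:,.0f} tCO2e")  # ~724 tCO2e
```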

So, within the climate space, to approach an answer to the first question, I think, yes, there are definitely going to be public data repositories; there are a few in the making. I’ll name three examples. One is the European Single Access Point, which is planned by the European Commission.

The second one is the Net-Zero Data Public Utility (NZDPU), which was initiated by French president Macron and Michael Bloomberg. It is essentially pushed and managed by GFANZ [Glasgow Financial Alliance for Net Zero], and it’s right now in the making. And I’m actually supposed to be part of the NGO and Civil Society Advisory panel from this year onwards – it’s going to be very interesting to see how this is going to work out. Because what they’re essentially trying to do is define all the data points they want to have data on, then – what I assume will follow – amend reporting frameworks accordingly, especially the ones that turn into regulation at some point, and then make this repository public so that everybody can sort of go about it. But then, the question is: who’s going to maintain this whole system? Who is going to produce, shape and curate the data so that accounts are somewhat consistent? (Let’s ignore the larger and more important question of a financially-induced and purpose-driven way of data shaping for a moment.) If you read through the proposal and the details of the report they published, the number of data points they want to have is going to be massive. And again, somebody needs to maintain and manage the system and the data. The question is, who is going to do that?

The third one would be OS-Climate, which is an open-source (OS) data platform initiated by the Linux Foundation, with a lot of drive from people like Mark Carney and David Blood, and it’s a creature of many involved companies, such as Amazon and Red Hat; I think Microsoft is in there as well. Goldman Sachs was in there for a while, and I think it is still supporting it. Allianz, the German insurance company, is very much invested in it. Ortec Finance, a Dutch data and analytics provider, is very active. So it’s a very tech-driven sort of thing. And they have different work streams focusing on different things, including metrics. One of them is what they call Data Commons. I don’t think they necessarily want to give everything away for free; I think they want to build a platform where reporting companies and providers can plug in data and then say, “Okay, this is the free stuff. And here’s our additional stuff that you can buy.”

And so, whichever of these things [the three emerging open data platforms/alliances], or maybe other emerging things, will make it, I don’t know. But there’s a lot of weight put behind them, especially the first two. And again, I don’t know to what extent the European Single Access Point and the NZDPU are going to be linked. I think, in the end, the NZDPU is kind of one level above because it tries to source regional data and curate it on a global level – so not only the EU but also Japan and so on… So, this is happening.

Asli: But as you say, data is just not lying around…

Julius: Exactly. It needs to be not only collected but also curated and changed, and so on – especially because reporting regimes differ by region and are not always mandatory, and companies differ in how well or how granularly they report. We’ve looked very deeply into emissions in particular because that’s fairly established, and you can see already how much work it is, primarily by data providers, to sort of streamline the data. It is starting very much now with climate target data, too, because that also goes into a lot of metrics. And you can see that target data is super unsystematic because it’s fairly new, and there is no regulation around it yet. So, if you cannot really compare a company to another company, or a project to another project, because they report or set targets differently – although they are getting better at it – somebody needs to manage this data as a whole. Tentatively, we’ve called what data providers are doing these days ‘minting work’, in analogy to the classic concept of Stinchcombe and Carruthers. But in order to make things comparable to one another, you sometimes need to change the very thing that you want to compare, which means that you need to create an account or, in this case, an account of accounts – a consistent repository of comparable accounts. Whoever is managing this account of accounts is going to have a sort of closed system that is consistent in itself, and so you depend on them. But if you have two or more of them, they will differ from one another. In that sense, it would be a good thing to have one massive one that is in itself consistent, but I have no idea how they want to do that, because the work that goes into building these accounts of accounts only for reported emissions is already massive. But I’m fairly sure that public repositories are going to come one way or the other.
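[Editor’s note: a toy sketch of the kind of streamlining or ‘minting’ work described here – harmonising emissions records that companies report in different units and with different scope coverage into one consistent account. Company names and figures are invented:]

```python
# Harmonising heterogeneously reported emissions into one consistent account.
# Real 'minting work' also reconciles fiscal years, scope boundaries,
# estimation gaps, etc.; this sketch only normalises units and flags scopes.

raw_reports = [
    {"company": "Alpha", "emissions": 1.2, "unit": "MtCO2e", "scopes": "1+2"},
    {"company": "Beta", "emissions": 950_000, "unit": "tCO2e", "scopes": "1+2"},
    {"company": "Gamma", "emissions": 310, "unit": "ktCO2e", "scopes": "1"},
]

TO_TONNES = {"tCO2e": 1, "ktCO2e": 1_000, "MtCO2e": 1_000_000}

harmonised = [
    {
        "company": r["company"],
        "tCO2e": r["emissions"] * TO_TONNES[r["unit"]],
        # Scope coverage is flagged rather than silently equalised:
        # a Scope 1-only figure is not comparable to Scope 1+2.
        "scopes": r["scopes"],
    }
    for r in raw_reports
]

for row in harmonised:
    print(row)
```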

How much will it be used? I don’t know, because the expectation in the market, I think, is that there will be a Bloomberg or FactSet or Yahoo Finance for climate-related data. And they are already there: MSCI, ISS, Sustainalytics, S&P Global, Moody’s – actually, Bloomberg itself has such data, too. And their data all differ to varying degrees, and for good reasons – I’m not saying that they manipulate anything with bad intentions; they just do things slightly differently, which is the proprietary aspect in all of this. But what they do, this ‘minting’ of climate-related data, is necessary. I would say, and this is maybe a bit provocative, that even if we had great reporting regimes and great adherence to them, and companies were really great at reporting climate-related things exactly the way they should, there would still be work necessary to make the reports comparable. And because we are talking about investee companies of financial institutions, this is super important for investment portfolios – not only for financial firms’ own reporting on so-called portfolio emissions, for instance, or their climate targets, but also for investment decisions. Because you then want to compare one company to the other: which one should I buy, hold, or sell? I think there’s a lot to do.

And I don’t really see this happening without any kind of proprietary influence. However the planned public repositories are going to collect, shape, curate and organise climate-related data, it surely will be informed by how the already existing providers of such data – with which financial institutions are already working – do it. And for now, we are only talking about emissions and, to some extent, target data, which are used to project companies’ future emissions trajectories. We are not even talking about the data on physical things, such as where exactly companies have their facilities and whether they are going to be at risk from flooding, etc. Where exactly are all the different Toyota factories? What kind of topography are they in? I am pretty certain that there will not be a global public surveillance system able to identify all these kinds of things in a way that is usable without any proprietary intervention. And when you then add the important aspect of risk models – their continuous calibration and further development based on such data – I start to see a pattern. This is where it may become similar to the catastrophe risk space again, where the most crucial knowledge is produced proprietarily and is fundamentally shaped by financial market imperatives, which do something very real to the thing they are risk managing.

 

Headshot of Julius Kob

Julius Kob is a Research Fellow at Warwick Business School, where he investigates the design and use of climate-financial data, tools and frameworks in the financial investment industry. He received his PhD in Sociology from the University of Edinburgh for research on natural catastrophe modelling in (re)insurance and capital markets. Julius is a member of SKAPE and has been a visiting researcher at the Robert L. Heilbroner Center for Capitalism Studies at The New School and the Center on Organizational Innovation at Columbia University. He holds an MSc in economic sociology from the London School of Economics and Political Science and a BA in sociology and psychology from the University of Hamburg.

Headshot of Asli Ates

Asli Ates is a doctoral researcher at the Science Policy Research Unit (SPRU), University of Sussex, working on the ERC-funded EMPOCI project under the supervision of Professor Karoline Rogge and Dr Katherine Lovell. In her PhD project, she looks in particular at the role of data (and policies) in increasingly connected electricity and mobility systems for accelerating sustainability transitions in the UK. She employs mixed-method research in her project and experiments with novel approaches for methodological advancement, including natural language processing techniques. She holds an MSc in Sustainable Development from SPRU, University of Sussex, and a BSc in Management Engineering from Istanbul Technical University. She has also worked on several projects in academia and the private and non-profit sectors.

Headshot of Robert Bergsvik

Robert Bergsvik is a PhD researcher in the Environmental Policy Group at Wageningen University. Robert has a background in political science and international relations, with a focus on critical political economy and global governance. Originally from Norway, he has a master’s degree in political science from Stellenbosch University in South Africa. Before joining Wageningen University, he was a research fellow with the Science Communication & Society group at Leiden University, where he analysed the culture of public engagement at Dutch research institutions. He is one of two TRANSGOV PhDs, with a focus on unpacking the dynamics of international climate finance from the perspective of transparency as a tool of climate governance. This includes interrogating the role of digitally-enabled ‘radical transparency’ in pushing new directions for climate finance, such as the introduction of parametric climate risk insurance initiatives.

 

Cover photo credit: This image is The City of the Captive Globe, a vision of New York produced in 1972 by the architect Rem Koolhaas and Zoe Zenghelis, one of his colleagues at the Office of Metropolitan Architecture, with the artist Madelon Vriesendorp.