Episode Transcript
Xabier:
Hello everyone, and welcome to Ahead of the Curve, the LSEG Post Trade Solution podcast series.
Today, we'll be talking about AI. It's been, what, three years since the release of the first ChatGPT, and it's been a roller coaster. And I would say the last 6 to 12 months, we've seen a massive momentum in the quant and risk communities in terms of adoption of AI.
Probably motivated by the fact that the new AI agents are showing significant improvements when compared to their predecessors.
So today, we really want to focus on that.
We want to talk about real examples of where the quant community is embracing AI.
We want to talk about how we are utilizing AI for our services and do a little bit of forward thinking.
I am Xabi Anduaga, a partner in the LSEG Post Trade Solution quant services team, and I have the pleasure to have two colleagues with me today.
On my right, Stuart Smith, who runs our risk services.
Welcome, Stuart.
Stuart:
Thank you.
Xabier:
And Joey O'Brien, who is a principal consultant on our quant services team. So welcome, Joey.
And I would like to start with you, Joey. So your day-to-day job is very similar to what a quant does in an investment bank or at a buy-side firm. You basically deal with the development of models, the testing of models, the calibration of models. What are the things that you are seeing these new AI agents being better at, and what are you using them for? And can you tell me a little bit more about the kinds of things you see working pretty well, but also areas where they're still not performing the way you would expect?
Joey:
Thanks, Xabi. So like you said, it's roughly three years since we've had the first iteration of these models. At that point, they were, to be honest, quite dreadful in the quant space. They could not do anything in practice. Over the past year, I think, we have seen that really improve. Things like being able to translate trade data and market data have been a huge improvement over the last year or so. A lot of the cutting-edge research with these kinds of models has been in the market data space: things like using generative AI to generate missing market data series or hypothetical volatility surfaces. That has been quite cutting-edge research, and there have been a number of interesting publications. But even then, up until a year ago, it was not really touching quant libraries directly. And that was partly because the frontier models available at that point just weren't able to do it. They really struggled with implementing quant models or thinking about mathematical formulations.
Over the past six months, though, we have seen a huge improvement in what the cutting-edge frontier agentic coding models can actually do. They really can start to handle development now, to the point where we are actively thinking about where we can use them to build extensions to our library, and we think there are huge possibilities over the next few months of improving how these things are done using these tools. For contrast, think back to when ChatGPT first came out. We do a lot of backtesting, and backtesting involves going back, looking at historical dates, and understanding why they were exceptional days that caused problems with your risk numbers.
And then what we found was you could plug in dates and say, "Give me a description of what happened on this date." We thought, "This could be kind of cool. You could automate this." And it gave back this amazing answer. And we thought, "Wow." We put in a different date. It gave us back exactly the same amazing answer. Both were completely spurious. It was just hallucinating and coming back with exactly the kind of thing we wanted to hear. But this was two and a half, three years ago.
I think now we're using similar tools, and we're finding that they add an awful lot of value. Even on that same task, they've transformed completely: much more accurate, and providing much more context with the answers so you can easily verify them as well.
So yeah, that change is dramatic, and how we integrate those tools into what we do, whether on the development side or the analysis side, is going to be a key differentiator in how people analyze these things going forward.
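The backtesting triage Joey describes, pinpointing the exceptional dates before asking a model to explain them, could be sketched roughly as follows. This is a minimal illustration; the function and variable names are hypothetical, not any real LSEG API.

```python
from datetime import date

def flag_backtest_exceptions(pnl_by_date, var_by_date):
    """Return the dates on which the realised loss breached the 1-day VaR.

    pnl_by_date / var_by_date: dict[date, float], with VaR quoted as a
    positive number. Names are illustrative only.
    """
    exceptions = []
    for d, pnl in sorted(pnl_by_date.items()):
        var = var_by_date.get(d)
        if var is not None and pnl < -var:  # loss exceeded the VaR estimate
            exceptions.append(d)
    return exceptions
```

An agent could then be pointed at just these dates and asked for a market narrative, with the exception list serving as verifiable grounding.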
Xabier:
And how much do you think it's not just obviously the LLMs getting better, but also there's a bit of work we need to do to make the right prompts, right? And I think that helps a lot.
Joey:
Of course, that is one of the secret sauces in these models. It is developing the right prompts and the right agents to handle things. And what we are seeing is that groups that are actively doing that really are moving ahead quite fast. If anyone has used Copilot or anything like that, they will have seen plan mode or agent mode.
Those are specific agents that are developed to do specific tasks, right?
So those things are very good at planning or very good at implementing.
In practice, to really get ahead, you want custom agents for your own library. So having a portfolio risk agent who knows the details of your portfolio, of your library, how you do things, and you can really move ahead with that kind of engineering on the prompt side to make a big advance.
Stuart:
Yeah. And I think it's really interesting to look at how open source is going to interact with this. Open source is going to be a really important differentiator. If you've got a closed, pre-built library and you put it to your AI agent, the agent faces similar limitations to what you can do with it in the real world: there's only so much it can understand, and so much it can rework and extend. And having all of the code base may not help much if you're a human developer who has just received a huge code base you have to try to understand.
But for an AI agent, this can be amazing because it gives you the detail that you can't get otherwise and the ability to extend, which is completely different. So I think as a client, you're going to want very different things from your risk engine in the next few years than what you maybe thought you wanted five years ago.
Joey:
Yeah. And on that point, I think any frequent listeners to this podcast will have heard us talk an awful lot about black box vendor libraries and the transparency that ORE, our own pricing library, offers, which is really unseen across the industry, right? It's open source by name, open source by nature. The code base is publicly available. If you're wondering how we do an interpolation of a commodity volatility surface, you can see that exact line of code in the code base.
With those black box models, you can't really do that, right? You raise a ticket, and they answer your questions. And as Stuart says, if you have that limitation yourself, your army of coding agents will have the exact same problem.
They won't really know what's happening, and they won't be able to investigate the code. But if the agent does have direct access to that code base, it can be sped up enormously to improve things. And if you're thinking about using agents to develop new things, a black box system can't support that development, right? So an open source library like ORE really does open up the possibilities for these agents to make improvements and modifications for your own specific use case.
Stuart:
And I think we've actually seen our first clients doing exactly that. We met with a client last week who had extended the engine themselves. Well, I say themselves; they had an agent do it for them. They said, "I want this function. I want this feature." They extended it, said it worked, and it delivered the results they expected. Probably not quite at the coding standard just yet, but it shows the way forward, and a really radically different way forward to the traditional model as well.
Xabier:
And that's what we would expect from, let's say, an investment bank or any financial firm that develops its own analytics: to have an agent that is capable of using the analytics for analysis, analyzing data, analyzing trades, but also capable of coding extensions to it.
Joey:
Yeah. Longer term, that's the goal, right? It's probably still not at the level where it can fully implement a quant model; there's still a bit of a gap there. But if you had asked me six months ago whether there was a chance of that, I would've said no way for a long time. Now I'm much, much more confident that it's coming sooner than we think. We have to get ahead of that and accept that it is happening, and the people best placed to do that, with custom prompts, custom agents, and open source code bases, will benefit the most from this new era of agentic coding.
Xabier:
And Stuart, how are we using AI as part of our services? Not just how we are using it internally to help develop new features and things like that, but also in terms of what we are offering that clients can use?
Stuart:
Yeah. So I think we spoke on a previous podcast about the things we've done already. So we've been live for about a year now with our chatbot, which does some simple, straightforward tasks, but really useful ones.
So for instance: I made this trade representation for your engine, it doesn't quite come out like I thought it would, can you tell me what might be wrong? And it's pretty good at saying, "I think you got this wrong. This isn't market standard. Try changing this." So at a basic level, we're already doing that, right? We're already putting those tools in the hands of clients to help them have a smoother journey when they're using the risk services.
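As a rough illustration of the kind of check Stuart describes the chatbot performing, here is a toy trade validator. The field names and conventions below are invented for the sketch; they are not ORE's actual trade schema.

```python
# Hypothetical sketch: sanity-check a trade representation and report
# likely problems in plain language, as the chatbot might.
def check_trade(trade):
    issues = []
    if trade.get("day_count") not in {"ACT/360", "ACT/365", "30/360"}:
        issues.append("unrecognised day count convention")
    if trade.get("notional", 0) <= 0:
        issues.append("notional should be positive; use pay/receive flags")
    if trade.get("pay_leg") == trade.get("receive_leg"):
        issues.append("pay and receive legs reference the same index")
    return issues
```

In practice a chatbot backed by an LLM would go further, comparing the representation against market-standard templates rather than a fixed rule list.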
I think when you look forward, really, we want to take that four or five steps further. How much more can we do in the daily blocking and tackling of risk management? Take that away from users, away from analysts who have to log into the system, and give them back: we've already analyzed it, we've already found what we think the problems are, here are some proposed solutions that we think are going to work. And give you back time so you can go and do the really interesting stuff you want to do around risk analysis, around deep dives, around understanding the numbers. That's stuff we're already working on heavily in R&D, looking back through the history of past cases to understand how we can solve them more simply in the future. And frankly, we think there's a really good chance that we can solve an awful lot of those problems.
I look back at the first time I sat down with a risk user. We had just implemented a market risk system, and I sat down with him to train him on the system we'd implemented and how he could use it. And he described his job, and his job was to come in, look at yesterday's numbers, look at today's numbers, and find all the really obvious errors that had flowed through from the various systems, then fix them all up so that he could get a clean set of risk reports out the next day. And that is the reality for a whole bunch of junior analysts who work in this industry. That is predominantly what their day job has been for a long time.
That's a job that I think you can see is not going to be there anymore. That's a set of things that could be pretty easily automated to go through, understand those things, and then if you're able to chat between different desks, push back on the front office without having to manually do it, understand what trades are real, "Oh, that one's spurious. This is a 10 times error. That one got closed out yesterday but didn't make it through the cutoff period." If you can resolve those things, then you're coming in in the morning and you're already clean, you're already doing the actual job of risk management, not just this kind of cleansing that could take up half the day beforehand.
Xabier:
Yeah. And that's really interesting. And just for our listeners, we are recording this in March 2026. We know this evolves pretty quickly, and we will be seeing a lot more in this space very soon. And Joey, I know there are maybe a couple of examples we could deep dive into, such as standardized regulatory calculations or even things like dynamic stress testing. How do you see AI fitting into those frameworks?
Joey:
Of course, there's a fair amount of low-hanging fruit, I think, that AI could probably pick right now in terms of model development. Thinking up new models is probably still a bit of a gap for it, but taking an already prescribed model, like a standardized calculation from a regulatory framework, that's the kind of thing an AI agent could now very quickly read the documentation for and start implementing, particularly when there's a set of tests, right, so it can make sure that it's doing things correctly, that it's mapping the equations correctly. Right now, we're waiting on a new regulatory model to come, but I'm very excited to see what happens with agents when it does, because I think there's huge scope for improving those implementations and really cutting down the time taken to make a start on those calculations internally.
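The test-first pattern Joey mentions, giving an agent a prescribed formula plus a set of tests it must satisfy, might look like this in miniature. The formula below is a deliberately simplified stand-in loosely shaped like a regulatory add-on; it is not the text of any actual regulation.

```python
import math

# Simplified, invented stand-in for one prescribed formula. A real
# standardized calculation has many more cases and parameters.
def addon(notional, supervisory_factor, maturity_years):
    mf = math.sqrt(min(maturity_years, 1.0))  # toy maturity factor
    return abs(notional) * supervisory_factor * mf

# The "set of tests" the agent must satisfy, written before implementation:
def test_addon():
    assert abs(addon(1_000_000, 0.005, 1.0) - 5_000.0) < 1e-6
    assert addon(1_000_000, 0.005, 1.0) == addon(-1_000_000, 0.005, 1.0)  # sign-independent
    assert abs(addon(1_000_000, 0.005, 0.25) - 2_500.0) < 1e-6  # sqrt(0.25) = 0.5
```

The tests pin down the intended mapping from rule text to code, so the agent's implementation can be checked mechanically rather than by eye.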
Stuart:
Yeah. And you think, "Oh, well, these things are easy. They're regulatory, they're standardized." But something like an SA-CCR calculation is surprisingly time-consuming to code, because it's a product-by-product implementation. There are hundreds and hundreds of products to work through. So again, an AI that can sit, interpret the rules, understand them, and apply the various FAQs that have been developed across the industry, that's a real game changer in terms of how fast you can bring that to market, and also in the transparency of that decision-making as well.
You say, "Oh, AI isn't very transparent." Well, to be honest, neither is five or six quants in a room discussing it, unless they're producing incredible documentation, which isn't always the case, right? At least the AI can be set to put out that information: here's why I reached this judgment, here's how I got there, here's what I'm going to do next. So I think, yeah, transformative for this. And again, it's just going to change the landscape of what risk engines look like, where the value comes from in different engines, and how quickly competitors can come in and do something different as well.
Joey:
And I think on Xabi's second point there about dynamic stress testing, that's an obvious thing these agents will eventually do for banks. When you think about how stress testing is done at the moment, we look in the rear-view mirror and say, "What would happen if the '08 crisis happened again? How would your portfolio look?" We look backwards, or we say, "What happens if the Bank of England cuts rates by 100 bps tomorrow?" Right?
We try to generate hypothetical scenarios, and then we stress our portfolio and it's done in practice everywhere, but it can be improved greatly. And I think one way that can be improved is dynamic stress testing long-term, and agents will be key for that. Right? So you could have an agent who, first thing in the morning, takes all the latest news and thinks about all the latest factors. What if there's an oil crisis? What if there's a new tariff introduced? And it can assign probabilities to them and generate those stress tests on the fly, which are much more realistic and relevant to your portfolio. And I think that is one of the first things that agents will do in terms of the quant risk space that really does change the industry.
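A minimal sketch of the scenario machinery Joey is describing, assuming first-order (linear) sensitivities only. The scenario names, shocks, and example book are all invented for illustration; an agent's job would be to propose the `scenarios` dictionary each morning from the news flow.

```python
# Hypothetical sketch: apply agent-proposed scenarios to first-order
# sensitivities. Neither the schema nor the numbers are from a real engine.
def stress_pnl(sensitivities, scenario):
    # sensitivities: {risk_factor: P&L per unit move}
    # scenario: {risk_factor: shock size in the same units}
    return sum(sensitivities.get(rf, 0.0) * shock
               for rf, shock in scenario.items())

scenarios = {
    "oil_supply_shock": {"brent_spot": -0.15, "usd_rates_10y": -0.25},
    "boe_cut_100bp":    {"gbp_rates_2y": -1.00, "gbp_fx": -0.03},
}

book = {"brent_spot": 2.0e6, "gbp_rates_2y": -5.0e5, "usd_rates_10y": 1.0e6}
report = {name: stress_pnl(book, shocks) for name, shocks in scenarios.items()}
```

Real portfolios would need full revaluation rather than linear sensitivities, but the agent-facing shape, named scenarios in, stressed P&L out, stays the same.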
Stuart:
And as we said at the start, these models were notoriously prone to occasionally coming up with spurious answers, and that's obviously got better. But in this case, for stress testing, that really doesn't matter. I want you to come back to me with five or six provocative cases and say, "These are the things that could happen. This is what would happen to your book. What are you going to do next?" When we talk to really effective risk management teams, this is a lot of what they do. It's not just looking at the numbers, fixing them up, and getting them published. It's going: okay, based on that, if that happened, what would we do? How would we exit from that scenario? How would we come out on the winning side of any sort of market event? Do we have a strategy for that?
And again, something external being able to push you and say, "Here are some really viable things that could happen," and translate those into scenarios you can implement, which is such a hard part of the job. Done well, that is a huge task that takes really, really bright people, because it's hard work. You need really detailed knowledge of the engine, the scenario, and how those factors interact. That's a complex piece of work, but actually something AI is probably going to be really good at: data-based, rule-based, detail-based. This could be really transformative. To be able to say, "I want to understand these five big macro scenarios tomorrow. Go write me a stress test, run it for me, bring me a report back." That's game-changing, right, in terms of the capability you can have.
Xabier:
And that fits really nicely into the second part of what I wanted to discuss, which is to do a little bit of forward thinking. You mentioned some of the ways we are embracing AI as part of our services. We constantly release new services, new features. How do you see some of that evolving? And also, if we're a bit bold and think two or three years out, how do you see a risk system interacting with AI?
Stuart:
I think there are so many things changing that it's an incredibly hard question to answer, but there are a few key things you can take away. One is that what a risk team wants from their risk engine is going to change. Five years ago, maybe they wanted an amazing user interface, an easy-to-use slice-and-dice tool, a way they could analyze their data and run things. That's maybe going to move away, because you're probably going to have fewer people running this, empowered with more agents.
So then maybe what becomes more important is the APIs that sit behind and the APIs that are accessible to various large language models and agents that are out there that can then do things that are more interesting for you. So, for instance, an API to create a stress test, an API to run a set of calculations, and these kind of APIs that can then be used by those large models to run different analytics for you. You can imagine someone sat there simply saying, "Okay, here's my VaR for today. Okay, what happens if this moves to this and this moves to this? Can you rerun those calculations? Can you produce me a report that looks like this?
And can you give me some sample hedges that I might want to put on against those scenarios?" That's potentially a full day or a full week of work today for somebody, and that's something that really you think that's going to be something done by a model, and that's going to need a different set of APIs, a different set of capabilities to make sure that it can run them.
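One way the agent-facing APIs Stuart envisions might be surfaced is as a tool registry an LLM can call. Everything here, endpoint names and parameters included, is an assumption for the sketch, not a real LSEG interface.

```python
# Illustrative only: risk-engine endpoints described as "tools" in the
# style LLM function-calling uses. All names are invented.
RISK_TOOLS = [
    {
        "name": "run_var",
        "description": "Run a VaR calculation for a portfolio as of a date.",
        "parameters": {"portfolio_id": "string", "as_of": "date (YYYY-MM-DD)"},
    },
    {
        "name": "create_stress_test",
        "description": "Create a named scenario from risk-factor shocks.",
        "parameters": {"name": "string", "shocks": "map of risk factor -> shock"},
    },
    {
        "name": "generate_report",
        "description": "Render a set of results into a report template.",
        "parameters": {"template": "string", "result_ids": "list of strings"},
    },
]

def dispatch(tool_name, handlers, **kwargs):
    """Route a model's tool call to the matching engine function."""
    return handlers[tool_name](**kwargs)
```

The point of the sketch is the shape of the contract: the engine exposes small, well-described operations, and the model sequences them into the rerun-and-report workflow Stuart describes.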
And at the same time, you've got cloud compute slashing the price of standard compute every week, right? Standard compute has got incredibly cheap. So the old paradigm, where you had to be quite efficient about the way you ran everything and never run things twice because it's too expensive, a lot of that isn't quite the same anymore. You can just think of blasting these things out into the cloud, having big compute bring the answers back for you asynchronously, managed by the model again. It's quite a different look and feel to how that risk system could work.
Xabier:
And Joey, on your end, how do you see the quant role evolving in these next two, three years? What are the things that you expect institutions will expect from a quant versus what they are doing today?
Joey:
It will change massively, to be honest. I think you're feeling that even right now. Personally, two or three years ago, 50% of my time was spent writing code of some sort. That's down to less than 10% at most now. And that is because the job is now much more about making plans, providing prompts to agents, and reviewing their code, and that has really changed the role, right?
We do not write as much code as we used to, and that will continue to be the trend, I think, over the coming years. What I think the quant role will really change into, particularly in the development space, is that rather than being a developer themselves, the quant becomes closer to a project manager, with 10 or 12 agents working on developments. They make plans, they review the work those agents have done, and they make sure the testing looks correct. Being able to manage those different coding agents efficiently and effectively will be really important.
And one thing that's quite nice is that, as we've said, the more mundane day-to-day tasks, doing VLOOKUPs or LEFT JOINs for a model validation report, are disappearing, and that will save a lot of time, as Stuart mentioned. Once quants have that free time, that additional capacity, it gives them more opportunity to investigate deeper questions, right?
So new models, new frameworks, and bigger questions rather than the more mundane exercises. So I think there's potential here for a huge amount of scope in terms of new models, new frameworks, and interesting research in the quant space.
Stuart:
Yeah. Maybe there are two problems that have been there since I started working in this field that are, for me, still largely unanswered. One is reverse stress testing: how do I find the scenario that would be catastrophic for my book?
And again, we've sort of thrown around different ideas and different models for this, but no one, to the best of my knowledge, has ever come up with a really viable solution. Again, this feels like an area that could be transformed, and we know reverse stress testing would be such a big driver for the industry. So this is one. The other one is wrong way risk.
Wrong way risk is a really complex topic, and really present in the current way we look at things. Again, how could a model go out and help you find the scenarios where you should look closer, because this position is susceptible to wrong way risk in this particular way? This is exactly the kind of large-data, large-context problem that could be really transformed as well. So yeah, really interesting.
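The reverse stress testing problem Stuart raises can at least be caricatured as a search: scan combinations of shocks for the one that hurts the most. A real solution would need nonlinear revaluation and plausibility constraints; this linear grid search, with invented names, just shows the shape of the problem.

```python
import itertools

# Toy reverse stress test: search a coarse grid of joint shocks for the
# combination producing the worst linear P&L. All names are illustrative.
def worst_scenario(sensitivities, shock_grid):
    best, best_pnl = None, 0.0
    factors = list(shock_grid)
    for combo in itertools.product(*(shock_grid[f] for f in factors)):
        scenario = dict(zip(factors, combo))
        pnl = sum(sensitivities.get(f, 0.0) * s for f, s in scenario.items())
        if pnl < best_pnl:
            best, best_pnl = scenario, pnl
    return best, best_pnl
```

The grid explodes combinatorially with the number of risk factors, which is exactly why the problem has stayed hard and why a model that can propose a small set of plausible joint moves is attractive.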
Joey:
And I think the thing these models are historically very good at is finding patterns, right? They're very good at looking through streams of data, doing these kinds of estimations, and finding new patterns that we've missed to date.
If you think about the industry, the Black-Scholes model looked at return data and fitted a mathematical framework to it. But that's because we saw the distribution of returns at that point, and we looked at that pattern and how we could enhance our models based on it. There could be a lot more hidden distributions and hidden patterns that we have not seen yet. And these models could find them very quickly and totally change how some of these foundational models are constructed, and the things we've missed to date.
Stuart:
So maybe to come back to how I think this will affect a quant: take our specific example, where we're not just quants on any engine, we're quants on an open source engine. I think we could find our role quite radically transformed. We've already seen the first clients developing on it with agents.
They're probably going to want that code to come back into the base system. It's quite possible that what is today a really nice group of people gradually evolving that code base explodes out into a very large group of people who are able to read, understand, and develop it, which is amazing. That's what you want open source projects to be. But managing that is going to become quite a challenge: how do you keep the coherence of the library while taking advantage of all of the code that's contributed?
Joey:
Yes, and on Stuart's point, every time there's a new invention in the coding space, there's always a core library at the center of it. When you think of neural networks, TensorFlow was a huge part of that.
And as we move into the agentic coding era of quant finance, there will have to be a central open source library as the foundation of it. I do not see an alternative to ORE across the industry right now, so I think we are extremely well positioned to be the foundational quant library as agentic coding really takes on a bigger role.
Xabier:
Thanks, and that sounds really exciting. Definitely something that we look forward to.
Thank you, Joey, and thank you, Stuart. Thanks everyone for listening to us today. You can find us on Spotify, YouTube, and also on our website, lseg.com. Thanks again for joining us, and we'll see you soon.