Artificial Intelligence and Tokenisation in Post Trade Solutions: Current Applications and Future Plans

Episode 34 February 03, 2026 00:45:41
Ahead of the Curve

Show Notes

This episode of Ahead of the Curve offers a forward-looking perspective on how AI and tokenisation will redefine collateral management. Learn how Post Trade Solutions' AI initiatives, from natural language search to predictive insights, are helping clients reduce manual tasks and focus on strategic risk management. Explore the intersection of AI and distributed ledger technology, and how tokenisation could enable real-time settlement, optimise collateral usage, and mitigate systemic risk. With regulatory considerations and client needs at the forefront, Post Trade Solutions is charting a path toward a more agile, intelligent, and secure post-trade ecosystem.

Episode Transcript

Hello everyone and welcome to another episode of LSEG Post Trade Solutions' Ahead of the Curve. I have two really good, awesome speakers with me today, because we're going to be speaking about two really awesome topics. No surprise, AI, which is what everyone's been talking about, all over the place, all the time, every day. We're going to be diving into that, really specifically on how AI is deployed in LSEG's Post Trade Solutions business, specifically at Acadia. I'll caveat that, across London Stock Exchange Group, AI is everywhere and prevalent, and there's a lot going on in that space. But for this particular conversation, we're going to be talking about how we are deploying it at Acadia, how we're using it, and really looking forward into the future. And the other topic that is kind of on everyone's minds in FinTech and in finance is tokenisation. It's not a new topic. It's something that we've been hearing about for quite a long time, and it goes along with distributed ledger technology, but recently we've been hearing a lot from regulators, and from the industry in general, around the use of tokenised assets for collateral, tokenised money market funds, and how that's going to work and create efficiencies throughout the industry. So, we want to unpack that. We want to take a look at how that's being deployed as well. So, we have Will Thomey with me, who is Co-head of Business Development, and Jake Ullman, who is kind of our AI expert at Acadia. He's been working on a lot of things for quite a while now. We'll talk about the AI chatbot and so on and so forth. So, hopefully this is a really good conversation, and we'll explore a little bit of how the two might tie together. Jake, I'm going to hand it off to you first. Before we jump into how the future's going to look, can you just tell us a little bit about how Acadia has deployed AI and what you've worked on?
And then we can talk a little bit about where it's going. Yeah. Thank you, John. So, we've chosen to tackle a few problems over the past few months. We've been working on it for about a year, year and a half, and the first is document searching, a very classic 2024/2025 problem now: companies have a lot of files, they have a lot of documentation that clients want to see, and it's not always the easiest to search. So, we decided to tackle that one first, and we've seen clients just love it. It's just a way easier way to interact with our system. What we did is, for our clients, instead of having to go through an old documentation site, they can instead interact with it in natural language. There are a few little technical bits that we do, from document ingestion to making sure that there's always a citation, but ultimately we've found that that's kind of the best, easiest use case for clients to ease into this process. And we've seen a lot of really great results with it. So, when I think of AI, obviously, I think more of how things are autonomous. What you're talking about really is something that a person was doing, answering clients' issues or problems, and you've trained this specific agent to just answer those questions that people were answering. Or is it more of a documentation search, or a combination of all those things? Because really it all comes down to creating efficiencies and getting whatever friction is there, in time consumption, out of the process? Yeah, it's a little bit of both. We probably first started with just the simple searching of documentation. Someone wants an answer: what does this field mean? Or this number, how is it calculated? And that's on page 405 of a thousand-page PDF. So that was just an easier way to find those answers.
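The citation-backed documentation search Jake describes, ingest the docs, then answer a question with a pointer back to the source, can be sketched very roughly. This is a toy keyword-overlap retriever, not Acadia's actual pipeline; the document names and page contents are purely illustrative:

```python
# Minimal sketch of citation-backed documentation search. A real system would
# use embeddings and a language model; keyword overlap stands in for retrieval
# here. All titles and text are made-up examples.

def ingest(docs):
    """Split each title -> list-of-pages document into chunks with citations."""
    chunks = []
    for title, pages in docs.items():
        for page_no, text in enumerate(pages, start=1):
            chunks.append({"cite": f"{title}, p.{page_no}", "text": text})
    return chunks

def search(chunks, query, top_k=1):
    """Rank chunks by shared words with the query, keeping the citation."""
    words = set(query.lower().split())
    scored = [(len(words & set(c["text"].lower().split())), c) for c in chunks]
    scored = [(s, c) for s, c in scored if s > 0]
    scored.sort(key=lambda sc: sc[0], reverse=True)
    return [c for _, c in scored[:top_k]]

docs = {
    "Margin Guide": [
        "The exposure field is the mark-to-market value of the portfolio.",
        "Initial margin is calculated from risk sensitivities.",
    ],
}
chunks = ingest(docs)
hit = search(chunks, "how is initial margin calculated")[0]
print(hit["cite"])   # Margin Guide, p.2
```

The point of the citation is the "always a citation" guarantee mentioned above: every answer can be traced back to a page, so the model is never the sole source of truth.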
And then on the first thing you mentioned, around just answering common questions, we found that, as a result of having a language model that was trained on our documentation, it all of a sudden understood our products really well. So, it could answer these common questions that clients were coming to us with, and even take it a step further where there were new, unique questions that it had never seen. I know everybody's familiar with ChatGPT and similar things; it can start to understand what you're asking about and what you might ask about, and that's kind of what we saw. In our land of collateral, risk, trade formatting and exposure calculation, it didn't need to actually know anything about what the client is doing, their actual data, because it just knew our application so well. If someone happened to ask, oh, how is this calculated? It could then actually apply that to exactly what they were talking about, without ever needing access to any of their data, which is really important for us. We don't want to ever, and I'll say this right to the camera, we never want to train on our client information, but there's a heck of a lot you can do once the AI, the language model, knows how your products work. Okay. So maybe an important part to stress there is that it's our documentation. I mean, we've built a lot of products over quite a long period of time, and so there's a lot of documentation we've generated to support those products, both internally and externally. So, this is using our own documentation that we've created for our products and training a model so that people can interact with that. And I remember when you first started it. And honestly, you talked about ChatGPT; obviously everybody knows what that is. I think it just passed its three-year anniversary. The first time I heard about ChatGPT was from my daughter. She was a freshman in college. She had a friend who's in coding.
She was like, oh yeah, she's using this really fun, cool thing to create code. And I was really sceptical. Now all of a sudden, everyone's using it, everybody knows about it, and it's become an agent for a lot of people. I think I mentioned to you before this podcast that I'm using it as a personal assistant already. And we're just scratching the surface, which I'm assuming is also true of the way we're deploying it at Acadia and Post Trade Solutions; we're just scratching the surface, I think. So, what are you excited about? What are the big problems, or other problems, that you see AI helping to solve in our world? Again, I'm caveating this because we are talking specifically about our business, and when I say our business, I mean specifically the Acadia business. I do think, and we're recording this at the end of 2025, in December, that going into 2026 we're probably going to have a much larger and broader conversation about how AI is deployed across all of Post Trade Solutions. But what are some of the newer things or medium-term things that you see coming, besides this great stuff that we've done around documentation? So, we've had, at this point in time, about a thousand interactions with it. We've been rolling it out in tiers since summer. So, large global banks used it first, and then some hedge funds, multinationals and some pension funds have now accessed it. So, we have a good bit of data on what people are using it for. About 50 unique firms globally. And we're excited, because so far it has really just solved the documentation piece, but in the near term we are working on some pretty exciting things I won't talk about. In the near term, it's a lot of data representation. We take in a lot of information from our clients, for trade data, for pricing, sensitivity calculation and a few other things downstream. It's not always easy to represent that data in the form it needs to be in.
And so, I was just describing having our language model understand the product and then being able to extend itself a little bit outside of its realm, because it doesn't actually need to know about the client's information. We've seen it be really useful for helping clients answer questions: instead of coming to us, they can actually figure out how to map fields and how to solve data representation issues much quicker than we could with help from our client support team. It's probably a boring thing, but it's a really exciting thing, because clients can integrate into our system a lot easier and solve downstream issues, exposure and collateral issues, as a result of the key input issues that our system points out. I think you could argue that, at its foundation, we take in data and send out data. We need to do that nimbly, because unfortunately there's not a standardisation of data in every single realm. And certainly, this helps take unstructured content and structure it into our data model. Equally, on the output side of things, it doesn't take a rocket scientist to see that there are better ways we could use language models to provide insights and summarise things in a more linguistic-style representation of that content, and less of giving raw data to people and having them form their own opinions or find patterns and things of that nature. I mean, what strikes me in this whole space, right? Again, I'm not an expert. I'm an expert as far as I can Google things and use ChatGPT. That's being an expert. Well, that's arguable. But I guess what strikes me, and we'll have a little bit of fun with this, is the different words. It started out with just AI, then artificial general intelligence, AGI, and then you start hearing of LLMs, language models. The new word is agentic, as far as I'm concerned. That's new. I just started hearing that. So, what does that mean?
I mean, from what I can glean, agentic AI is kind of like what I mentioned before, your agent, right? It has memory, and it can really dig in and be intuitive. Is that something that we're looking at down the road? I think of what we do as a business: some of our core businesses, Margin Manager, IMEM, resolving disputes. Without going into the special sauce again, everyone kind of does similar things in FinTech, and I've been seeing a lot of our competitors, and especially people that we partner with, going into the space. I'm assuming we are. But what are some of those things that might be exciting that we can talk about here? Yeah, you'd be correct. I would say we're beta testing a lot of that stuff internally. To answer your first question, agentic would basically imply it's doing multiple steps. It's not just a single shot where you ask ChatGPT how to answer this thing or how to make a good pizza. It's then going to order the ingredients and do multiple things, showing you the videos of how to do it. So that multi-step process is something we're exploring with respect to things like exposure and reconciliation, definitely in those key applications you mentioned. Because we want to empower users that are usually given many different systems, many different things they need to access. Can we boil that down so that they don't need to look at a hundred different screens? Can we give them all that information in just one, and then have a bit of a feedback loop? So, another term, not to throw another one at you, but reinforcement learning is used in training language models. It's the process of reinforcing what the language model knows. And so, in that process, if you can boil it down to just the summarisation of issues on a given day, that's an easy thing. You dump in a bunch of data in a certain way, and it's now telling the client, hey, here's everything that's going on.
Can you then get a little bit of a feedback loop for it to then start to take actions for that person? Can it then go and actually fix the thing for you? And that's what we're really exploring. Can it do repetitive tasks for you, start to learn, and then be proactive? Is this the same as being autonomous? To some extent, yeah. A little bit. We're not there yet. It's not set it and forget it. Yeah. I want to fast forward to that part at some point. No, look, I understand, that's really exciting. I mean, on the Margin Manager side of things, any thoughts on where we are heading? Is there a strategy? Is there something that we're excited about? Or is it more of a multi-step process here? I think it's kind of the data in, data out piece I mentioned before. If you write the history of what Margin Manager is, it more or less is a de facto messaging standard for people to communicate amongst different collateral systems, right? What that does require, however, is people commonly writing to our API structure effectively. Now, we also take in data in a variety of different forms, right? It's not always just our API; it's FIX messages, it's file based, SFTP of files to us that we transform into our data model. So, if you take all of that, and there are other avenues of getting data into the messaging network, I think we want to be less structured over time and be able to support that. Or even in the scenario where we're writing transformations of the client's data model and representation, whatever it is, versus how it needs to look within our service, we want that to be a much lighter-touch process with less human involvement. In theory, you could make that more self-service for clients. And I think equally on the output of data, when we mentioned disputes and stuff, there's only so much content we might have in certain cases, depending upon the workflow that we're referring to.
But take the interest statements workflow. The interest statements workflow is, or should be, institutions submitting a daily interest accrual over the month, based off a balance and an interest rate, and so on and so forth. The value of the workflow, of course, is that it gets people to immediately agree or not agree, and then shows where you might be off. You have a different reference rate on a particular day. You have a different balance on a particular day. It's not rocket science. Right. But that requires someone to go look at it, and research what rate I'm using, what rate they're using. That's right. Do the math. And we can describe that back. I mean, instead of having a human have to figure that out, there's no reason why we can't describe it back. Now, in talking to some clients about these types of ideas, the interesting point raised is that the Margin Manager network is very much the glue in the middle. Most institutions are using their own collateral management system, whatever that is, and it could be our collateral management system too, but they're using a collateral management system to connect into that gateway. And that's what they live in. So, if we produce outputs of data, then you still get back into the question of, well, what's the structure of that data, so that it can be read into something else? And so I think that becomes a little bit challenging, presumably, but there's no reason why we can't use it to help pattern data, explain data, produce some rationale, some insights. But I don't know if anyone wants to log in to something else to see that. Then it's how do we get that injected into something and working in the background? I think the one thing that's exciting to me is errors that are repetitive, and known things that people, humans, are doing over and over again.
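The interest statements reconciliation described above, each side accrues interest daily from a balance and a rate, and a break on a single day explains the month-end difference, amounts to simple arithmetic. A minimal sketch, with illustrative figures and an assumed ACT/360 day count:

```python
# Toy reconciliation of daily interest accruals between two parties.
# accrual per day = balance * annual_rate / 360 (day count is an assumption).
# Comparing day by day pinpoints where a different rate or balance caused
# the statement break, instead of a human doing the math.

def accruals(days):
    """days: list of (balance, annual_rate) per day; returns daily accruals."""
    return [bal * rate / 360 for bal, rate in days]

ours   = [(1_000_000, 0.0530)] * 3                       # our rate every day
theirs = [(1_000_000, 0.0530),
          (1_000_000, 0.0525),                            # they used a different rate on day 2
          (1_000_000, 0.0530)]

a, b = accruals(ours), accruals(theirs)
breaks = [i + 1 for i, (x, y) in enumerate(zip(a, b)) if abs(x - y) > 1e-9]
print(round(sum(a) - sum(b), 2), "difference; breaks on days:", breaks)
# 1.39 difference; breaks on days: [2]
```

Describing that back to both parties ("day 2: your rate 5.25% vs our 5.30%") is exactly the kind of explanation a language model could wrap around the raw numbers.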
And I know there's a lot of it, especially on the other side; our clients are going into root cause analysis, trying to figure out why their data is this way and trying to fix it. If this AI technology, or any type of AI technology, can speed that up at some point, which is where it seems we're headed, where it learns, it's intuitive, it can figure things out and just say, okay, this problem will not exist anymore. And I think that's the right way to think of it. The margin call itself. We've been talking about Snap forever. Right. That's a huge problem. Still a problem. Yeah. Right. But the margin call itself is such an aggregated number, right? And you think, even for one institution to do their own calculation of a margin requirement, oftentimes they could have multiple trading or risk systems. All of those risk calculations have to be performed overnight. All of that is getting fed into a collateral management system, along with positions and instrument reference data and pricing on securities, and understanding parts of the legal agreements. So, all of that is a choke point, and I think there's no reason why AI can't help pattern the repetitive nature of it: you continually fail, or have a dispute or whatever, as a result of something. I think the difficulty really ends up being that the starting point is kind of the middle ground, the messaging part of it, but it's how do you then understand additional information that sits in other hubs, and supplement that, and provide context and things of that nature. Jake, I think I asked you before, but I want to pinpoint: what are you excited about going forward? What are some of the things that you see that you're going to be excited about next year and the year after, using this technology? Is there anything emerging that you see that's new? Are there any new adjectives to explain different kinds of AI that I haven't heard of yet?
Is there anything that comes to mind? Yeah, I think there are probably two things. One is the squashing of repetitive tasks, at least in the initial margin space first. We are the central point for a lot of really great information, and there's no reason we shouldn't be able to pinpoint, at the lowest level, why the margin call is high; we have all the low-level detail to know what the likely cause is. And I think the other thing is around unifying, and sidestepping issues of the past, of legacy technology, right? I mean, you were just talking about a margin call amount, or even Margin Manager encompassing various different event types, substitutions, interest. And instead of having to, from a developer point of view, build all these different screens and unify them. Yes, they might go to a collateral system; no, they might not come into our UI. AI is almost a way to bring all that together in a simpler way than having to think about all those things as being separate and unifying them. I'll give you another acronym. MCP is a protocol that people are using now to attach a language model to people's APIs. You can think about how LSEG has publicly done this with a few companies for their various D&A related businesses. But if we can think about that in the markets, in the post trade space, creating a unified layer above all of our products, we can start to bring all this together a lot quicker than having to build separate things or luring someone away from their collateral system. It can just be an appendage that naturally could be anywhere. Wow. We should probably mention that all of these things are fairly complicated engineering efforts that a lot of people are putting a lot of time and effort into. And I think that we do not take lightly, as a company, being a trusted partner to the largest financial institutions on the planet.
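The "unified layer above all of our products" idea can be illustrated with a toy tool registry: each product API is exposed as a named, described tool that a language model (or a router in front of one) can discover and call. This mimics the shape of MCP-style tool exposure but is not the MCP SDK, and the tool names and stub data are hypothetical:

```python
# Illustrative sketch of exposing product APIs as named tools with
# descriptions, the shape a model-facing layer (e.g. MCP-style) takes.
# Tool names and returned data are made-up stubs, not real product APIs.

TOOLS = {}

def tool(name, description):
    """Decorator that registers a function as a callable, described tool."""
    def register(fn):
        TOOLS[name] = {"description": description, "fn": fn}
        return fn
    return register

@tool("margin_calls", "List today's open margin calls for a firm")
def margin_calls(firm):
    return [{"firm": firm, "amount": 2_500_000, "status": "disputed"}]  # stub data

@tool("explain_field", "Explain a field from product documentation")
def explain_field(field):
    return f"'{field}' is defined in the product documentation."  # stub answer

# A model would pick a tool from its description, then the layer calls it:
result = TOOLS["margin_calls"]["fn"]("Firm A")
print(result[0]["status"])  # disputed
```

The design point is that each product only has to describe itself once; the unified layer, not a new screen per workflow, is what the model talks to.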
And I think we do this very thoughtfully. We do it with a lot of smart people thinking through how this should be engineered, how it should work. Yes, we're probably more towards the start of the journey than the end of the journey, but I think we will do this in a way that is beneficial to us, beneficial to our clients, and done in a very trusted way. Yeah. I mean, I think it's exciting times for sure. And you mentioned, across the enterprise of LSEG, how it's all going to bring a lot of this stuff together. And I think, as we talk about our journey in Post Trade Solutions, 2026 is very much one Post Trade Solutions. If AI is going to help harness that power and put it all together and help us do that, I think that's going to be a huge win, outside of all the specific fun things that we're looking at too. So, yeah, that's great. You've got to keep smart kids like Jake entertained here in terms of figuring out new things. Yeah, just put me in a little box, give me some toys to play with. See how far we can take it. Sounds great. It sounds great to me. All right. So, I mean, we can probably spend more time on AI, and we can come back to a few things, but the other topic that I mentioned before is tokenisation. It's another of what I call these pillars, the hot topics these days, topical things that are really affecting the industry, in a good way, I think. But obviously there are caveats to everything. How something like tokenisation is going to change our industry, how it's going to change our business, the way we do business, how our clients are going to change. I know it sounds like one thing, right? But it's not one thing; it could be quite a few things. I guess first, before we dive into the specifics, can you just give us a generalisation of what we mean when we say tokenisation? Yeah. The misnomer, of course, is that people jump to crypto, right?
And so distributed ledger technology, DLT, is the underpinning technology that supports the crypto markets. In the collateral management space, people need to remember that people typically only take very liquid forms of collateral, from very deep markets, that are highly traded, that are transparent, so on and so forth. What does that mean typically? It means there's an awful lot of cash in the major currencies transferred for collateral purposes, as well as securities. And those securities are usually government securities. Of course, they do branch out into mortgage-backed securities and corporates and equities, so on and so forth. But if you were to put them in order, you would have government securities, US treasuries being a big one. So, what tokenisation is really referring to in that context: nothing that is typically transferred for collateral management purposes today is natively a digital asset. It's not natively issued on a distributed ledger technology or platform. So, tokenisation by nature is referring to saying, if we took a real-world example, I take a US treasury and segregate it, so it's somehow protected: it's not kept in my account where I control it, but put into a segregated account for a service, and then a token is created. At that moment in time, you have a digital twin, if you will, of what that asset is. Now, once we've done that, the ledger can effectively maintain who owns that asset. Initially, I would own that asset, because I've tokenised that asset and that token's been created and issued to me. If I wanted to transfer something to Jake, it could be a margin call that he issues to me and I agree to, and if we mutually agree to transfer that token, we can then pipe that through.
And this is where the extension of Margin Manager comes in, where we see our role could very well be: today, people mutually agree to transfer collateral on our platform. Well, the same thing would happen. There are probably slight changes in some of the data model to describe that it's a tokenised asset of some variety, but once we do that, I think there's some number of these ledgers that will exist. We don't know how many, of course, but there's certainly going to be more than one. And we could go from mutual agreement, which we call a pledge accept in our workflow today, to transferring that as an instruction to change ownership. And that can instantaneously happen as a result of doing that. You read that back to the institutions, and so you get from margin call, to mutual agreement to transfer something, to settlement immediately thereafter. That's the promise of it. The difficulty of it is a little bit around some of the legal context. Collateral is all about: how do I ensure that there's a legal framework in whatever jurisdiction we're referring to, and there are opinions and so on and so forth, that lead me to believe that if there was a default, I can get ownership, or a perfected interest, in that collateral, as I would need to, right? Immediately. Yeah, yeah, sure. As part of a legal proceeding. Right. There have to be people that are all in agreement. We won't get into smart contracts yet. No, no. I think smart contracts are a slightly different story. But in this realm, whatever we're doing, we're all legally entrusting a ledger to say who owns an asset at any moment in time. So, if a default were to happen, everyone needs to feel comfortable that that moment in time is struck, that the asset shouldn't be able to be transferred any further. Right. And ultimately, I should have a claim on that asset.
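The ownership ledger described above, a token as a digital twin of a segregated asset, with transfer happening instantly but only on mutual agreement (the "pledge accept"), can be sketched as a tiny data structure. Names and token identifiers are illustrative, not a real platform design:

```python
# Minimal sketch of a tokenised-collateral ownership ledger: issue a token
# against a segregated asset, then change ownership only when both sides
# have agreed (the pledge accept). Purely illustrative.

class Ledger:
    def __init__(self):
        self.owner = {}  # token_id -> current owner

    def issue(self, token_id, owner):
        """Tokenise a segregated asset: create the token, record its first owner."""
        assert token_id not in self.owner, "token already exists"
        self.owner[token_id] = owner

    def transfer(self, token_id, sender, receiver, pledge_accepted):
        """Change ownership instantly, but only if the owner sends and both agreed."""
        assert self.owner[token_id] == sender, "only the owner can transfer"
        assert pledge_accepted, "requires mutual agreement first"
        self.owner[token_id] = receiver

ledger = Ledger()
ledger.issue("UST-10Y-001", "Will")             # Will tokenises a US treasury
ledger.transfer("UST-10Y-001", "Will", "Jake", pledge_accepted=True)
print(ledger.owner["UST-10Y-001"])              # Jake
```

The `pledge_accepted` flag is where today's workflow plugs in: the mutual agreement already captured on the platform becomes the precondition for the instantaneous ownership change.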
And there should be enough comfort in the legal framework, or jurisdiction, that everyone's operating under to be able to say, okay, yes, you own that asset and you'll be able to get that asset. Because the token itself isn't the asset; it's the digital twin or equivalent of the asset. I actually need to get the underlying asset that sits in a segregated account at the end of the day. So that's tokenisation. Now, some people will go even further and say, to get confusing here, that they don't want to think of it as a token. They want to think of it as a digital way to transfer legal ownership of something. Which is, I don't know, maybe splitting hairs a little bit, right? But I think tokens are an easier way to think of what that is. And you have stablecoins, of course, and other things, and there's an implicit trust that needs to be built in the market as well, in terms of whatever is sitting there behind the thing that isn't natively digital. So, if we're all agreeing to transfer, on a ledger, some digital message effectively, I'm trusting that whatever the service is, it's keeping a one-for-one. If that's the expectation, if there's $1 for every dollar equivalent of token or stablecoin or whatever it is, those assets need to be there. Well, Bitcoin, yeah. Well, and then crypto, of course, is the angle of natively digital assets. We won't go there. It's interesting, as you were talking, I started thinking about the intersection between AI, since we're talking about AI, and tokenisation. In a perfect tokenised digital asset world, everything is on ledger, ownership is immutable, you know exactly who owns it. But then, in an event of default, where the rubber meets the road, pledging collateral is obviously important for risk management. And we all know, anyone who does this for a living knows, why that's important.
But also, on the flip side of it, when there is a default, there's going to be a mechanism somewhere where something or someone has to make that determination, right? It's in a legal document; these are provisions of what a default looks like. And then I would have to get that collateral back, or transferred back to me. We lived through '08, right? I remember being on the phone, I'm not going to mention the firm's name, trying to get collateral back. And this was in the days before even a Margin Manager was around, and nothing was even electronic, let alone on a distributed ledger technology platform. In a perfect world, I would think, and again, maybe I'm throwing AI into this as well, that something, or someone, or some system is going to actually make that determination instantaneously and move it over. Now maybe I'm getting into science fiction, but it could be a rules-based approach plus a little bit, right? Maybe it's a rules-based approach plus, well, if you go down the smart contract route, which I do think is a much more complicated scenario. Because that to me is the real value. It's: I don't have to wait, and I know for sure, you said the word trust here is important. Yeah. I know exactly where my collateral is at every moment of the journey, whether it's substitutions, maybe that's a separate ledger. But when the rubber meets the road, when it really comes to an issue where I need to be protected and I want my collateral back, because you're in default, no offense. I'm in default. Maybe I'm giving it to you. I know it's definitely coming back to me, and it's coming back to me quick. Of course, otherwise we stick with the old version. There's the what if. I mean, right now, as we sit here today, it still would be common practice for people to have to issue a notice of default.
And that's oftentimes a physical, couriered letter to someone. There are people that are actually trying to digitise that type of framework. So, you have to take the baby steps, or you have to get everything into a more digital framework. I think in terms of what is an actual default, it's an interesting question. Today, somewhere in the magnitude of 3 to 4% of settlements that are for collateral purposes, for the collateral transfer, are not made. And by no means are all of those deemed defaults, because what's happening is some form of operational failure. Missed window, settlement system failure. Technically you're in default, but you're not really at fault. That's right. So, there is certainly a need for that to be a much tighter understanding. If you really go back to '08, plenty of firms failed, but plenty of firms ended up surviving, whether it be by government support or not. And in those scenarios, the firms that were least impacted by the major defaults were typically the ones that were quickest to pull the trigger and declare a default on the institution that defaulted. So, there's a speed to that which is commercially beneficial for anyone in that situation. So, you think: do we want that to be the practice? I mean, that's a very odd incentive structure. These are things that have to be considered, right? Sure, sure. As this is rolling out, I mean, there are a lot of things; we're not going to talk about every single thing, and I don't want to harp on the default scenario, but I feel not enough people are looking at it. I think everyone's looking at the liquidity improvements, the speed, the atomic speed, at which you can move collateral. But I think that's necessary, because again, it's: where are the problems today? You don't actually have thousands of defaults on a regular basis, but you do have thousands of settlement failures on a regular basis. Right? True.
And so those settlement failures have real-world consequences. Maybe I'm just a negative person. Well, I mean, it's the very extreme. I have war wounds like you do, so I always think about the worst-case scenario. But I do think the role of tokenisation, obviously, is to try to speed up the system, to allow people to be more optimal. Oftentimes I'm really trying to optimise the collateral that I have, and I'm bound by the constraint of operational capability: how quickly can I do what this optimisation engine is telling me to do? And you're going to be constrained by the way settlement works today. So, tokenisation offers a future which allows people to hopefully reduce settlement fails, reduce the cost of those fails, improve liquidity, and improve how much you can optimise the book of collateral that you're managing at a particular point in time. All of that has value. The downside of it is, at least at the onset, there's not an obvious takeout of some existing system. Custodians still exist, so your legacy infrastructure will stay, in theory, and you're layering on a new aspect of how you might go from legacy infrastructure and morph that into a more digital infrastructure. And I think what you're also seeing, of course, is certain governments very much pursuing natively issued debt, government bonds that are themselves digital assets. That's a slightly different paradigm shift, right? Because at that moment, you now need to build, well, we commonly hear that a central securities depository, a CSD, is the thing that exists for a particular market. Now you need to build a digital securities depository in some way, a DSD, right? So you need something that's the equivalent of that, but operating in a more digital and DLT manner. That, and the fact that governments are pursuing it, is a completely different paradigm shift.
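The constraint described here, an optimisation engine bounded by how many settlement movements can actually be processed, can be sketched in miniature. This is a toy greedy cheapest-to-deliver allocation; the asset names, haircut-adjusted values, and funding costs are all invented for illustration, and real optimisers solve a far richer problem.

```python
# Toy "cheapest to deliver" collateral selection, capped by an operational
# constraint on how many settlement movements can be processed today.
# All names and numbers are hypothetical.

def allocate(requirement: float,
             inventory: list[tuple[str, float, float]],
             max_movements: int) -> list[tuple[str, float]]:
    """inventory entries: (asset, value_after_haircut, funding_cost_bps).
    Pick lowest-cost assets first, stopping when the requirement is met
    or the movement cap is hit."""
    chosen = []
    for asset, value, _cost in sorted(inventory, key=lambda a: a[2]):
        if requirement <= 0 or len(chosen) >= max_movements:
            break
        amount = min(value, requirement)
        chosen.append((asset, amount))
        requirement -= amount
    return chosen

inventory = [("UST 2Y", 60.0, 5.0), ("Corp bond", 100.0, 25.0), ("Cash USD", 40.0, 15.0)]
print(allocate(100.0, inventory, max_movements=2))  # [('UST 2Y', 60.0), ('Cash USD', 40.0)]
print(allocate(100.0, inventory, max_movements=1))  # [('UST 2Y', 60.0)] -- shortfall
```

The second call shows the speakers' point: with a tighter operational cap, the cheapest allocation cannot be fully delivered, which is exactly the friction faster settlement rails aim to remove.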
I think even from primary issuance in the primary market, to secondary trading, to then collateral management, all of that needs to be done in a digital framework. And so that's less tokenisation, but it's still the same underpinning technology. So I think the smart people out here trying to solve these problems aren't thinking of it as: today we have to settle in this way, then you have tokenisation that is slightly different, and then you might have digital assets that are something different again. I think you can blend all of that together and think of it as a continuum. We are where we are today, and it's functional and it works. And then there are things that will unlock, whether they be money market funds and tokenising them initially, just by the nature of: I have excess cash, I want to sweep that into a money market fund. But once I do that, I can't really do much with it. I get a return on my cash, but I'm not able to deploy that and send it to Jake as a margin requirement, even though it implicitly has value. If I tokenise it, of course, I could think about doing that in a much more nimble way. So, I think the industry's pursuing things as it always does: how do you take some form of binding constraint, improve upon it, make it more efficient, squeeze a little bit more financial juice out of what we're doing? And I think tokenisation certainly has a place in trying to solve some of those problems. Any thoughts, Jake? Yeah, a few. I like the fact that, from a technological point of view, Margin Manager in this approach would be that digital ledger, the single source of truth that everybody has to play off against. And the reason I like that is because early DLT technology did it in a more distributed approach, where you had different nodes in a network and everybody had to agree. That's commonly how a lot of distributed ledgers work, where blocks cannot get added to the blockchain without consensus, and all this stuff.
But you run the risk, if you distribute it, let's say in our scenario all three of us are trying to move a piece of collateral, that all three of us would have to have our systems up and running for any of it to be approved. But if, Will, you have an operational failure, your bank's technology fails today, now all of a sudden that piece of collateral cannot get moved because all of us don't agree. So it doesn't really work. But when you have Margin Manager as the source, I really like that. It's a trusted third party and, we're obviously a little biased, but it's up just about all the time, and you don't have to worry about someone lagging the system behind. I'd take another angle. You're right, the duration of a day is an interesting thing on its own. We're a global business and you see the split when you have firms in Asia facing firms in the Americas, and the time zone problems and shifts that happen. Folding into that equation, a bunch of exchanges on the planet, for a variety of commercial reasons, want to extend the days on which you can execute and clear trading. We're entering this world where you can build up risk over a more global time period. But the settlement rails that exist for whatever particular type of currency still have a business day to them. So, US Treasuries, you can't really settle them after 3 PM, US dollar cash after 6 PM technically, probably even a little before then to fund correctly. So you have real cutoffs that exist. And I think if the world does stretch out the trading day in a more globalised way, which by all counts does seem to be happening, intraday margining has to start happening. And thankfully, Margin Manager is an event-based infrastructure. It's not batching anything. Right. It's just firing off events. Today, literally today.
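The event-based versus batched distinction drawn here can be shown in a few lines. This is a minimal sketch, not Margin Manager's actual design: the handler, threshold, and counterparty names are all assumptions made up for illustration. The idea is simply that a requirement fires the moment an exposure event crosses a threshold, rather than waiting for an end-of-day batch run.

```python
# Minimal sketch of event-driven margining: each exposure update is
# handled as it arrives, and a call is issued immediately when the
# threshold is breached. All names and numbers are hypothetical.

calls_issued: list[tuple[str, float]] = []

def on_exposure_change(counterparty: str, exposure: float, threshold: float) -> None:
    """Fire a margin requirement as soon as exposure exceeds the
    threshold, instead of deferring to an overnight batch."""
    if exposure > threshold:
        calls_issued.append((counterparty, exposure - threshold))

# Intraday events arrive in sequence and are processed on arrival.
for cpty, exposure in [("Fund A", 12.0), ("Fund B", 4.0), ("Fund A", 18.0)]:
    on_exposure_change(cpty, exposure, threshold=10.0)

print(calls_issued)  # [('Fund A', 2.0), ('Fund A', 8.0)]
```

In a batch world, only Fund A's end-of-day position would be seen; in the event-driven sketch, both intraday breaches generate requirements as they happen, which is what extended trading days would demand.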
And, not that there's a huge amount of use for this, but today firms can issue intraday margin requirements if they have the legal rights to do so and everything else. So you can see the world happening where there will be a need to calculate risk much more dynamically, because the trading day's not going to be so boxed in as it is today. As a result, you need some settlement mechanism that isn't so constrained by time periods, and again, the digital networks might be able to solve those problems. And of course AI has a role, again, in terms of data in, data out, understanding what's happening. If I wanted to optimise risk, some tool that's giving you those insights, agentic solutions able to pick up more of those steps and unfold them. So, I think we sit in the middle again, as a mix of a trusted partner plus some products which people use as more of a network solution. I think we just see this natural evolution. We're prepared for that, obviously, and it is where we're trying to go. Our job, effectively, all of us here at this table, is to read the room, right? Read the industry in terms of where we're going and what our clients need us to do, and then try to flex in that direction. Much of what we're talking about today is pontificating, but there is an element of, we talk to clients about these things and try to get from them what they're thinking and how they're going to use this. And we see that, obviously, shifting how the industry works. Yeah. I mean, look, we're always client focused, and I know we're product led, but we're also always listening to our clients and making sure that we're responding to them. We could probably talk about this forever.
I know on the tokenisation side of things, from an industry point of view, there are a lot of concerns, but there are also a lot of positive responses, especially going back to the regulators. I just read ISDA's response to the CFTC. They point out regulatory guardrails, making sure that any new technology that comes out doesn't surpass those guardrails, stays within those guidelines. So there's so much more to unpack here. To close, I'm going to ask both of you, on either of the two topics that we're talking about today: if you had a crystal ball, what do you think you're going to see, or what would you hope to see, in tokenisation, AI, or maybe the intersection between the two, in the next year or two? What do you want to see? What are you hoping to see? I would hope to see collateral and portfolio reconciliation teams basically supercharged. Not that their teams need to get any smaller, but they're able to do more with the existing people they have on that team, and they can focus on other things besides the daily grunt work of making sure someone has done some very mundane task, actually fixing the things that might have been outstanding for years, or maybe even a decade at this point, that our system could tell you about based on historicals or things like that. So, I would say it's supercharging teams that maybe don't get a lot of love, that don't get a lot of new technology. That's great. Jake, we didn't prepare, by the way, for anyone listening to this, I didn't prepare Jake for that question. Off the cuff. And you got a little bit of time to think. I might pile on top of what he said. I mean, I think of this as, if we stick in the nucleus of collateral management, when things aren't working, you have either a dispute or you have a fail. And as a result, what you really want your collateral management function doing is understanding the risk associated with those things.
Is the fail an actual liquidity event that could be a default? Is a dispute something that I really need to care about? Unfortunately, those things are exceptionally clouded today by operational noise, and ops people spend an awful lot of their time just getting a process to run, trying to get to a point without really understanding what's driving problems. And so I think we do want the supercharging, you're right. I always equate it to, you want your ops people to look more like risk managers. And they can't today, because they're having to focus on the operational stuff, just making something go from step one to step two to step three to step four, and they're clouded by all the things that can go wrong in today's technical and operational processes. If we can use these technologies to clean that up, I think you can get to the point where the exceptions become real things that you need to care about, right? That could very well be a much easier way to isolate that there is some form of credit risk event that I need to care about. And that should be the purpose of collateral management. Well, I think you guys should come back around this time next year to talk about these two topics again. Sure. We're going to play this one back and see where we are a year from now. I'm really excited by a lot of this stuff. If I had a crystal ball, I would like to see the evolution to more autonomous AI, the agentic AI, and to see how smart it becomes, where it's solving some of these problems for our clients that we're talking about today. Keep that genie in a bottle. Yeah, well. I'm just kidding. I get it. I get it.
But at the same time, I think it's inevitable, in my own opinion. And I do think it could be helpful. As you said, maybe the humans who are doing some of these things could pivot to more value-added functions where you're, like you said, looking at risk, liquidity, capital, things that are really hitting your P&L, rather than getting your head down into solving the mundane, everyday problems that persist constantly. So hopefully we'll see that. We'll come back in about a year from now, we'll play this back, and then we'll talk about it. And then maybe we'll publish them both together again in 2026 so that people can see how wrong we were. No, hopefully we're a little bit right. Gentlemen, thank you so much for joining us today, and like I said, we'll be back to do this again. Thank you again for joining us today. I hope you found this one extremely interesting. I know I did. You can find our Ahead of the Curve podcast on all your favourite streaming services: Spotify, YouTube, and on lseg.com. Thank you for joining us, and we'll see you again soon.
