Chanice Henry: Artificial intelligence, machine learning and large language models bring the promise of a competitive edge for financial services firms in today's volatile macroeconomic climate. But in the race to unlock this opportunity, organisations must work to overcome system limitations and avoid a growing wealth of misinformation.
So how can financial services organisations derive new value from these emerging technologies without compromising on quality, integrity, and trust?
I'm Chanice Henry, senior editor at FT Longitude, and joining me to discuss this is Andrew Chin, Head of Investment Solutions and Sciences at AllianceBernstein. Thanks, Andrew, for joining me today.
Andrew Chin: Hi Chanice, it’s great to be here. Thank you for inviting me.
Chanice Henry: Andrew, you've got a wealth of experience across data science and risk management in financial services. Could you start by painting a picture of how, in your opinion, the sector has changed in recent years and in particular the impact that emerging technologies, so AI or artificial intelligence, machine learning and large language models, are having?
Andrew Chin: So, I would say I started my journey about seven years ago, when I first created our data science efforts here at AllianceBernstein. What's really changed over the last four or five years is that we have a lot more data to deal with. There's higher volume, more variety, and it's coming at us much faster, even on my iPhone. So what that means is that there's a lot more information that I need to consider. And on top of that, the tools we can use to synthesise the data, to understand the data, to analyse the data, have really exploded: large language models, AI, machine learning, as you've said. So the opportunity to analyse this data much more quickly, in a much more efficient way, that's also changed. So as I think about the industry, it's really those two aspects that have impacted our organisation.
Chanice Henry: Yeah, absolutely. There's more variety in data than we've ever seen before, the volume is growing exponentially, and that can bring difficulties: what do we trust? Which data points are reliable enough, fit for purpose, for a high-stakes decision? And what should just be taken as contextual and needs to be verified? This idea of misinformation is a conversation I'm seeing come up more and more. Fake news, what can you trust? And of course, when a sophisticated system such as AI, machine learning or a large language model comes in, it's great that it can crunch massive amounts of data, but you need to make sure the data being fed in is trustworthy and reliable. Have you, in your experimentation, figured out any dos and don'ts for safeguarding the trustworthiness of these technologies' outputs, given the data that's going in?
Andrew Chin: Yes, you're certainly right. Data quality, that is the most important thing. People say 'garbage in, garbage out', but it's a lot more important today. With the small amount of data we had before, you could easily, visually, see the outliers in the data, right? That's how you knew, "Oh, actually, that's probably bad data," and could think about whether you wanted to use that dataset or not. With the volume of data we have today, it's impossible to visually spot whether a data point is an outlier or misinformation. So we have to think of other ways to ensure that the data quality is high. There are a variety of things that we can do. One is obviously to work with a vendor, with a source, that you can trust.
The second thing is, even once you have this data, you will need to cleanse it, you will need to massage it in a way that works for you. Especially when using large language models, there is a lot you need to do to mitigate what are called hallucinations: the tendency for these models to answer using irrelevant or simply incorrect data.
What we do on our side is, first, we fine-tune these models. Usually these large language models are not fit for purpose for what we want to do in our specific domain. We're in finance; I may have a specific need in equities or in fixed income. So fine-tuning the models with examples, so that they work in your specific sector, is really important.
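To make the fine-tuning step concrete, here is a minimal sketch, not AllianceBernstein's actual pipeline, of adapting a small open model to a finance domain using the Hugging Face transformers and datasets libraries. The base model and the two training examples are illustrative placeholders.

# A minimal domain fine-tuning sketch (assumptions: Hugging Face
# `transformers` and `datasets` installed; model and data are placeholders).
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "distilgpt2"  # small stand-in for a larger base LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 family has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical finance examples; a real set would have thousands of pairs.
examples = [
    {"text": "Q: What does duration measure? A: A bond's price sensitivity to interest-rate changes."},
    {"text": "Q: What is a mortgage-backed security? A: A bond backed by a pool of home loans."},
]
dataset = Dataset.from_list(examples).map(
    lambda row: tokenizer(row["text"], truncation=True, max_length=128),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-finance", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # nudges the generic model toward the finance domain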
Second is prompting. Prompt engineering has become a whole new field, but it's important that you ask the question correctly, just like the way you're asking me questions to get me to explain something better. It's the same thing: you have to ask good questions to prompt a good response.
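As a rough illustration of the prompting point, compare a vague request with one that states a role, supplies context and constrains the output. The ask_llm helper here is hypothetical, a stand-in for whichever model client you use.

# Prompt engineering sketch: the same request, asked two ways.
vague_prompt = "What do you think of this company?"

structured_prompt = """You are a fixed-income analyst.
Using ONLY the earnings-call excerpt below, answer in three bullet points:
1. Revenue trend versus last quarter.
2. Any change in credit or liquidity risk.
3. Management's guidance, quoted verbatim if given.
If the excerpt does not contain the answer, say "not stated".

Excerpt:
{excerpt}
"""

# Hypothetical call; wire to your model client of choice:
# answer = ask_llm(structured_prompt.format(excerpt=transcript_text))

The structured version earns its keep by giving the model less room to improvise, which is exactly the hallucination risk described above.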
I would say the third thing that we really try to do is we really still try to hire subject matter experts. Just because you have more powerful tools doesn't mean anybody, a dummy like me, can potentially use these models. If you're using these models specifically for mortgage-backed securities, as an example, you still need a subject matter expert in that particular domain so that they can ask the appropriate questions and when they get an answer, ensure it's the right quality and it's in the domain that you're looking for. So you can't just have anybody operate these tools. So I would say that domain experts, subject matter experts, will remain important in this industry.
In our organisation, we ensure that it's the humans, it's the employees that have final accountability. And while these tools are helpful, you know, we want to make sure that the ownership still lies with the individuals.
Chanice Henry: There's a lot of excitement about how we can be using these technologies and people are going from sandbox experimentations and now trying to roll things out so they can see some returns and some wins.
And of course the question is, well, what is the low-hanging fruit of these powerful technologies? I think to dig into this, it would be good to break it down into a few areas, maybe front, middle and back office within financial services.
So if we start with the front office in financial services: what are the quick wins that AI, machine learning and large language models can bring to the performance and decisions of front-office employees? And again, any dos and don'ts to make sure you're not doing more harm than good?
Andrew Chin: I would say, broadly speaking, the lowest-hanging fruit is around natural language processing. And the reason is that in our industry we work with so many documents, whether in the front office or the back office; it's always documents, always text. So these natural language tools are really perfect for the task. Broadly speaking, that's where the opportunities are.
Within natural language processing or AI, I would say that these tools are typically used for a variety of different tasks. One is interpretation, trying to understand something. Secondly, it's around summarisation. Can I summarise this article very quickly? Third is chatbots. Fourth is content creation. Can I create content based on what I know? And then I would say the last one is prediction. So I would say those are the broad categories by which these large language models are typically being used.
But the main benefit that I see today on the investing side is really around making analysts more efficient. Instead of listening to earnings calls from 50 companies today, I can synthesise them very quickly. Instead of trying to see what all the corporate filings from my energy sector say, I can just synthesise them and say, "Well, on a macro level, this is what all the energy companies are saying." So it's a lot more efficient from that perspective.
And then maybe an example from the operational side. On the operational side, we're reading lots of documents, usually for compliance reasons: "Can I invest in this security?", "Are there restrictions on this security that make it harder for me to invest?" Now you can sift through a 200-page document very quickly, and operationally it makes the process a lot easier. So we've certainly seen gains in both of those areas. On the operational side, we've easily seen productivity gains of around 50%, really helping our operational workflows tremendously.
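One common pattern for getting a model through a 200-page filing, sketched here with a placeholder summarize helper rather than any particular vendor's API, is to split the text into chunks that fit the context window, summarise each chunk, then summarise the summaries.

# Chunk-then-summarise sketch for long documents. `summarize` is a
# placeholder; the chunk size depends on the model's context window.
def summarize(text: str) -> str:
    """Placeholder: send `text` to an LLM and return its summary."""
    raise NotImplementedError("wire this to your model client")

def chunk(text: str, max_chars: int = 8000) -> list[str]:
    """Split a long document into pieces small enough for one prompt."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def summarize_long_document(document: str) -> str:
    partial = [summarize(piece) for piece in chunk(document)]  # map step
    return summarize("\n".join(partial))                       # reduce step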
Chanice Henry: I'd like to dig a little more into the operational side, because I feel the front-office side of things tends to get a lot of the limelight, but there are still a lot of gains to be made operationally, so I'd like to pause there a little longer. I'd be interested to know more about the risk management side of things, and also liquidity management, where AI can be used to fast-track decision-making and reduce gaps in understanding. Any observations in those areas that you'd like to share?
Andrew Chin: Certainly. We look to news organisations, like the FT, to cover stories all around the world. The difficulty for us is: how do you read all these news sources, from all around the world, in an efficient way? From a risk monitoring perspective, one of the things we try to do is highlight whether any potential risks surrounding our companies are coming up anywhere in the world. So we are using large language models to sift through all the news articles that impact the holdings we have, that impact the companies that we hold. One specific example is that we look for ESG-related issues. Slave labour, for example, is a big issue for us, and we want to understand potential instances of slave labour around the world and how they may impact companies that we hold.
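As a concrete illustration of that kind of news filter, and only a sketch under assumed names, not AB's actual system, an LLM can be asked, per article, whether it alleges forced labour involving a held company, with a cheap keyword pre-filter to keep model calls down.

# ESG news-filter sketch. `classify` is a placeholder for a real model
# call; HOLDINGS and the articles are toy data.
HOLDINGS = {"Acme Apparel", "Globex Mining"}  # hypothetical portfolio names

PROMPT = """Does the article below allege forced or slave labour involving
{company}? Answer YES or NO, then give one sentence of evidence.

Article:
{article}
"""

def classify(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its reply."""
    raise NotImplementedError("wire this to your model client")

def flag_esg_risks(articles: list[str]) -> list[tuple[str, str]]:
    flags = []
    for article in articles:
        for company in HOLDINGS:
            if company.lower() in article.lower():  # cheap pre-filter
                verdict = classify(PROMPT.format(company=company,
                                                 article=article))
                if verdict.strip().upper().startswith("YES"):
                    flags.append((company, verdict))
    return flags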
So these filters act as really quick ways to get to the things that you're looking for, get to the themes that you're looking for. That's just around ESG, but you can imagine this being relevant for lots of different things. For risk purposes, for compliance purposes, for operational issues, for new opportunities. You may want to invest in a certain market, you might want to invest in a new technology, so you can imagine these tools being very helpful for that.
Chanice Henry: There's a lot of discussion at the moment around how to scale up adoption of these technologies safely but productively: how they're being rolled out and used, and what guardrails you embed, without going so far that you stifle productivity. Any insights from your side on what works in those areas, and where you see good potential moving forward?
Andrew Chin: I would say that it starts with having accountability, ownership of the decisions. You have to say, at the very top, "I am responsible for these decisions." Knowing that you're responsible, what does that imply? For me, it usually implies, "Okay, I'm going to customise these models and make them fit for purpose for the task that I have at hand." So what I suspect going forward is that even though we have some core models, ChatGPT, Gemini and things like that, each of us is going to customise these models and make them fit for purpose for very particular things. For example, I may say, "I have a large language model just to summarise earnings transcripts," or "I have a large language model just to find ESG risks in news articles." What that means is those models can now be smaller and easier for me to use; that means they run faster, they're less expensive, things like that. But more importantly, I can now fine-tune these models for those specific use cases and make sure that they work in that area.
I think it's very difficult for any one of us to create a mega model like a GPT-scale model, but we can certainly take some of these tools and make them fit for purpose for the smaller problems that we have. That's the commercial side of how we use some of these large models.
And then finally, as I think about organisations and how they may prepare for the future, one of the things they need to do is really prepare and train their employees. And I'm not just talking about the data scientists; of course you have those. But for employees who are making investments, employees who are doing the operational tasks, you want to make sure that they know how to use these tools effectively in their domain.
Chanice Henry: Yeah, absolutely. As we close, we've chatted about a lot of topics today, but if you could boil it down to one golden rule that you'd share with other business leaders around embracing these sophisticated technologies, what would that be?
Andrew Chin: I guess what I would say is, there is probably more hype right now than is warranted, but the long-term potential is there. As we look at financial services over the coming years, we will probably be disappointed by what happens in the next couple of years, but I think we'll be awed by what we see in the next 10. And I think the rationale is that, as I said, we're so excited about what GPT can potentially do that we're all saying, "Okay, okay, let's see what's next." But we're going to be disappointed, because these tools need that calibration, need the fine-tuning, need the attention and care, need the subject matter experts. So some of us will lose interest, and that's disappointing. But in 10 years, firms like ours will hopefully have made a lot more progress, and people will say, "Wow, you've made that much progress within a 10-year period."
So I would say don't be too disappointed by what you see in the next few years, because there's a lot of promise and potential in what we're trying to do here.
Chanice Henry: Yeah, absolutely right. And that's a natural part of the adoption curve with a new concept or new tech. The hype skyrockets and everybody reaches fever pitch, then things taper down and some people fall away. But then it comes back up and plateaus at the real value, sifted out from the noise: what do we actually have? So you're quite right: persevere through that point that is inevitably going to come when the dust starts to settle, hold firm and keep looking for that value, because it is there. I think you're spot on with that.
Andrew Chin: You said it really well, Chanice. Thank you for that.
Chanice Henry: Well, thanks so much for your time today, Andrew. It's been great to chat, and really insightful to go through these topics together.
Andrew Chin: Thank you for the opportunity.
This interview is part of a research study produced by FT Longitude in partnership with London Stock Exchange Group.