David Schwimmer
- In artificial intelligence (AI), the value of data isn’t just in its volume but in its integrity and trustworthiness – poor data leads to unreliable results and AI risks such as hallucinations and bias.
- Data transparency, security, and integrity—such as “watermarking” for financial data—are critical for compliance, customer confidence, and effective AI deployment.
- Industry-wide coordination, standardised definitions of “data trust”, and interoperable regulations are essential to fostering reliable AI systems and scaling global financial innovation.
More than a century ago, reels of ticker tape were considered the cutting edge of real-time data technology. Today, digital data serves as the lifeblood of the global financial and economic system. However, without pinpoint accuracy and trust in that data, we risk detrimental consequences for the whole economy.
As a global data and analytics provider, LSEG (London Stock Exchange Group) delivers around 300 billion data messages to customers across 190 markets daily, including 7.3 million price updates per second.
We are also seeing how AI is transforming finance. It’s supercharging productivity internally and in our customers’ products, enhancing financial workflows by boosting efficiency, enabling more informed decisions, and strengthening the customer experience.
As the financial services sector continues to explore the possibilities of AI, there is an enormous appetite for data, and it continues to grow: customer demand for our data has risen by around 40% per year since 2019.
But without the right data, even the best algorithms can deliver mediocre – or, worse, misinformed – results. Poor-quality data increases the risk of AI hallucinations, model drift and unintended bias. And the growing complexity of contracts and rights management in this field creates inherent challenges in avoiding licensing or contractual breaches.
Building on data integrity and digital rights
There are great new opportunities for processing large unstructured datasets through generative artificial intelligence (GenAI) models, but their worth is limited without trustworthy and licensed data. Data in GenAI isn’t just a quantity game; it’s a quality game.
Many businesses are thinking critically about how to embrace AI opportunities with high-quality data. At LSEG, we’ve developed a multi-layered strategy that may help guide others in the financial services industry.
The first layer is ensuring data integrity and relevance, which are critical requirements in large language models (LLMs). “GPT-ready” datasets – curated and validated by trusted data providers – are in high demand, and we expect that demand will grow as more businesses explore GenAI’s uses.
High-integrity data acts as a security net when working with LLMs and other AI applications.
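To make “curated and validated” concrete, the sketch below shows the kind of automated integrity checks a “GPT-ready” record might have to pass before entering an LLM corpus. It is a minimal illustration, not a description of LSEG’s actual pipeline; the field names, thresholds and sample values are assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class PriceRecord:
    instrument: str      # hypothetical identifier, e.g. an ISIN
    price: float
    currency: str        # ISO 4217 code
    timestamp: datetime  # timezone-aware time of issue
    source: str          # originating venue or provider

def integrity_issues(record: PriceRecord, max_age_seconds: float = 60.0) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    issues = []
    if not record.instrument:
        issues.append("missing instrument identifier")
    if record.price <= 0:
        issues.append("non-positive price")
    if len(record.currency) != 3 or not record.currency.isalpha():
        issues.append("currency is not a three-letter code")
    age = (datetime.now(timezone.utc) - record.timestamp).total_seconds()
    if age > max_age_seconds:
        issues.append(f"stale record ({age:.0f}s old)")
    if not record.source:
        issues.append("unattributed source")
    return issues

# Only records with no issues would be admitted to a training or retrieval corpus.
record = PriceRecord("GB0001234567", 101.25, "GBP",
                     datetime.now(timezone.utc), "exchange-feed-A")
print(integrity_issues(record))  # [] -> accepted
```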
The second layer is digital rights management. Customers expect solutions that verify which sources can or cannot be used in LLMs, govern responsible AI policies, protect against IP infringement and differentiate usage rights.
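One way to picture this layer is as machine-readable licence metadata that an AI workflow must consult before touching a dataset. The following sketch is purely illustrative – the usage categories and licence fields are assumptions, not any real product’s schema.

```python
from dataclasses import dataclass, field

# Hypothetical usage categories a licence might distinguish.
KNOWN_USES = {"display", "analytics", "llm_training", "llm_retrieval"}

@dataclass
class DatasetLicence:
    dataset_id: str
    owner: str
    permitted_uses: set = field(default_factory=set)

def use_is_permitted(licence: DatasetLicence, intended_use: str) -> bool:
    """Gate an AI workflow on the usage rights recorded for the dataset."""
    if intended_use not in KNOWN_USES:
        raise ValueError(f"unknown use category: {intended_use}")
    return intended_use in licence.permitted_uses

licence = DatasetLicence("bond-prices-2024", "VendorX", {"display", "analytics"})
print(use_is_permitted(licence, "analytics"))     # True: analysis is licensed
print(use_is_permitted(licence, "llm_training"))  # False: training is not
```

In practice such checks would sit inside a broader entitlements system, but the principle is the same: the rights travel with the data.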
Trust and transparency in financial data
These layers are underpinned by “data trust,” an approach to data that is built on the foundation of information transparency, security, and integrity.
When data leads to big decisions, customers need to be able to trace where data comes from and to be confident that it is secure, reliable and able to meet regulatory and compliance standards. Put simply, it’s “watermarking” for financial data.
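As a rough picture of what such “watermarking” could mean in practice, the sketch below attaches a tamper-evident provenance stamp – source, timestamp and a cryptographic content hash – that any consumer can recompute to confirm the record was not altered after issue. This is a minimal illustration under that assumption, not a standard or an LSEG mechanism; a real system would also sign the stamp so the source itself can be authenticated.

```python
import hashlib
import json
from datetime import datetime, timezone

def stamp(record: dict, source: str) -> dict:
    """Attach a provenance stamp: source, timestamp, and a content hash."""
    payload = json.dumps(record, sort_keys=True).encode()
    return {
        "record": record,
        "source": source,
        "issued_at": datetime.now(timezone.utc).isoformat(),
        "digest": hashlib.sha256(payload).hexdigest(),
    }

def verify(stamped: dict) -> bool:
    """Recompute the hash; a mismatch means the record changed after issue."""
    payload = json.dumps(stamped["record"], sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest() == stamped["digest"]

stamped = stamp({"instrument": "XYZ", "price": 99.5}, "exchange-feed-A")
print(verify(stamped))              # True: record intact
stamped["record"]["price"] = 100.0  # simulate tampering
print(verify(stamped))              # False: hash no longer matches
```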
All financial services companies must raise the bar on the calibre of their data.
To increase trust in data across the industry, we need greater standardisation, coordination, and a stable regulatory environment, underpinned by clear principles on AI’s responsible and ethical use.
The more standardised the industry definition of data trust, the easier it will be to ensure the flow of high-quality data. If the core principles of transparency, security and integrity of information are applied as the standard for data, we will be able to foster real-time, pinpoint accuracy across the sector.
Laying the ethical groundwork for innovation
The industry should aim for the highest level of transparency so that customers can see what a dataset contains, who owns it, and how it is licensed for use.
Regulations such as the European Union’s AI Act and the Digital Operational Resilience Act introduce safeguards, clear accountability and a focus on governance and preparedness in financial services.
Voluntary guidance, including the National Institute of Standards and Technology’s AI Risk Management Framework in the United States, can also help organisations measure and manage risks to AI systems and data.
It’s clear these regulations serve as good starting points for how the financial sector should continue to develop safe and fair AI practices. They have inspired our own Responsible AI Principles at LSEG.
Moving forward, policymakers must recognise the need for high-quality data as we develop the AI-enabled tools of the future.
We support the use of internationally agreed-upon definitions relevant to AI and data. We also need more rigorous parameters for managing intellectual property and digital rights.
The path to global AI regulation
At the same time, regulatory requirements for technology must be more interoperable. The more jurisdiction-specific the rules, the more difficult it is for global companies to scale up quickly.
When companies need to make business decisions in different jurisdictions, this can impact everything from the location of a data centre to the choice of a cloud provider.
As AI technology develops, policymakers should ensure legislation is flexible enough to align with other jurisdictions while remaining relevant for upcoming AI use cases.
None of this will be easy, but businesses in the financial and tech sectors, regulators and consumers can all contribute to this conversation. We will need a broad range of expertise and perspectives as we embrace the technology that will alter our lives.
For AI to meet its potential in addressing the world’s biggest challenges, we must be able to trust the data that’s going into it.
First published by the World Economic Forum on 20 January 2025.
Legal Disclaimer
Republication or redistribution of LSE Group content is prohibited without our prior written consent.
Copyright © 2024 London Stock Exchange Group. All rights reserved.
The content of this publication is provided by London Stock Exchange Group plc, its applicable group undertakings and/or its affiliates or licensors (the “LSE Group” or “We”) exclusively.
Neither We nor our affiliates guarantee the accuracy of or endorse the views or opinions given by any third party content provider, advertiser, sponsor or other user. We may link to, reference, or promote websites, applications and/or services from third parties. You agree that We are not responsible for, and do not control such non-LSE Group websites, applications or services.
The content of this publication is for informational purposes only. All information and data contained in this publication is obtained by LSE Group from sources believed by it to be accurate and reliable. Because of the possibility of human and mechanical error as well as other factors, however, such information and data are provided "as is" without warranty of any kind. You understand and agree that this publication does not, and does not seek to, constitute advice of any nature. You may not rely upon the content of this document under any circumstances and should seek your own independent legal, tax or investment advice or opinion regarding the suitability, value or profitability of any particular security, portfolio or investment strategy. Neither We nor our affiliates shall be liable for any errors, inaccuracies or delays in the publication or any other content, or for any actions taken by you in reliance thereon. You expressly agree that your use of the publication and its content is at your sole risk.
To the fullest extent permitted by applicable law, LSE Group expressly disclaims any representation or warranties, express or implied, including, without limitation, any representations or warranties of performance, merchantability, fitness for a particular purpose, accuracy, completeness, reliability and non-infringement. LSE Group, its subsidiaries, its affiliates and their respective shareholders, directors, officers, employees, agents, advertisers, content providers and licensors (collectively referred to as the “LSE Group Parties”) disclaim all responsibility for any loss, liability or damage of any kind resulting from or related to access, use or the unavailability of the publication (or any part of it); and none of the LSE Group Parties will be liable (jointly or severally) to you for any direct, indirect, consequential, special, incidental, punitive or exemplary damages, howsoever arising, even if any member of the LSE Group Parties is advised in advance of the possibility of such damages or could have foreseen any such damages arising or resulting from the use of, or inability to use, the information contained in the publication. For the avoidance of doubt, the LSE Group Parties shall have no liability for any losses, claims, demands, actions, proceedings, damages, costs or expenses arising out of, or in any way connected with, the information contained in this document.
LSE Group is the owner of various intellectual property rights ("IPR”), including but not limited to, numerous trademarks that are used to identify, advertise, and promote LSE Group products, services and activities. Nothing contained herein should be construed as granting any licence or right to use any of the trademarks or any other LSE Group IPR for any purpose whatsoever without the written permission or applicable licence terms.