LSEG Insights

Why the global financial system needs high-quality data it can trust

David Schwimmer

CEO, LSEG
  • In artificial intelligence (AI), the value of data isn’t just in its volume but in its integrity and trustworthiness – poor data leads to unreliable results and AI risks such as hallucinations and bias.
  • Data transparency, security, and integrity – such as “watermarking” for financial data – are critical for compliance, customer confidence, and effective AI deployment.
  • Industry-wide coordination, standardised definitions of “data trust”, and interoperable regulations are essential to fostering reliable AI systems and scaling global financial innovation.

More than a century ago, reels of ticker tape were considered the cutting edge of real-time data technology. Today, digital data serves as the lifeblood of the global economic and financial system. However, without pinpoint accuracy and trust in that data, we risk detrimental consequences for the whole economy.

As a global data and analytics provider, LSEG (London Stock Exchange Group) delivers around 300 billion data messages to customers across 190 markets daily, including 7.3 million price updates per second.

We are also seeing how AI is transforming finance. It’s supercharging productivity internally and in our customers’ products, enhancing financial workflows by boosting efficiency, enabling more informed decisions, and strengthening the customer experience.

As the financial services sector continues to explore the possibilities of AI, there is an enormous appetite for data. This continues to grow: customer demand for our data has risen by around 40% per year since 2019.

But without the right data, even the best algorithms can deliver mediocre or, worse, misinformed results. Poor-quality data increases the risk of AI hallucinations, model drift and unintended bias. And the growing complexity of contracts and rights management in this field creates inherent challenges in avoiding licensing or contractual breaches.

Building on data integrity and digital rights

There are great new opportunities for processing large unstructured datasets through generative artificial intelligence (GenAI) models, but their worth is limited without trustworthy and licensed data. Data in GenAI isn’t just a quantity game; it’s a quality game.

Many businesses are critically considering how to embrace AI opportunities with high-quality data. At LSEG, we’ve developed a multi-layered strategy that may help guide others in the financial services industry.

The first layer is ensuring data integrity and relevance, which are critical requirements in large language models (LLMs). “GPT-ready” datasets – curated and validated by trusted data providers – are in high demand, and we expect that demand will grow as more businesses explore GenAI’s uses.

High-integrity data acts as a safety net when working with LLMs and other AI applications.
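
To make the first layer concrete, here is a minimal sketch in Python of the kind of integrity checks a data pipeline might run before a record ever reaches an LLM. The field names and thresholds are illustrative assumptions, not LSEG’s actual schema; the point is that every record is validated and attributed, or quarantined.

    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone

    @dataclass
    class PriceUpdate:
        instrument: str      # e.g. a ticker or ISIN
        price: float
        currency: str        # ISO 4217 code, e.g. "GBP"
        timestamp: datetime
        source: str          # identifier of the contributing venue or feed

    def integrity_problems(u: PriceUpdate,
                           max_age: timedelta = timedelta(seconds=5)) -> list[str]:
        """Return a list of integrity problems; an empty list means the record passes."""
        problems = []
        if not u.instrument:
            problems.append("missing instrument identifier")
        if u.price <= 0:
            problems.append(f"implausible price {u.price}")
        if len(u.currency) != 3:
            problems.append(f"malformed currency code {u.currency!r}")
        if datetime.now(timezone.utc) - u.timestamp > max_age:
            problems.append("stale quote: too old for real-time use")
        if not u.source:
            problems.append("unattributed record: provenance unknown")
        return problems

Records that fail any check are set aside rather than passed downstream, so a model never trains or answers on unverified data.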

The second layer is digital rights management. Customers expect solutions that verify which sources can or cannot be used in LLMs, govern responsible AI policies, protect against IP infringement and differentiate usage rights.
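
As a sketch of what this second layer could look like, the snippet below models usage rights as explicit, machine-checkable grants. The source names and rights categories are hypothetical; the design point is deny-by-default – a source whose licence is unknown grants no rights at all.

    from enum import Flag, auto

    class Usage(Flag):
        DISPLAY = auto()        # show to end users
        ANALYTICS = auto()      # internal quantitative analysis
        LLM_TRAINING = auto()   # use as model training data
        LLM_RETRIEVAL = auto()  # surface via retrieval-augmented generation

    # Hypothetical rights registry: each source's licence grants a set of usages.
    RIGHTS_REGISTRY: dict[str, Usage] = {
        "exchange_feed_a": Usage.DISPLAY | Usage.ANALYTICS,
        "licensed_news_b": Usage.DISPLAY | Usage.ANALYTICS | Usage.LLM_RETRIEVAL,
        "public_filings": Usage.DISPLAY | Usage.ANALYTICS | Usage.LLM_TRAINING,
    }

    def permitted(source: str, intended_use: Usage) -> bool:
        """Deny by default: a source absent from the registry grants no rights."""
        granted = RIGHTS_REGISTRY.get(source, Usage(0))
        return (granted & intended_use) == intended_use

    assert permitted("public_filings", Usage.LLM_TRAINING)
    assert not permitted("exchange_feed_a", Usage.LLM_TRAINING)  # licence excludes training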

Trust and transparency in financial data

These layers are underpinned by “data trust” – an approach to data built on a foundation of information transparency, security, and integrity.

When data leads to big decisions, customers need the peace of mind of being able to trace where data comes from and to confirm that it is secure, reliable and able to meet regulatory and compliance standards. Put simply, it’s “watermarking” for financial data.
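
“Watermarking” here is a metaphor for verifiable provenance. One illustrative mechanism – an assumption for this sketch, not a description of LSEG’s systems – is to attach a keyed signature to each record so that any tampering with the data or its stated source is detectable:

    import hashlib
    import hmac
    import json

    SIGNING_KEY = b"demo-key-not-for-production"  # in practice, held by the data provider

    def watermark(record: dict, source: str, key: bytes) -> dict:
        """Attach provenance metadata plus a tamper-evident signature."""
        tagged = {**record, "_source": source}
        payload = json.dumps(tagged, sort_keys=True).encode()
        tagged["_signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
        return tagged

    def verify(tagged: dict, key: bytes) -> bool:
        """Recompute the signature; any edit to the data or its provenance breaks it."""
        body = {k: v for k, v in tagged.items() if k != "_signature"}
        payload = json.dumps(body, sort_keys=True).encode()
        expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(tagged.get("_signature", ""), expected)

    quote = watermark({"instrument": "VOD.L", "price": 71.58}, "exchange_feed_a", SIGNING_KEY)
    assert verify(quote, SIGNING_KEY)
    quote["price"] = 99.99                  # any tampering...
    assert not verify(quote, SIGNING_KEY)   # ...is detected downstream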

All financial services companies must raise the bar on the calibre of their data.

To increase trust in data across the industry, we need greater standardisation, coordination, and a stable regulatory environment, underpinned by clear principles on AI’s responsible and ethical use.

The more standardised the industry’s definition of data trust, the easier it will be to ensure the flow of high-quality data. If the core principles of transparency, security and integrity of information are applied as the standard for data, we will be able to foster real-time, pinpoint accuracy across the sector.


Laying the ethical groundwork for innovation

The industry should aim for the highest level of transparency so that customers can see what a dataset contains, who owns it, and how it is licensed for use.
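
In practice, that transparency could take the form of a machine-readable manifest shipped with every dataset. The fields below are an illustrative assumption of what such a manifest might record, not an industry standard:

    # Hypothetical dataset manifest: the metadata a customer inspects before
    # licensing or ingesting the data. All field names and values are illustrative.
    MANIFEST = {
        "name": "eod_equity_prices_eu",
        "contents": "End-of-day equity prices, 2015-2024, 14 EU venues",
        "owner": "Example Data Provider Ltd",
        "licence": {
            "permitted_uses": ["display", "analytics"],
            "llm_training": False,
            "redistribution": False,
            "expires": "2026-12-31",
        },
        "provenance": "aggregated from venue feeds; validation rules versioned",
        "last_audited": "2025-01-10",
    }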

Regulations such as the European Union’s AI Act and the Digital Operational Resilience Act introduce safeguards, clear accountability and a focus on governance and preparedness in financial services.

Voluntary guidance, including the National Institute of Standards and Technology’s AI Risk Management Framework in the United States, can also help organisations measure and manage the risks associated with AI systems and their data.

These regulations serve as good starting points for how the financial sector should continue to develop safe and fair AI practices. They have inspired our own Responsible AI Principles at LSEG.

Moving forward, policymakers must recognise the need for high-quality data as we develop the AI-enabled tools of the future.

We support the use of internationally agreed-upon definitions relevant to AI and data. We also need more rigorous parameters for managing intellectual property and digital rights.

 

The path to global AI regulation

At the same time, regulatory requirements for technology must become more interoperable. The more rules diverge between jurisdictions, the more difficult it is for global companies to scale up quickly.

When companies make business decisions across different jurisdictions, this can affect everything from the location of a data centre to the choice of a cloud provider.

As AI technology develops, policymakers should ensure legislation is flexible enough to align with other jurisdictions while remaining relevant for upcoming AI use cases.

None of this will be easy, but businesses in the financial and tech sectors, regulators and consumers can all contribute to this conversation. We will need a wide range of expertise and understanding as we embrace a technology that will alter our lives.

For AI to meet its potential in addressing the world’s biggest challenges, we must be able to trust the data that’s going into it.

 

First published by the World Economic Forum on 20 January 2025.

 


