
Trustworthy AI needs trustworthy data 

LSEG and Eurasia Group

The fact that Artificial Intelligence (AI) runs on data is repeated so often as to be a truism, yet in policy conversations data, and specifically data governance, are often treated as matters separate from AI.

To ensure AI can reach its full potential, we need to connect the dots between AI, data inputs and data policy.

In our ‘Trustworthy AI needs trustworthy data’ report, LSEG and Eurasia Group outline why data governance matters for AI and the main issues at stake. We highlight some innovative solutions that businesses are deploying and, finally, suggest considerations for both government and industry to foster responsible innovation and a pro-growth environment.    

This report was originally published by Eurasia Group on February 10, 2025.

LSEG’s responsible AI principles

  • Accurate and Reliable: Accuracy and reliability in AI mean that AI systems perform consistently, in line with their intended functions and under varying conditions. The aim of this requirement is to ensure that AI systems perform correctly and dependably, delivering the expected results to users.
  • Accountable and Auditable: Accountability and auditability in AI reflect the extent to which information about an AI system and its outputs is available to the individuals interacting with it. AI systems must operate under clear governance and must flag AI-generated outputs.
  • Safe: AI systems must be designed and tested to prevent harm to users and the environment, ensuring their operation does not pose undue risks. This principle involves identifying potential risks and actively mitigating them.
  • Secure and Resilient: AI systems must be secured against unauthorised access and attacks, with robust measures to ensure their resilience and maintain the integrity of data and operations. AI systems must have protocols in place to prevent, withstand and respond to attacks, and must not reveal LSEG intellectual property to unauthorised users.
  • Interpretable and Explainable: Interpretability and explainability in AI involve detailing how the underlying technology works and how the AI model reached a given output. This principle focuses on offering users information that helps them understand the functionality, purpose and potential limitations of an AI system.
  • Privacy-enhanced: AI systems must prioritise the protection of personal data, ensuring that user privacy is upheld through robust data handling and anonymisation techniques. This principle emphasises the protection of personal and sensitive information by AI systems, and compliance with existing privacy regulation and LSEG governance.
  • Fair with Bias Managed: Developers and users of AI should identify and mitigate biases in AI systems, which can otherwise lead to unfair outcomes (see the illustrative sketch after this list). This principle focuses on the need for AI systems that are fair and in line with LSEG’s values and culture.
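To make the "Fair with Bias Managed" principle concrete, the sketch below shows one common way to identify potential bias in a model's outputs: comparing positive-prediction rates across groups (the demographic parity gap). This is an illustrative example only, not LSEG's methodology; the data, function name and the 0.1 tolerance are all hypothetical.

```python
# Illustrative only: a minimal demographic-parity check on model outputs.
# The data, names and the 0.1 tolerance below are hypothetical examples,
# not LSEG's methodology.

from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rates between
    groups, along with the per-group rates.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels, aligned with predictions
    """
    positives = defaultdict(int)
    totals = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    preds = [1, 0, 1, 1, 0, 1, 0, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    gap, rates = demographic_parity_gap(preds, groups)
    print(f"Positive rates by group: {rates}")
    print(f"Demographic parity gap: {gap:.2f}")
    # A gap above a chosen tolerance would prompt further review.
    if gap > 0.1:
        print("Gap exceeds tolerance - flag for bias review.")
```

In practice, a check like this would be run across multiple protected attributes and paired with other fairness metrics, since no single measure captures every form of bias.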

Legal Disclaimer

Republication or redistribution of LSE Group content is prohibited without our prior written consent. 

The content of this publication is for informational purposes only and has no legal effect, does not form part of any contract, does not, and does not seek to, constitute advice of any nature, and no reliance should be placed upon statements contained herein. Whilst reasonable efforts have been taken to ensure that the contents of this publication are accurate and reliable, LSE Group does not guarantee that this document is free from errors or omissions; therefore, you may not rely upon the content of this document under any circumstances and you should seek your own independent legal, investment, tax and other advice. Neither we nor our affiliates shall be liable for any errors, inaccuracies or delays in the publication or any other content, or for any actions taken by you in reliance thereon.

Copyright © 2024 London Stock Exchange Group. All rights reserved.