Towards a unified roadmap for responsible AI policy

Jenny Cosco 

Global Head of Government Relations & Regulatory Strategy

Sabrina Feng

CRO, Technology, Cyber and Resilience
  • Policymakers have taken welcome steps to agree on and communicate what is required to make responsible, safe and trustworthy use of AI. Businesses need time to take this guidance on board, customise their internal processes and ensure uniform responsible AI development and deployment across jurisdictions.
  • To avoid fragmentation in approaches, the G7 and policymakers more broadly should take stock of progress, and harmonise and streamline frameworks before the next phase of policy development, paving the way towards a unified roadmap for responsible AI policy.

How policymakers are helping to manage AI risks and opportunities

On 13 June, G7 Leaders will arrive in Apulia, Italy for this year’s annual Summit. Without a doubt, much of their focus will be on a collective response to the increasingly tense geopolitical environment. With growing regional conflicts, pervasive economic instability, and the omnipresent risk of further supply chain disruptions, coordination between nations is as important as ever. The need for international cooperation also applies to cross-cutting technological advancements that are leading to tectonic shifts, most prominently in artificial intelligence (AI).

AI has unprecedented potential to transform our societies and economies, including the financial services industry that LSEG serves. Introducing AI functionalities into platforms and workflows can significantly increase productivity and enable the creation of more valuable insights to inform strategies and decision-making. AI has the potential to open up access to the digital economy and, with trusted data, improve people’s livelihoods.

But like many other transformative technological developments, AI does not come without risks. In recent years, policymakers have taken steps to agree on what’s required to make responsible, safe and trustworthy use of this technology. The Bletchley Declaration on AI safety, the G7’s International Guiding Principles and voluntary Code of Conduct on AI, the OECD’s updated AI Principles, the White House’s Executive Order on AI and the EU’s AI Act are just a few examples.

There is now an array of valuable guidance in place. This enables businesses to consider the risks and opportunities of AI and take appropriate steps to develop their own sophisticated approaches. But businesses need time to take this guidance on board, customise their internal processes and ensure uniform responsible AI development and deployment across jurisdictions.

Now, governments should pause to take stock of and build on existing initiatives, coordinate approaches, and streamline work where possible. The onus is not just on policymakers to shape the future of AI: maximising the benefits of AI requires businesses to demonstrate clearly that they will use it responsibly. Doing so will enable businesses to scale their governance and work closely with policymakers on the development of practical AI regulation that ensures the safety and privacy of users and consumers.

LSEG’s Responsible AI Principles

At LSEG, we have developed our own set of Responsible AI Principles, which draw on existing frameworks to guide our approach to AI adoption. Through our Principles, we aim to enable the safe and responsible use of AI for our employees and customers, accelerating growth and enhancing delivery capabilities throughout the financial services lifecycle.

One of the tools that LSEG has used to inform this effort is the US National Institute of Standards and Technology’s AI Risk Management Framework.[1] Released in January 2023, the framework is designed to be updated as AI technologies evolve. Its four “Core” functions for addressing risks in AI systems are Govern, Map, Measure, and Manage, and these are overlaid onto the LSEG AI risk management framework. We are also considering how “high risk” is defined across emerging regulations around the world, including the EU’s AI Act,[2] to inform our approach to responsible AI. To encourage the right internal approach, we now have internal controls that help us identify and address different levels of risk from the earliest stages of development.
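To make this concrete, the sketch below (in Python) shows one way a risk-tiering control of this kind could be expressed. It is a minimal, hypothetical illustration: the attributes, tiers and review steps are assumptions made for the example, not LSEG’s actual controls, the NIST AI RMF itself, or the EU AI Act’s risk categories.

```python
# Hypothetical sketch of a risk-tiering control for AI use cases.
# All names, attributes and review steps are illustrative assumptions,
# not LSEG's internal controls or any regulation's actual criteria.
from dataclasses import dataclass


@dataclass
class AIUseCase:
    name: str
    processes_personal_data: bool
    informs_financial_decisions: bool
    customer_facing: bool


def risk_tier(uc: AIUseCase) -> str:
    """Classify a use case before deciding how to manage it, echoing
    the Map/Measure functions of the NIST AI RMF."""
    if uc.informs_financial_decisions or uc.processes_personal_data:
        return "high"
    if uc.customer_facing:
        return "limited"
    return "minimal"


# Governance steps required per tier (the Govern/Manage side).
REVIEW_STEPS = {
    "high": ["model validation", "bias testing", "privacy review", "senior sign-off"],
    "limited": ["bias testing", "output labelling"],
    "minimal": ["developer self-assessment"],
}

if __name__ == "__main__":
    uc = AIUseCase("research summarisation assistant", False, False, True)
    tier = risk_tier(uc)
    print(f"{uc.name}: tier={tier}; required steps: {REVIEW_STEPS[tier]}")
```

The value of encoding such a control, even in this simplified form, is that every proposed use case passes through the same classification gate before development proceeds, which is what allows risk to be identified and addressed from the earliest stages.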

LSEG’s Responsible AI Principles

  • Accurate and Reliable: Accuracy and reliability in AI involve the consistent performance or results of AI systems according to their intended functions and in varying conditions. The aim of this requirement is to ensure that AI systems perform correctly and dependably, delivering expected results to users.
  • Accountable and Auditable: Accountability and Auditability in AI reflect the extent to which information about an AI system and its outputs is available to individuals interacting with such a system. AI systems must have clear governance implemented and flag AI-generated outputs.
  • Safe: AI systems must be designed and tested to prevent harm to users and the environment, ensuring their operation does not pose undue risks. This principle involves identifying potential risks and actively working on their mitigation. 
  • Secure & Resilient: AI systems must be secured against unauthorised access and attacks, with robust measures to ensure their resilience and maintain the integrity of data and operations. AI systems must have protocols in place to avoid, respond to, or protect against attacks, and must not reveal LSEG intellectual property to unauthorised users.
  • Interpretable and Explainable: Interpretability and Explainability in AI involve detailing how the underlying (AI) technology works and how the AI model reached a given output. This principle focuses on offering users information which will help them understand the functionality, purpose, and potential limitations of an AI system.
  • Privacy-enhanced: AI systems must prioritise the protection of personal data, ensuring that user privacy is upheld through robust data handling and anonymisation techniques. This principle emphasises the protection of personal and sensitive information by AI systems, and compliance with existing privacy regulation and LSEG governance.
  • Fair with Bias Managed: Developers and users of AI should identify and mitigate biases in AI systems, which can otherwise lead to unfair outcomes. This principle focuses on the need to have fair AI systems that are in line with LSEG’s values and culture.
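
As a further illustration, the principles above could be operationalised as a pre-deployment checklist. The following Python sketch is hypothetical: the review questions merely paraphrase the descriptions above and are not LSEG’s actual review criteria.

```python
# Hypothetical sketch: tracking a system's status against each
# Responsible AI Principle. Question wording is illustrative only,
# paraphrased from the published principle descriptions.
PRINCIPLE_CHECKS = {
    "Accurate and Reliable": "Performs correctly and dependably in varying conditions?",
    "Accountable and Auditable": "Clear governance in place and AI outputs flagged?",
    "Safe": "Potential harms identified and mitigations in progress?",
    "Secure & Resilient": "Protected against unauthorised access and attacks?",
    "Interpretable and Explainable": "Users can understand how outputs were reached?",
    "Privacy-enhanced": "Personal data handled and anonymised per policy?",
    "Fair with Bias Managed": "Biases identified and mitigated?",
}


def outstanding(answers: dict[str, bool]) -> list[str]:
    """Return the principles a proposed system has not yet satisfied."""
    return [p for p in PRINCIPLE_CHECKS if not answers.get(p, False)]


if __name__ == "__main__":
    answers = {p: True for p in PRINCIPLE_CHECKS}
    answers["Fair with Bias Managed"] = False  # bias testing still pending
    print("Outstanding principles:", outstanding(answers))
```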

The way forward for the next evolution of AI policy

Businesses across the economy are already working to implement AI guidance and regulations in their processes. Now, there is an opportunity for the G7, and other fora such as the OECD, to streamline existing public-private initiatives and feed the results back into the policymaking process. We welcome OECD.AI’s efforts to consult industry stakeholders via its expert groups and to continue updating the OECD AI Principles and definitions in line with new technological developments.[3] The collective commitment in the wake of the AI Seoul Summit to develop a global scientific network for AI safety is also an encouraging step. Facilitating this type of multistakeholder collaboration will help to clear the path towards a global approach to responsible AI and mitigate the risk of policy fragmentation.

Pressing ahead at the domestic level without accounting for existing international efforts could be counterproductive, leading to inconsistent policy approaches that undermine industry’s efforts to maximise the benefits and minimise the risks of AI. Businesses require legal certainty when investing resources in AI implementation and need to be able to prioritise AI safety rather than navigate conflicting approaches.

Looking ahead, the G7 should take steps to establish a unified roadmap for AI policy in partnership with the OECD. Simplifying the work conducted across jurisdictions and promoting interoperability for AI governance would provide the building blocks for mutual recognition agreements on AI and assist organisations in harnessing the benefits of AI. Many businesses are proactively seeking to integrate AI safety into their operations, but a rapid proliferation of conflicting guidance and policy frameworks may undermine their efforts. The G7 can help to address this challenge by leading the discussion on harmonising approaches to responsible AI to ensure widespread adoption of governance frameworks that support interoperability and that positively contribute to global economic growth and innovation.

 

1. NIST, Artificial Intelligence Risk Management Framework (AI RMF 1.0), January 2023.

2. European Parliament, “EU AI Act: first regulation on artificial intelligence”.

3. OECD.AI, The OECD Artificial Intelligence Policy Observatory.

