Jenny Cosco
Sabrina Feng
- Policymakers have taken welcome steps to agree on and communicate what is required to make responsible, safe and trustworthy use of AI. Businesses need time to take this guidance on board, customise their internal processes and ensure uniform responsible AI development and deployment across jurisdictions.
- To avoid fragmentation in approaches, the G7 and policymakers more broadly should take stock of progress, and harmonise and streamline frameworks before the next phase of policy development, paving the way towards a unified roadmap for responsible AI policy.
How policymakers are helping to manage AI risks and opportunities
On 13 June, G7 Leaders will arrive in Apulia, Italy for this year’s annual Summit. Without a doubt, much of their focus will be on a collective response to the increasingly tense geopolitical environment. With growing regional conflicts, pervasive economic instability, and the omnipresent risk of further supply chain disruptions, coordination between nations is as important as ever. The need for international cooperation also applies to cross-cutting technological advancements that are leading to tectonic shifts, most prominently in artificial intelligence (AI).
AI has unprecedented potential to transform our societies and economies, including the financial services industry that LSEG serves. Introducing AI functionalities into platforms and workflows can significantly increase productivity and enable the creation of more valuable insights to inform strategies and decision-making. AI has the potential to open up access to the digital economy and, with trusted data, improve people's livelihoods.
But like many other transformative technological developments, AI does not come without risks. In recent years, policymakers have taken steps to agree on what's required to make responsible, safe and trustworthy use of this technology. The Bletchley Declaration on AI safety, the G7's International Guiding Principles and voluntary Code of Conduct on AI, the OECD's updated AI Principles, the White House's Executive Order on AI and the EU's AI Act are just a few examples.
There is now an array of valuable guidance in place. This enables businesses to consider the risks and opportunities of AI and take the appropriate steps to develop their own sophisticated approach. But businesses need time to take this guidance on board, customise their internal processes and ensure uniform responsible AI development and deployment across jurisdictions.
Now, governments should pause to take stock of and build on existing initiatives, coordinate approaches, and streamline work where possible. The onus is not just on policymakers to shape the future of AI: maximising the benefits of AI requires businesses to demonstrate clearly that they will use it responsibly. Doing so will enable businesses to scale their governance and work closely with policymakers on the development of practical AI regulation that ensures the safety and privacy of users and consumers.
LSEG’s Responsible AI Principles
LSEG has developed its own set of Responsible AI Principles, which draw on existing frameworks to guide our approach to AI adoption. Through our Principles, we aim to enable the safe and responsible use of AI for our employees and customers to accelerate growth and enhance delivery capabilities throughout the financial services lifecycle.
One of the tools that LSEG has used to inform this effort is the US National Institute of Standards and Technology's AI Risk Management Framework.[1] Released in January 2023, the framework is designed to be updated as AI technologies evolve. Its four "Core" functions for addressing risks in AI systems, Govern, Map, Measure and Manage, are overlaid onto LSEG's own AI risk management framework. We are also considering how "high risk" is defined across emerging regulations around the world, including the EU's AI Act,[2] to inform our approach to responsible AI. We now have internal controls that help us identify and address different levels of risk from the earliest stages of development.
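To make these ideas concrete, the sketch below shows one hypothetical way that risk tiers and the four NIST Core functions could be encoded as an internal control mapping. The `RiskTier` categories, the `AIUseCase` structure, the control lists and the `required_controls` logic are all illustrative assumptions for this article, not a description of LSEG's actual framework.

```python
# A minimal, hypothetical sketch of an internal AI risk-control mapping,
# loosely inspired by the NIST AI RMF Core functions (Govern, Map, Measure,
# Manage) and EU AI Act-style risk tiers. Illustrative only.

from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Illustrative risk tiers, modelled on the EU AI Act's categories."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"


@dataclass
class AIUseCase:
    name: str
    description: str
    tier: RiskTier


# Hypothetical controls keyed to the four NIST Core functions; a real
# framework would define these per organisation and per jurisdiction.
CONTROLS_BY_FUNCTION = {
    "Govern": ["assign an accountable owner", "document the intended use"],
    "Map": ["identify affected users", "record data provenance"],
    "Measure": ["test for accuracy and bias", "log model performance"],
    "Manage": ["define an incident response plan", "schedule periodic review"],
}


def required_controls(use_case: AIUseCase) -> list[str]:
    """Return the controls a use case must satisfy before deployment.

    In this illustrative scheme, higher-risk tiers trigger every control,
    while minimal-risk use cases require only baseline governance.
    """
    if use_case.tier is RiskTier.UNACCEPTABLE:
        raise ValueError(f"{use_case.name}: prohibited use case")
    if use_case.tier is RiskTier.MINIMAL:
        return CONTROLS_BY_FUNCTION["Govern"]
    return [c for controls in CONTROLS_BY_FUNCTION.values() for c in controls]


if __name__ == "__main__":
    assistant = AIUseCase(
        name="client-facing assistant",
        description="answers product queries",
        tier=RiskTier.LIMITED,
    )
    for control in required_controls(assistant):
        print(f"[ ] {control}")
```

A mapping of this kind lets each proposed use case be screened and assigned a checklist before development work proceeds, which is how controls can take effect from the earliest stages.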
LSEG’s Responsible AI Principles
- Accurate and Reliable: Accuracy and reliability in AI mean that an AI system performs consistently, in line with its intended function and under varying conditions. The aim of this requirement is to ensure that AI systems perform correctly and dependably, delivering the expected results to users.
- Accountable and Auditable: Accountability and auditability in AI reflect the extent to which information about an AI system and its outputs is available to the individuals interacting with it. AI systems must operate under clear governance and flag AI-generated outputs as such (see the sketch after this list).
- Safe: AI systems must be designed and tested to prevent harm to users and the environment, ensuring their operation does not pose undue risks. This principle involves identifying potential risks and actively working on their mitigation.
- Secure and Resilient: AI systems must be secured against unauthorised access and attacks, with robust measures to ensure their resilience and maintain the integrity of data and operations. Protocols must be in place to prevent, withstand and respond to attacks, and AI systems must not reveal LSEG intellectual property to unauthorised users.
- Interpretable and Explainable: Interpretability and explainability in AI involve detailing how the underlying AI technology works and how an AI model reached a given output. This principle focuses on offering users information that helps them understand the functionality, purpose and potential limitations of an AI system.
- Privacy-enhanced: AI systems must prioritise the protection of personal data, ensuring that user privacy is upheld through robust data handling and anonymisation techniques. This principle emphasises the protection of personal and sensitive information by AI systems, and compliance with existing privacy regulation and LSEG governance.
- Fair with Bias Managed: Developers and users of AI should identify and mitigate biases in AI systems, which can otherwise lead to unfair outcomes. This principle focuses on the need to have fair AI systems that are in line with LSEG’s values and culture.
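As one illustration of how a principle such as "Accountable and Auditable" might translate into engineering practice, the hypothetical sketch below tags every model output with provenance metadata so that AI-generated content is flagged and remains auditable downstream. The `ProvenanceRecord` fields and the `tag_output` helper are our own illustrative constructions, not a description of LSEG's systems.

```python
# Hypothetical sketch: wrapping model outputs with provenance metadata so
# that AI-generated content is always flagged and auditable downstream.
# Field names and structure are illustrative assumptions only.

import hashlib
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone


@dataclass
class ProvenanceRecord:
    model_id: str        # which model produced the output
    prompt_digest: str   # hash of the input, so the prompt itself is not stored
    generated_at: str    # ISO 8601 timestamp
    ai_generated: bool   # the explicit flag the principle calls for


def tag_output(model_id: str, prompt: str, output: str) -> dict:
    """Attach an audit record to a model output before it leaves the system."""
    record = ProvenanceRecord(
        model_id=model_id,
        prompt_digest=hashlib.sha256(prompt.encode()).hexdigest(),
        generated_at=datetime.now(timezone.utc).isoformat(),
        ai_generated=True,
    )
    return {"content": output, "provenance": asdict(record)}


if __name__ == "__main__":
    tagged = tag_output("summariser-v2", "Summarise Q1 results", "Revenue rose...")
    print(json.dumps(tagged, indent=2))
```

Hashing the prompt rather than storing it keeps an audit trail without retaining potentially sensitive input data, which is one way a single design choice can serve both the auditability and privacy principles at once.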
The way forward for the next evolution of AI policy
Businesses across the economy are already working to implement AI guidance and regulations in their processes. Now, there is an opportunity for the G7, and other fora such as the OECD, to streamline existing public-private initiatives and feed the results back into the policymaking process. We welcome the OECD.AI Policy Observatory's efforts to consult industry stakeholders via its expert groups and to continue updating the OECD AI Principles and definitions in line with new technological developments.[3] The collective commitment in the wake of the AI Seoul Summit to develop a global scientific network for AI safety is also an encouraging step. Facilitating this type of multistakeholder collaboration will help to clear the path towards a global approach to responsible AI and mitigate the risk of policy fragmentation.
Pressing ahead at the domestic level without accounting for existing international efforts could be counterproductive, leading to inconsistent policy approaches that undermine industry's efforts to maximise the benefits and minimise the risks of AI. Businesses require legal certainty when investing in resources for AI implementation, and they need to be able to prioritise AI safety rather than navigate conflicting approaches.
Looking ahead, the G7 should take steps to establish a unified roadmap for AI policy in partnership with the OECD. Streamlining the work conducted across jurisdictions and promoting interoperability in AI governance would provide the building blocks for mutual recognition agreements on AI and help organisations to harness the benefits of the technology. Many businesses are proactively seeking to integrate AI safety into their operations, but a rapid proliferation of conflicting guidance and policy frameworks may undermine their efforts. The G7 can help to address this challenge by leading the discussion on harmonising approaches to responsible AI, ensuring widespread adoption of governance frameworks that support interoperability and contribute positively to global economic growth and innovation.
1. NIST, Artificial Intelligence Risk Management Framework (AI RMF 1.0), January 2023.
2. European Parliament, "EU AI Act: first regulation on artificial intelligence".
3. OECD.AI, The OECD Artificial Intelligence Policy Observatory.
Legal Disclaimer
Republication or redistribution of LSE Group content is prohibited without our prior written consent.
Copyright © 2024 London Stock Exchange Group. All rights reserved.
The content of this publication is provided by London Stock Exchange Group plc, its applicable group undertakings and/or its affiliates or licensors (the “LSE Group” or “We”) exclusively.
Neither We nor our affiliates guarantee the accuracy of or endorse the views or opinions given by any third party content provider, advertiser, sponsor or other user. We may link to, reference, or promote websites, applications and/or services from third parties. You agree that We are not responsible for, and do not control such non-LSE Group websites, applications or services.
The content of this publication is for informational purposes only. All information and data contained in this publication is obtained by LSE Group from sources believed by it to be accurate and reliable. Because of the possibility of human and mechanical error as well as other factors, however, such information and data are provided "as is" without warranty of any kind. You understand and agree that this publication does not, and does not seek to, constitute advice of any nature. You may not rely upon the content of this document under any circumstances and should seek your own independent legal, tax or investment advice or opinion regarding the suitability, value or profitability of any particular security, portfolio or investment strategy. Neither We nor our affiliates shall be liable for any errors, inaccuracies or delays in the publication or any other content, or for any actions taken by you in reliance thereon. You expressly agree that your use of the publication and its content is at your sole risk.
To the fullest extent permitted by applicable law, LSE Group expressly disclaims any representation or warranties, express or implied, including, without limitation, any representations or warranties of performance, merchantability, fitness for a particular purpose, accuracy, completeness, reliability and non-infringement. LSE Group, its subsidiaries, its affiliates and their respective shareholders, directors, officers, employees, agents, advertisers, content providers and licensors (collectively referred to as the “LSE Group Parties”) disclaim all responsibility for any loss, liability or damage of any kind resulting from or related to access, use or the unavailability of the publication (or any part of it); and none of the LSE Group Parties will be liable (jointly or severally) to you for any direct, indirect, consequential, special, incidental, punitive or exemplary damages, howsoever arising, even if any member of the LSE Group Parties is advised in advance of the possibility of such damages or could have foreseen any such damages arising or resulting from the use of, or inability to use, the information contained in the publication. For the avoidance of doubt, the LSE Group Parties shall have no liability for any losses, claims, demands, actions, proceedings, damages, costs or expenses arising out of, or in any way connected with, the information contained in this document.
LSE Group is the owner of various intellectual property rights (“IPR”), including but not limited to, numerous trademarks that are used to identify, advertise, and promote LSE Group products, services and activities. Nothing contained herein should be construed as granting any licence or right to use any of the trademarks or any other LSE Group IPR for any purpose whatsoever without the written permission or applicable licence terms.