AI continues to sweep across numerous industries, offering ever-evolving tools to enhance speed and efficiency. The potential uses and applications of this technology are vast, but can generative AI streamline and optimise the enhanced due diligence (EDD) process without inadvertently increasing risk?
- EDD is a vital additional risk management step where heightened risk is identified and reviewed.
- AI can optimise the EDD research process, but it is essential to adopt a low-risk approach when using this technology in the due diligence space.
- A low-risk approach must retain a core element of human oversight.
Why EDD matters
Every customer or third-party relationship has the potential to introduce risk to your organisation. Due diligence to identify possible financial crime and sanctions risk – and to understand exactly who you’re doing business with – is therefore an imperative step in every business relationship.
Where heightened risk is suspected or detected, initial screening may not be enough, and EDD becomes necessary. EDD reports deliver vital information about your most critical or risky relationships and can be crucial when deciding whether to engage with a new customer or third party.
Specific instances that should trigger EDD include:
- Where the customer or third party is high risk
- Where there is high value attached to the relationship
- Where initial findings suggest that red flags may be present
Robust EDD offers additional insights that typical initial screening does not. These reports include risk-relevant information gathered from a variety of credible sources and deliver detailed background information and contextualised insights on subjects.
EDD, when executed well, helps you make informed decisions about engaging with new customers or third parties while meeting compliance and regulatory obligations.
However, the EDD research process is time-consuming and resource-intensive. Rushed reports raise the risk of quality errors that could result in considerable damage to your company. It is therefore important to consider how innovation can be harnessed to conduct high-quality EDD with speed and efficiency.
AI offers a viable solution, but is there a risk attached to using AI in EDD?
Adopting a low-risk approach
All new technologies come with a degree of risk – and AI is no exception. There are, understandably, several concerns around the safe application of AI to business processes in general, and to EDD in particular.
Adopting a low-risk approach can give EDD researchers the tools they need to work smarter and faster and deliver more robust EDD reports quickly. But what does such an approach look like?
Essentially, a low-risk approach means taking steps to avoid common pitfalls – above all AI hallucinations and bias, which are among the biggest risks when using AI in due diligence.
Hallucinations occur when a generative AI model produces incorrect information that nonetheless appears legitimate.
AI bias refers to outputs that are skewed by the original training data or the AI algorithm, leading to distorted results and potentially harmful outcomes. Bias can creep into algorithms in multiple ways, for example, via training data, which can include skewed human decisions or reflect historical or social inequities. Bias can also be introduced in the AI model via flawed data sampling.
So how can you streamline and optimise the EDD process using AI, without introducing more risk?
The safest approach is “grounding”, which means restricting the data the AI is allowed to work from. In the EDD process, the input provided to the AI writing tool can be grounded to research completed by an analyst using their existing research methodology. The AI then simply uses the collected data – which has been quality-assured by human analysts – to create a summary.
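To make the grounding idea concrete, here is a minimal sketch of how a summarisation prompt could be restricted to analyst-verified findings. It is illustrative only and assumes nothing about any particular EDD platform: the Finding structure, the build_grounded_prompt function and the placeholder model call are hypothetical names introduced for this example.

```python
from dataclasses import dataclass


@dataclass
class Finding:
    """A single research finding collected and quality-assured by a human analyst."""
    source: str      # where the analyst found the information
    summary: str     # the analyst's own wording of the finding
    verified: bool   # set to True only once the analyst has quality-assured it


def build_grounded_prompt(subject: str, findings: list[Finding]) -> str:
    """Build a summarisation prompt that contains only analyst-verified findings.

    The model is told to use nothing beyond the material supplied here, so the
    draft stays grounded in data a human has already checked.
    """
    verified = [f for f in findings if f.verified]
    if not verified:
        raise ValueError("No analyst-verified findings to summarise.")
    evidence = "\n".join(f"- ({f.source}) {f.summary}" for f in verified)
    return (
        f"Summarise the enhanced due diligence findings for {subject}.\n"
        "Use ONLY the findings listed below. Do not add, infer or assume any "
        "other information. If the findings do not cover a topic, say so.\n\n"
        f"Findings:\n{evidence}"
    )


# Hypothetical usage - 'generate' stands in for whichever generative model is used:
# draft_summary = generate(build_grounded_prompt("Example Ltd", analyst_findings))
```

Because the prompt carries only material an analyst has already quality-assured, and explicitly instructs the model not to add anything else, the resulting summary stays within the boundaries of the verified research.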
The core element: human oversight
The key ingredient for minimising risk is human oversight. Where generative AI is applied simply as a tool to augment the writing process, and analysts retain full control and accountability, risk remains low.
This approach optimises the more basic elements of report construction and frees analysts to concentrate their attention on areas of greater value-add, such as analysing the collected data and distilling the insights that drive better decisions.
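The sketch below illustrates what that retained control could look like as a simple approval gate: an AI-drafted summary cannot be released until a named analyst has reviewed and approved it. The DraftReport, analyst_review and release names are hypothetical and are only meant to show the human-in-the-loop pattern described above, not any specific workflow.

```python
from dataclasses import dataclass, field


@dataclass
class DraftReport:
    subject: str
    ai_summary: str                                   # machine-generated draft text
    approved: bool = False                            # flips only on analyst sign-off
    reviewer_notes: list[str] = field(default_factory=list)


def analyst_review(draft: DraftReport, reviewer: str,
                   corrections: list[str], approve: bool) -> DraftReport:
    """Record the analyst's review; only the analyst decides whether to approve."""
    draft.reviewer_notes.append(f"Reviewed by {reviewer}")
    draft.reviewer_notes.extend(corrections)
    draft.approved = approve
    return draft


def release(draft: DraftReport) -> str:
    """Only analyst-approved drafts ever leave the workflow."""
    if not draft.approved:
        raise PermissionError("Draft has not been approved by an analyst.")
    return draft.ai_summary
```

The point of the design is that the model can only ever produce a draft; review, approval and release remain analyst actions, so accountability stays with the human.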
The benefits are many: combining generative AI with existing research capabilities in the report writing process boosts efficiency across an array of day-to-day tasks, such as creating executive summaries. This approach also helps to improve writing quality and consistency across reports.
At LSEG we have adopted a low-risk approach to integrating AI into our existing EDD report processes. This conservative use of AI augments and optimises our in-house EDD capabilities, which are supported by analysts speaking 60+ languages across 200+ jurisdictions and producing tailored reports to meet business-critical needs.
By leveraging this powerful combination of leading technology and trusted human intelligence, we deliver 30,000 EDD reports a year, enabling businesses to understand and mitigate dynamic risk without slowing the pace of business.
Legal Disclaimer
Republication or redistribution of LSE Group content is prohibited without our prior written consent.
The content of this publication is for informational purposes only and has no legal effect, does not form part of any contract, does not, and does not seek to constitute advice of any nature and no reliance should be placed upon statements contained herein. Whilst reasonable efforts have been taken to ensure that the contents of this publication are accurate and reliable, LSE Group does not guarantee that this document is free from errors or omissions; therefore, you may not rely upon the content of this document under any circumstances and you should seek your own independent legal, investment, tax and other advice. Neither We nor our affiliates shall be liable for any errors, inaccuracies or delays in the publication or any other content, or for any actions taken by you in reliance thereon.
Copyright © 2024 London Stock Exchange Group. All rights reserved.
The content of this publication is provided by London Stock Exchange Group plc, its applicable group undertakings and/or its affiliates or licensors (the “LSE Group” or “We”) exclusively.
Neither We nor our affiliates guarantee the accuracy of or endorse the views or opinions given by any third party content provider, advertiser, sponsor or other user. We may link to, reference, or promote websites, applications and/or services from third parties. You agree that We are not responsible for, and do not control such non-LSE Group websites, applications or services.
The content of this publication is for informational purposes only. All information and data contained in this publication is obtained by LSE Group from sources believed by it to be accurate and reliable. Because of the possibility of human and mechanical error as well as other factors, however, such information and data are provided "as is" without warranty of any kind. You understand and agree that this publication does not, and does not seek to, constitute advice of any nature. You may not rely upon the content of this document under any circumstances and should seek your own independent legal, tax or investment advice or opinion regarding the suitability, value or profitability of any particular security, portfolio or investment strategy. Neither We nor our affiliates shall be liable for any errors, inaccuracies or delays in the publication or any other content, or for any actions taken by you in reliance thereon. You expressly agree that your use of the publication and its content is at your sole risk.
To the fullest extent permitted by applicable law, LSE Group expressly disclaims any representations or warranties, express or implied, including, without limitation, any representations or warranties of performance, merchantability, fitness for a particular purpose, accuracy, completeness, reliability and non-infringement. LSE Group, its subsidiaries, its affiliates and their respective shareholders, directors, officers, employees, agents, advertisers, content providers and licensors (collectively referred to as the “LSE Group Parties”) disclaim all responsibility for any loss, liability or damage of any kind resulting from or related to access, use or the unavailability of the publication (or any part of it); and none of the LSE Group Parties will be liable (jointly or severally) to you for any direct, indirect, consequential, special, incidental, punitive or exemplary damages, howsoever arising, even if any member of the LSE Group Parties is advised in advance of the possibility of such damages or could have foreseen any such damages arising or resulting from the use of, or inability to use, the information contained in the publication. For the avoidance of doubt, the LSE Group Parties shall have no liability for any losses, claims, demands, actions, proceedings, damages, costs or expenses arising out of, or in any way connected with, the information contained in this document.
LSE Group is the owner of various intellectual property rights (“IPR”), including but not limited to, numerous trademarks that are used to identify, advertise, and promote LSE Group products, services and activities. Nothing contained herein should be construed as granting any licence or right to use any of the trademarks or any other LSE Group IPR for any purpose whatsoever without the written permission or applicable licence terms.