Ramesh Menon
- AI-driven fraud is rapidly coming to the forefront as organisations within the payments industry contend with several new threats to identity verification.
- Deepfakes, synthetic identities, and other AI-enabled tactics are being used to bypass traditional fraud protections.
- Effectively managing identity and payments risk requires a comprehensive, real-time approach that spans the customer lifecycle.
AI-powered tools are swiftly finding their way into nearly every business process, promising increased productivity, processing power, and the ability to automate a growing number of repetitive tasks. However, AI-driven fraud is rapidly coming to the forefront as organisations within the payments industry contend with several new threats to identity verification.
Merchants, payment processors, and financial institutions must become aware of these new threats and take appropriate measures to protect their customers and employees. Otherwise, they risk significant losses, reputational damage, and regulatory penalties. Below are two of the latest threats that have emerged since the introduction of widely available AI tools.
Deepfakes: The Latest Trend in Account Takeover
Deepfakes are sophisticated digital manipulations created using AI. They have been successfully used to forge documents, mimic celebrities and politicians, and even create highly believable audio and video content of events that never occurred. In the context of identity verification, deepfakes can be used in social engineering to gain access to existing accounts, to create new accounts under the forged identity of a real person, and to intercept transactions.
Whereas many of these methods were easy to spot even as recently as a year ago, the technology has developed so quickly that these fakes can now effectively deceive both human and machine countermeasures. With the rapid spread of AI tools used to develop all manner of deepfakes, these attacks are becoming increasingly widespread. In just the last year, there was a 1,740% increase in deepfake fraud in North America.[1]
Synthetic Identities: Powered by AI
Synthetic identities have long been among the most difficult challenges in identity verification. However, the combination of stolen personally identifiable information (PII) available on the Dark Web and the virtually unlimited processing power of AI tools has made it easier than ever for fraud operators to create, test, and deploy synthetic identities. AI tools have empowered fraud networks to use synthetic identities en masse, creating new fictitious accounts at unprecedented speed.
Once fraudulent accounts are created, they can be used for countless types of fraud, including the purchase of goods and services, facilitating illegal money transfers, establishing lines of credit, and more. Identifying synthetic identities and stopping fraud operators at the point of enrolment is critical to stopping fraud in its tracks. Once a fake identity has been successfully used to create a new account, it is much harder and costlier to correct.
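To make the point-of-enrolment idea concrete, the sketch below shows what screening for synthetic-identity red flags might look like. The signals, thresholds, and data model here are illustrative assumptions only, not a description of any particular vendor's product: it flags one ID number appearing under multiple names (a hallmark of identities assembled from stolen PII) and high enrolment velocity from a single device.

```python
# Hypothetical sketch: enrolment-time screening for synthetic-identity
# red flags. Signal names, thresholds, and the data model are illustrative
# assumptions, not a real product's detection logic.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class EnrolmentScreener:
    # Maps a national ID number to the set of names it has been paired with.
    names_by_id: dict = field(default_factory=lambda: defaultdict(set))
    # Counts enrolment attempts per device fingerprint.
    attempts_by_device: dict = field(default_factory=lambda: defaultdict(int))

    def screen(self, name: str, id_number: str, device: str) -> list[str]:
        """Return red flags for one enrolment attempt, then record it."""
        flags = []
        # Same ID number previously enrolled under a different name --
        # a classic pattern for identities built from stolen PII.
        if self.names_by_id[id_number] and name not in self.names_by_id[id_number]:
            flags.append("id_reused_with_new_name")
        # Many enrolments from one device suggests automated account creation.
        if self.attempts_by_device[device] >= 3:
            flags.append("device_velocity")
        self.names_by_id[id_number].add(name)
        self.attempts_by_device[device] += 1
        return flags

screener = EnrolmentScreener()
screener.screen("Ann Lee", "ID-1", "dev-A")          # first sighting: no flags
flags = screener.screen("Bob Ray", "ID-1", "dev-A")  # same ID, new name
print(flags)  # ['id_reused_with_new_name']
```

In practice these checks would draw on consortium data and analytics rather than a single in-memory store, but the principle is the same: catch the inconsistencies of a fabricated identity before the account exists.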
Identity Verification in the Age of AI
As businesses increasingly cater to post-pandemic consumer preferences for rapid, digital-first interactions, the speed and potential costs of fraud grow exponentially. The convergence of faster payment adoption, consumer expectations of convenience above all else, and AI-powered tools for fraud have created a new risk environment for any organisation in the financial ecosystem. Understanding these new risks is critical to staying ahead of fraud attempts.
Innovation in financial technology has provided organisations and customers with seamless, faster payments and a far superior user experience. However, fraud and innovation advance hand in hand. Businesses must continuously balance the competing needs of minimising friction for users and preventing fraud – and the solution requires a comprehensive, real-time approach across the customer lifecycle.
Key to this approach are data-driven tools that combat AI-generated artefacts and identities. The three pillars of the approach involve the ability to:
- Trust the identity of the client, via data-based verification to supplement document verification and biometrics;
- Trust the accounts, via bank account verification and account ownership verification; and,
- Trust the interaction, via email, phone, location and other signals, in conjunction with analytics-based insight for anomaly identification.
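One hypothetical way to operationalise the three pillars is to let each contribute a trust score and combine them into a single decision. The weights, thresholds, and decision tiers below are illustrative assumptions, not a prescribed model:

```python
# Hypothetical sketch of the three-pillar idea: each pillar yields a trust
# score in [0, 1], and a weighted combination drives the decision.
# Weights and thresholds are illustrative assumptions only.

PILLAR_WEIGHTS = {
    "identity": 0.40,     # data-based verification, documents, biometrics
    "account": 0.35,      # bank account and ownership verification
    "interaction": 0.25,  # email, phone, location, and anomaly signals
}

def combined_trust(scores: dict[str, float]) -> float:
    """Weighted average of per-pillar trust scores (each in [0, 1])."""
    return sum(PILLAR_WEIGHTS[p] * scores[p] for p in PILLAR_WEIGHTS)

def decide(scores: dict[str, float],
           approve_at: float = 0.8, review_at: float = 0.5) -> str:
    """Map the combined score to approve / manual review / decline."""
    trust = combined_trust(scores)
    if trust >= approve_at:
        return "approve"
    if trust >= review_at:
        return "review"
    return "decline"

# A strong identity with anomalous interaction signals lands in manual
# review: 0.4*0.9 + 0.35*0.8 + 0.25*0.1 = 0.665.
print(decide({"identity": 0.9, "account": 0.8, "interaction": 0.1}))  # review
```

The design point is that no single pillar decides alone: a convincing deepfake may pass a document check yet still fail on account ownership or interaction anomalies.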
Focusing on these three pillars can help organisations protect themselves against emerging threats in the most robust and comprehensive way.
Legal Disclaimer
Republication or redistribution of LSE Group content is prohibited without our prior written consent.
Copyright © 2024 London Stock Exchange Group. All rights reserved.
The content of this publication is provided by London Stock Exchange Group plc, its applicable group undertakings and/or its affiliates or licensors (the “LSE Group” or “We”) exclusively.
Neither We nor our affiliates guarantee the accuracy of or endorse the views or opinions given by any third party content provider, advertiser, sponsor or other user. We may link to, reference, or promote websites, applications and/or services from third parties. You agree that We are not responsible for, and do not control such non-LSE Group websites, applications or services.
The content of this publication is for informational purposes only. All information and data contained in this publication is obtained by LSE Group from sources believed by it to be accurate and reliable. Because of the possibility of human and mechanical error as well as other factors, however, such information and data are provided "as is" without warranty of any kind. You understand and agree that this publication does not, and does not seek to, constitute advice of any nature. You may not rely upon the content of this document under any circumstances and should seek your own independent legal, tax or investment advice or opinion regarding the suitability, value or profitability of any particular security, portfolio or investment strategy. Neither We nor our affiliates shall be liable for any errors, inaccuracies or delays in the publication or any other content, or for any actions taken by you in reliance thereon. You expressly agree that your use of the publication and its content is at your sole risk.
To the fullest extent permitted by applicable law, LSE Group, expressly disclaims any representation or warranties, express or implied, including, without limitation, any representations or warranties of performance, merchantability, fitness for a particular purpose, accuracy, completeness, reliability and non-infringement. LSE Group, its subsidiaries, its affiliates and their respective shareholders, directors, officers employees, agents, advertisers, content providers and licensors (collectively referred to as the “LSE Group Parties”) disclaim all responsibility for any loss, liability or damage of any kind resulting from or related to access, use or the unavailability of the publication (or any part of it); and none of the LSE Group Parties will be liable (jointly or severally) to you for any direct, indirect, consequential, special, incidental, punitive or exemplary damages, howsoever arising, even if any member of the LSE Group Parties are advised in advance of the possibility of such damages or could have foreseen any such damages arising or resulting from the use of, or inability to use, the information contained in the publication. For the avoidance of doubt, the LSE Group Parties shall have no liability for any losses, claims, demands, actions, proceedings, damages, costs or expenses arising out of, or in any way connected with, the information contained in this document.
LSE Group is the owner of various intellectual property rights ("IPR”), including but not limited to, numerous trademarks that are used to identify, advertise, and promote LSE Group products, services and activities. Nothing contained herein should be construed as granting any licence or right to use any of the trademarks or any other LSE Group IPR for any purpose whatsoever without the written permission or applicable licence terms.