GenAI in Compliance: Insights from the Ripjar Summit London 2024

Ripjar hosted its London Summit on 12 March 2024, welcoming senior compliance professionals to an exclusive networking breakfast in the heart of the City. The Summit focused on the future of AI in compliance, with a panel discussion of the current challenges and opportunities presented by the technology, and a demonstration of Ripjar’s latest AI-powered screening technology.

Following an introduction by Ripjar’s CEO Jeremy Annis, the expert panel discussion was hosted by Gabriel Hopkins, Ripjar’s Chief Product Officer. The panel featured Joel Lange, Executive Vice President and General Manager of Risk and Research at Dow Jones; Brian Swainston, Deputy MLRO and UK Head of FCC Advisory at Fidelity; Maya Braine, Managing Director at FINTRAIL; and Simon McClive, General Manager of Labyrinth Screening at Ripjar.

Panel Discussion: Key Highlights

What worries you and your customers about screening and compliance?

Maya Braine pointed out that while most organisations know what they need to do in terms of compliance, and where they need to improve, they often fall short simply because they lack the time, budget, or staff to push beyond business as usual. That problem extends to improving screening solutions with technology and automation, given the effort required to research and adopt possible new integrations. Maya also referenced the incoming Authorised Push Payment (APP) fraud regulations, framing them as a significant compliance consideration that many companies are currently scrambling to address.

Brian Swainston mentioned a recent Financial Conduct Authority (FCA) “Dear Compliance Officer” (DCO) letter sent to all retail banks, warning about weaknesses in anti-money laundering (AML) compliance measures. He noted that these official letters typically give firms an opportunity to review their own AML solutions, with a focus on drilling down and identifying weaknesses before regulators start pointing them out.

Simon McClive brought up the issue of geopolitical upheaval, not least the “fast-paced sanctions” activity occurring around Russia’s invasion of Ukraine and Chinese regional developments. Simon talked about the emergence of secondary sanctions and sectoral sanctions (such as those targeting microchip development), which require firms to move very fast to achieve compliance. 

Joel Lange echoed Simon’s point about global sanctions screening, but also noted the speed with which a firm’s compliance burden could change. He referenced the invasion of Ukraine and the death of Alexei Navalny as examples of catalysts for rapid change in the sanctions landscape, and pointed to the frequency of important elections (“56 this year”) that would significantly affect financial services firms’ politically exposed person (PEP) screening burden. Joel suggested that the introduction of strict data protection laws, especially in Europe, would further complicate that screening challenge.

What should firms make of recent AML penalty actions, such as the $4.3 billion Binance fine?

Joel and Maya both pointed to the behaviour of cryptocurrency firms as a significant new regulatory focus, and to the seeming inevitability of new crypto compliance rules. Maya mentioned the possible impact of enforcement actions that do not involve a fine, citing a recent sanctions breach by Wise Payments in which the Office of Financial Sanctions Implementation (OFSI) named and shamed the organisation rather than issuing a financial penalty.

What do you make of the industry’s reaction to recent developments in AI technology?

With the emergence of Large Language Model (LLM) platforms, such as ChatGPT, over the past two years, Maya suggested that public and business perception of AI has evolved from a notion that it would “completely change the world” to a realisation that it’s “not going to replace and change what we do”. She pointed to a smaller-scale adoption of AI for specific use cases, such as fraud detection and transaction monitoring.

While advances in generative AI have been impressive, Brian stressed the importance of firms understanding what they “need to do in the background” to get the most out of their new tools, including feeding those tools with high-quality data. Applied correctly, he suggested, AI could make a substantial difference in the fight against money laundering and financial crime.

What are the challenges of using generative AI in compliance?

While 2023 was “the year of generative AI”, Simon McClive noted the importance of applying the technology in coordination with other, more traditional AI solutions – and with an understanding of their collective strengths and weaknesses. He highlighted specific applications of generative AI, including risk identification and content summarisation, but also warned of ‘hallucination’ issues, in which AI platforms fabricate facts and provide false information, and of the ongoing difficulty of distinguishing between similar and exact-match names. Simon suggested that, in order to be effective in compliance, generative AI tools need to be tailored to their firm’s needs and need to be “explainable”, in the sense that compliance teams understand how their outputs are generated.

While generative AI offers dramatic efficiency gains, Joel said that many firms were wary of its potential liabilities and limitations, including the need for proper attribution and provenance. In financial crime contexts, attribution of data is critical, since firms must be able to explain to the authorities how they arrived at their compliance decisions.

Joel took this point further, referencing recent exploratory efforts by global news organisations to use LLMs in content production. He linked that work to the compliance community, which could leverage the technology to create new efficiencies when screening for adverse media, but he also warned of the potential for LLM-generated content to be riddled with inaccurate or even false information. Joel suggested that solving the attribution and accuracy problems of LLM-generated content would benefit everyone, since provenance will ultimately be critical in the context of financial crime investigations.

Echoing Joel’s point, Brian took the perspective of front-line compliance teams working under pressure in complex systems to assess risk and remediate alerts. In that context, the introduction of a new AI compliance tool is often disruptive and, with that in mind, many firms may currently be waiting for a first-mover to emerge, or may only be willing to integrate new technology slowly and incrementally.

How are regulators reacting to the introduction of AI in compliance?

Maya noted that there is currently no single, clear regulatory stance on the application of AI in compliance, but that those regulators that had taken positions had been “broadly positive”. She referenced both the FCA and the Monetary Authority of Singapore (MAS) as examples of regulators that were using AI in fraud and money laundering detection contexts, but said that there was no clear trend of authorities directing financial services firms to use AI in compliance solutions.

With an eye on the horizon, Maya suggested that the EU was probably closest to deploying an AI regulatory framework, following political agreement on the EU AI Act in December 2023. The EU law is risk-based and industry-agnostic, and more robust than current regulation in the UK, which is principles-based and often only reveals its substance after someone is found to have done something wrong.

Joel agreed that regulatory enforcement actions remained the most useful way for firms to learn how to deploy AI in compliance, but noted that many regulators have also set up sandbox programmes and published guidance that firms can use to get on the right track. He suggested that firms should seek to integrate AI as a way to augment their compliance decision-making systems, rather than to replace components of them.

How should firms think about AI model governance?

Simon stressed the importance of good AI model governance, not least in helping firms answer questions like “how is this technology being applied to my data?”, “how is it helping me make decisions?” and “how is it performing over time?” He noted that good AI governance is not just about verifying that the AI technology is doing what its users think it’s doing, but also about being able to explain its application to authorities and justify its deployment to stakeholders.

Maya pointed back to the issue of resources as a critical AI governance consideration. Firms should make sure they have the means to maintain the continued functioning and efficacy of their AI tools or, if they’re using a third-party tool, that they have ongoing access to subject matter experts (SMEs), support, and testing, rather than being directed to a sales team.

How should firms manage the increased compliance focus on adverse media?

Joel described a recent shift in expectations around adverse media screening, with regulators taking “a more overt, specific, and prescriptive approach”. He suggested that firms should pay close attention both to the detail of the guidance issued by regulators and to the subsequent enforcement actions taken. He used the example of guidance released by MAS in 2023, which directed firms to go beyond the use of databases or search engines for adverse media and take a more rigorous approach to customer name screening. Similarly, he pointed to recent enforcement actions in France, and in the US in relation to the Jeffrey Epstein scandal, suggesting that regulators want firms not only to discover adverse media data but also to retain an audit trail for it.

Simon returned focus to the Binance case, suggesting that the dramatic fine has prompted many organisations to reconsider the effectiveness of their own adverse media screening solutions and the kind of criminal risks they are exposed to. He suggested that adverse media risk assessments should extend to the third parties that clients do business with, and that firms should expand the diversity of their adverse media sources to better capture that risk – and to find risk that they wouldn’t otherwise have spotted.

What are your predictions for AI in compliance in 2024 and beyond?

Looking ahead to the next 12 months, Maya suggested that continued regulatory developments in both the EU and the US would go a long way in helping firms adopt AI technology as part of their compliance solutions. That trend would include the incremental adoption of “smaller, simpler AI tools”, and a focus on streamlining screening processes, including adverse media screening.

Brian raised the possibility of an early adopter integrating a specific AI tool into its AML compliance solution – a move that could push the entire industry forward. He also pointed to an increased need for employees with a developed understanding of, and skill in, AI technology, and mentioned the broadening range of free and low-cost AI educational resources, including the UK government’s initiatives to upskill workers.

Simon predicted an increased number of “practical innovations” in the AI market and the continued development and refinement of LLMs. As part of that trend, he suggested that, at some point in 2024, we might see the emergence of “small, domain-specific” LLMs that could drive real value and efficiency for businesses.

Reflecting on the increasing sophistication of LLMs, and the dramatic impact the technology has already had in fields such as physics and biology, Joel suggested that generative AI could usher in a paradigm shift in the next few years – with the compliance industry surely set for huge advancements and significant change.

Presentation: AI Compliance Innovations in Action

In an increasingly complex global risk landscape, firms need every advantage in meeting their adverse media screening challenges. With that in mind, Ripjar CTO Joe Whitfield-Seed gave a presentation on Ripjar’s cutting-edge Labyrinth Screening platform, and the capabilities of its AI Risk Profiles, AI Summaries, and new Compliance Copilot.

AI Risk Profiles

One of the most common challenges associated with adverse media screening is the sheer amount of data that name searches can generate. The analysis of that data is typically time-consuming, while common and similar-sounding names elevate the risk of false positive alerts. 

Part of the Labyrinth Screening platform, AI Risk Profiles is designed to address the efficiency challenges of adverse media screening. AI Risk Profiles blends adverse media screening with structured name screening, to bring together critical AML risks including sanctions and PEPs. 

Joe used the example of a name search for “Ali Jaafar”, noting that there were at least two individuals by that name living in the US and involved in different financial crimes. The name is also extremely common globally, with many similar-sounding near-matches and close spelling variations – which means that adverse media searches can generate tens of thousands of articles, many with no relevance to the target, or with redundant information.
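
To illustrate why common names overwhelm screening queues, here is a minimal fuzzy-matching sketch in Python. It is purely illustrative – the candidate names and the 0.85 threshold are assumptions made for the example, and Ripjar’s actual matching technology is far more sophisticated than a simple edit-distance ratio.

    # Naive fuzzy name matching: shows how a common name like "Ali Jaafar"
    # attracts near-matches that each become an alert for a human to review.
    from difflib import SequenceMatcher

    TARGET = "ali jaafar"

    def similarity(a: str, b: str) -> float:
        """Normalised similarity between two lowercased names (0.0 to 1.0)."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    # Hypothetical names extracted from adverse media articles.
    candidates = ["Ali Jaafar", "Ali Jafar", "Aly Jaffar", "Ali Jaffari", "Alina Jafari"]

    for name in candidates:
        score = similarity(TARGET, name)
        # An assumed screening threshold of 0.85; many near-matches clear it,
        # which is exactly how tens of thousands of articles become alerts.
        print(f"{name:12} score={score:.2f} -> {'ALERT' if score >= 0.85 else 'pass'}")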

Built using a combination of AI models, AI Risk Profiles captures contextual information in order to extract only the most relevant data points about a search target – such as Ali Jaafar – and then uses that information to create individual risk profiles, complete with relationships and connections to other profiles. Each profile is then built out with new information as it becomes available, while irrelevant or duplicate stories are automatically discounted. AI Risk Profiles offers compliance teams a significant efficiency advantage, reducing the volume of data to review by up to 99% while increasing effective recall by 5% or more.
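
As a rough mental model of that profile-building step, consider the sketch below. The field names and the fingerprint-based de-duplication are assumptions made for illustration; they are not Ripjar’s implementation.

    # A toy risk profile that accumulates extracted data points and
    # automatically discounts duplicate stories via a content fingerprint.
    from dataclasses import dataclass, field

    @dataclass
    class RiskProfile:
        name: str
        data_points: set = field(default_factory=set)    # e.g. dates, locations, offences
        seen_articles: set = field(default_factory=set)  # fingerprints of processed stories

        def ingest(self, fingerprint: str, extracted: set) -> bool:
            """Merge an article's facts into the profile; skip duplicate stories."""
            if fingerprint in self.seen_articles:
                return False  # redundant story, discounted automatically
            self.seen_articles.add(fingerprint)
            self.data_points |= extracted
            return True

    profile = RiskProfile(name="Ali Jaafar")
    profile.ingest("article-001", {"location: US", "offence: financial crime"})  # hypothetical facts
    profile.ingest("article-001", {"location: US"})  # same story again: ignored
    print(profile.data_points)

In practice, the hard part is the extraction itself – deciding which data points in an article genuinely refer to the screening target – which is where the contextual AI models come in.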

AI Summaries

Layered on top of AI Risk Profiles is Labyrinth’s AI Summaries, which uses Ripjar’s generative AI model, RiskGPT, to create a clear, concise summary of a specific customer’s adverse media risk. With AI Risk Profiles as a trusted foundation, AI Summaries avoids the hallucination and fabrication issues that affect other LLMs, ensuring accuracy and efficiency during the screening process – and significantly reducing assessment times.
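
The value of that trusted foundation can be seen in a generic grounding pattern, sketched below. This is an assumption about how grounded summarisation works in general, not a description of RiskGPT’s internals: the summariser is only shown vetted facts from the structured risk profile, so it has nothing to hallucinate from.

    # Grounded summarisation: constrain the generative model to vetted facts.
    def build_summary_prompt(profile_name: str, facts: list[str]) -> str:
        """Build a prompt that restricts the model to profile-backed facts."""
        fact_lines = "\n".join(f"- {fact}" for fact in facts)
        return (
            "Summarise the adverse media risk for the customer below.\n"
            "Use ONLY the facts listed. If a detail is not listed, do not mention it.\n\n"
            f"Customer: {profile_name}\n"
            f"Facts:\n{fact_lines}\n"
        )

    # Hypothetical profile facts, for illustration only.
    print(build_summary_prompt("Ali Jaafar", [
        "Subject of US adverse media reporting on financial crime",
        "Profile matched on name, location, and date-of-birth signals",
    ]))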

Compliance Copilot

Newly launched at the London Summit, Compliance Copilot is also built on Ripjar’s RiskGPT LLM, and layered on AI Risk Profiles. 

Harnessing an ensemble of machine learning and AI techniques, including generative AI, Compliance Copilot is fine-tuned to evaluate identity and risk matches in a way that off-the-shelf LLMs cannot. Sitting alongside human compliance teams as a first line of defence, Compliance Copilot uses Ripjar’s best-in-class identity-matching technology and a wide range of additional data signals to automatically assess customer screening results – escalating risks and discounting false positives in order to make the compliance journey significantly more efficient and effective.
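
A highly simplified way to picture that escalate-or-discount behaviour is a three-way triage over match scores, as in the sketch below. The scores, thresholds, and combination rule are all assumptions for illustration – Compliance Copilot’s actual decisioning draws on many more signals.

    # Toy triage: combine an identity-match score with a risk-relevance score.
    def triage(identity_score: float, risk_score: float,
               hi: float = 0.9, lo: float = 0.3) -> str:
        combined = identity_score * risk_score  # both must be strong to escalate
        if combined >= hi:
            return "escalate"      # clear risk match, surface to analysts first
        if combined <= lo:
            return "discount"      # likely false positive, close automatically
        return "human_review"      # grey zone: defer to the compliance team

    print(triage(0.98, 0.95))  # escalate (0.93)
    print(triage(0.40, 0.50))  # discount (0.20)
    print(triage(0.80, 0.70))  # human_review (0.56)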

Compliance Copilot is already getting results. In a test involving an assessment of over 7,000 profile-evidence pairs, a team of 23 human compliance analysts correctly identified 87% of all risks, while discounting 90% of false positives. By comparison, Compliance Copilot found 97% of all risks while discounting 77% of false positives – in a fraction of the time. The results demonstrate the dramatic potential of AI-powered compliance technology – surpassing humans in some areas – and its game-changing value when deployed in conjunction with human expertise.
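
To see what those percentages mean in workload terms, here is a back-of-envelope calculation. The 10% split between genuine risks and false positives is an assumption – the test’s actual breakdown isn’t given above – but the pattern holds either way: Copilot misses far fewer risks, while human analysts discount more false positives, which is why the combination of the two is so effective.

    # Workload implications of the reported test, under an ASSUMED risk split.
    TOTAL = 7000
    TRUE_RISKS = 700               # assumption: 10% of pairs are genuine risks
    FALSE_POS = TOTAL - TRUE_RISKS

    for label, found, discounted in [("analysts", 0.87, 0.90), ("Copilot", 0.97, 0.77)]:
        missed = TRUE_RISKS * (1 - found)            # genuine risks not identified
        remaining = FALSE_POS * (1 - discounted)     # false positives left to review
        print(f"{label}: ~{missed:.0f} risks missed, ~{remaining:.0f} false positives left to review")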
