
4th October 2023


Ripjar Summit Singapore 2023: Challenges and Innovations in Customer Screening

In September 2023, Ripjar’s latest Summit took place at Singapore’s Swissôtel The Stamford, overlooking the city’s scenic Marina Bay. Senior financial compliance professionals from around the world attended the Singapore Summit, which included an exclusive breakfast and networking event, followed by a discussion on the latest innovations, challenges, and trends in customer screening, and a demo of Ripjar’s AI Risk Profiles solution.

Panel Discussion

Ripjar Chief Product Officer Gabriel Hopkins moderated the panel discussion, which included industry tech, data, and compliance experts Josh Heiliczer (PwC Managing Director), Andrew Chow (Synpulse Senior Advisor), and Simon McClive (Ripjar General Manager of Labyrinth Screening).

The panel theme was ‘Innovation in Screening’ – here are some of the key highlights from the discussion.

How is customer screening changing?

Picking up the first discussion point, Andrew Chow highlighted the role of technology in driving fundamental change in modern screening strategies. He talked about the advent of public-private partnerships and the large data sets now available as part of those arrangements. Andrew also spoke about the need to consider the accuracy of data, referencing the recent so-called Fujian gang scandal, which is now thought to involve at least $2.4 billion in laundered money. When the perpetrators of the scandal first arrived in Singapore, it is likely that the banks failed to accurately understand their links to China because of the nationalities on their passports.

Andrew provided an example from his own experience of carrying out a KYC check on a customer holding a St Kitts and Nevis passport. Only after running the check did he discover that the customer was a Chinese national who had obtained the passport as a second nationality. The incident highlighted the importance of data accuracy in customer screening.

Josh Heiliczer echoed the need for screening accuracy. He noted the importance of adverse media screening, alongside other public domain data sources and even social media, in identifying risk signals and addressing the scale of international money laundering. He also observed that much of the accuracy problem can be attributed to false positives: compliance teams can reduce false positives by using secondary identifiers and cross-script matching, which can also improve matching accuracy. Josh highlighted the need for firms to have a risk appetite framework in place, outlining which sources they use to carry out effective screening; those sources may vary depending on the customer's region.
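As a rough illustration of those two ideas, the snippet below combines cross-script name matching with a secondary identifier check. It is a minimal, hypothetical sketch built on Python's standard difflib module and the open-source unidecode transliteration library; the field names and threshold are invented for illustration and do not describe how any panellist's firm or Ripjar actually performs matching.

```python
# Hypothetical illustration only -- not any firm's actual screening logic.
# Cross-script matching: transliterate both names to a common Latin form
# before comparing, then use a secondary identifier (date of birth) to
# discard likely false positives.
from difflib import SequenceMatcher
from unidecode import unidecode  # pip install unidecode


def name_similarity(name_a: str, name_b: str) -> float:
    """Compare two names after transliterating each to plain ASCII."""
    a = unidecode(name_a).lower().strip()
    b = unidecode(name_b).lower().strip()
    return SequenceMatcher(None, a, b).ratio()


def is_probable_match(candidate: dict, customer: dict, threshold: float = 0.85) -> bool:
    """Flag a hit only if the transliterated names are close AND the
    secondary identifier (here, date of birth) does not contradict it."""
    if name_similarity(candidate["name"], customer["name"]) < threshold:
        return False
    # Secondary identifier check: a conflicting DOB rules the match out.
    if candidate.get("dob") and customer.get("dob") and candidate["dob"] != customer["dob"]:
        return False
    return True


# Example: a watchlist entry in Chinese script vs. a romanised customer record.
watchlist_entry = {"name": "张伟", "dob": "1975-03-02"}
customer_record = {"name": "Zhang Wei", "dob": "1975-03-02"}
print(is_probable_match(watchlist_entry, customer_record))
```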

Simon McClive noted that firms increasingly have to deal with rapid changes in their compliance burdens, and used the example of recent Russia sanctions, which saw some organisations forced to upscale their screening solutions to accommodate thousands of new entities in a matter of weeks. Simon pointed out that firms need to have the processes and resources in place to cope with that kind of rapid change, all the while considering factors like new foreign language screening requirements and data quality, to ensure they’re building an accurate picture of the risks they face.  

What lessons can we learn from recent money laundering scandals?

Andrew Chow stressed the importance of banks and financial institutions never assuming that they are "100% protected" from criminal risk, adding that those institutions must understand that new threats will always emerge. In the Fujian case, inflows appear to have come from other countries in Asia, and the banks involved had not adequately identified the source of funds. He suggested that, without the use of the latest technology, the scam might never have been discovered. The subsequent asset recovery effort currently stands at over $2 billion, which is significant by any standard.

Josh Heiliczer commented on the seizure of funds, contrasting the Fujian total with the estimated $275 billion that banks spend globally each year to tackle money laundering, and with the estimated $5 trillion of funds that are laundered. In summary, he suggested that the "cost of laundering is about 1.5% right now," adding that "when I started in this business, it was 20-30%."

Josh went on to talk about the way that money was moved across Asia: international funds transfers conducted over domestic payment networks. For example, individuals or entities seeking to move funds outside of exchange controls, such as China's $50,000 limit, are often matched by laundering gangs with funds of criminal origin (drugs, scams, and so on) that need to be moved into China. Once the criminal funds are in China, they are "washed into goods" such as electronics for export and sale. Josh highlighted the value of bringing transaction screening together with adverse media and other data to get a complete picture of risk. Common Reporting Standard (CRS) data can also add value to a balanced screening approach, and Josh noted that "one of the things that clients don't do well is looking back at client CRS data". He forecast that there will be additional scrutiny of foreign exchange transactions in future.

Adding to those thoughts, Simon McClive raised the importance of native multi-language and multi-script screening capabilities in detecting international money laundering threats, including the need for solutions that operate across dialects and scripts, and deal with issues such as nicknames and aliases. 

How do you get multi-script screening right?

Expanding on his previous points, Simon McClive suggested that firms should focus on the risk-based approach when implementing a multi-script screening solution. In practice, firms must consider how they can refine their adverse media searches in ways that provide value: for example, is it useful to screen in a manner that surfaces Latin American risk when searching for Asia-Pacific entities of interest? Firms should instead seek to balance their screening solutions in a way that provides meaningful, relevant data.

Josh Heiliczer noted that firms can also test their screening solutions against certain risk perspectives. For example, a compliance team might take into account regional factors, such as the presence of clients from a specific region of China, which might prompt a change in screening parameters in the future. Crucially, firms should set out their risk appetite and screening approach, and calibrate accordingly.

What is the role of AI in client screening, and how can people use it successfully?

Simon McClive noted that firms must be able to adapt to the changing capabilities of AI technology. For example, while generative AI is theoretically capable of pulling coherent information from vast amounts of unstructured data, its output is only ever a probabilistic response based on a predictive algorithm. Moreover, generative AI responses are often inaccurate, biased, or fabricated, which limits the technology's application in regulatory compliance contexts and means that firms must be aware of its risks.

With that in mind, Simon noted Ripjar’s use of generative AI as a fast, accurate means to summarise customer risk data and present a concise overview – in turn, supporting quick, accurate analyst decisions, and setting out the provenance of each claim within the summary. He stressed the importance of being able to explain the responses that AI tools generate to authorities and regulators, so that the results can be used in investigatory contexts. 

Andrew Chow also raised the importance of explainability, noting that regulators typically don’t understand the probabilistic approach to customer data. Josh Heiliczer characterised the explainability problem as “significantly difficult” – and noted that firms might ultimately have to go through the “very complicated process” of understanding their data sets in order to be able to use them in regulatory actions.   

Building on those sentiments, Simon McClive suggested that it might be the responsibility of vendors to "lift the lid" on the AI space as a way to promote safe use of the technology. AI innovation is moving rapidly, and firms might be able to avoid some challenges and pitfalls by putting certain controls in place sooner rather than later. Simon remarked that, while AI is currently very useful at showing analysts what they should care about in a given data set, compliance decisions are ultimately still made by human analysts. Ripjar's latest experiments highlight the ways in which new technology is increasingly capable of automating decision-making, as part of a process that is likely, at least initially, to be validated by analysts.

What are the big challenges for AI in adverse media screening?

Simon McClive listed the reliability of adverse media sources as a critical challenge for AI models – and warned specifically about the increasing volume of content created by generative AI models. With this in mind, firms need to be much more discerning about the sources they use as inputs for their screening solutions, and consider how far they trust that content. 

Josh Heiliczer stressed that firms need a way of effectively identifying entities within adverse media sources in order to manage large volumes of false positive alerts. He emphasised the need for high-quality internal and external data coverage as a means of improving those false positive rates. Expanding on the question of quality that Josh raised, Andrew Chow noted the importance of adding context to critical data points as a way to facilitate more effective risk-based decision making.

Presentation: AI Risk Profiles 

The summit also included a presentation on AI Risk Profile technology: an innovative addition to Ripjar’s Labyrinth Screening solution that enhances the depth and detail of risk data, and helps firms make stronger compliance decisions.

Why do we need AI Risk Profiles?

Opening the presentation, Gabriel Hopkins highlighted a number of issues and difficulties related to adverse media screening. He started by echoing the panellists’ earlier warnings about the challenge of false positive alerts – which can make finding true risk like searching for a needle in a haystack. Gabriel also pointed to the need to obtain “the right data” on subject entities, uncovering not just financial risk but, where demanded, other types of risk (such as ESG), without becoming overwhelmed with false positive hits in the process. 

At a global scale, regulators are also beginning to expect more systematic adverse media checks. Jurisdictions like Singapore and the EU already have adverse media screening requirements in place for banks and other institutions, while the US and Canada are not far behind. International AML organisations are helping to build that regulatory momentum, with the Wolfsberg Group addressing adverse media screening specifically in its 2022 Negative News FAQs.

As the adverse media landscape shifts, firms will need to integrate solutions capable of keeping pace with criminal threats and satisfying regulatory responsibilities.

How do AI Risk Profiles work?

AI Risk Profiles offer firms a way of surfacing entity risk quickly and effectively, both from structured data such as sanctions, PEP and watchlist records, and from unstructured data in the form of news articles. Integrating machine learning algorithms, AI Risk Profiles technology extracts only the most relevant risk data for a given entity, across 26 languages, and selects the most important and recent news stories to present a comprehensive, up-to-date picture of risk.

Once collected, the data is presented as part of a unique entity profile. The latest addition to AI Risk Profiles – about to be launched in beta – will see a short, large language model (LLM)-generated summary of risk, complete with citations, added to screening responses. The LLM-generated summary will provide a clear, concise, but comprehensive overview of the associated risks, with links to the relevant news stories to ensure the explainability of that information.
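For readers curious what a summary "with citations" might look like in practice, here is a purely hypothetical Python sketch of how screening hits could be numbered and passed to a model so that each claim in the summary can cite its source article. The function name, prompt wording, and data fields are invented for illustration and do not reflect Ripjar's actual prompts or implementation; the model call itself is omitted, since it depends on the provider.

```python
# Hypothetical sketch of prompt construction for an LLM-generated risk
# summary with citations; not Ripjar's actual implementation or prompts.
def build_risk_summary_prompt(entity_name: str, articles: list[dict]) -> str:
    """Format screening hits so the model can cite each claim by article number."""
    numbered_sources = "\n".join(
        f"[{i + 1}] {a['title']} ({a['date']}): {a['snippet']}"
        for i, a in enumerate(articles)
    )
    return (
        f"Summarise the financial-crime risk associated with '{entity_name}' "
        "in three sentences or fewer. Every claim must end with a citation "
        "marker such as [1] referring to the numbered sources below. If the "
        "sources contain no relevant risk, say so.\n\n"
        f"Sources:\n{numbered_sources}"
    )


# Toy usage: two placeholder articles for an invented entity.
articles = [
    {"title": "Local man charged with fraud", "date": "2019-02-11", "snippet": "..."},
    {"title": "Court convicts fraudster", "date": "2022-08-03", "snippet": "..."},
]
print(build_risk_summary_prompt("Example Name", articles))
```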

AI Risk Profiles in Action

The presentation included demonstrations of profiles for a number of Singaporeans involved in recent money laundering scandals. In one example, the AI Risk Profiles surfaced articles as far back as February 2019, highlighting the risk well before the subject was charged and before a watchlist entry was produced in August 2022. 

Ripjar’s Labyrinth Screening draws on around 6 billion news articles from multiple premium providers and, based on that content, identifies the important stories that contain information relevant to subject entities. With so much data to sort through, AI Risk Profiles works to cluster the relevant information, separating out individual entities (with similar or matched names, for example) in order to simplify analyst review. Relevant data points are assigned to specific profiles in order to add depth and detail, and build a clearer picture of risk.   

The demonstration included an example search for the name "David Cameron". Using AI Risk Profiles technology, firms can draw on rich profiles for entities with a specific name (in this case, David Cameron), where searches might previously have been overwhelmed with stories about the UK's former Prime Minister. In the demonstration example, AI Risk Profiles used contextual information to build a profile for a convicted UK drug dealer named David Cameron, surfacing contextual data such as the subject's birthdate, his place of residence, his brother, and the name of his convicting judge. By contrast, the profile for the former Prime Minister included stories about politics, his association with current Prime Minister Rishi Sunak, his involvement in the Greensill scandal, and so on.

In practice, should a firm deal with a customer named “David Cameron”, AI Risk Profiles would be capable of generating a series of relevant profiles, built out with contextual information, with the risk-relevant stories clearly surfaced. 
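As a rough illustration of the kind of name disambiguation described above, the sketch below groups mentions of the same name into separate profiles whenever they share contextual attributes such as a birthdate, location, or known associate. It is a deliberately simplified, hypothetical example: the field names, attribute values, and greedy grouping logic are invented here and are not Ripjar's actual clustering algorithm.

```python
# Hypothetical sketch of name disambiguation by clustering on shared
# contextual attributes; a simplification, not Ripjar's actual algorithm.
def cluster_mentions(mentions: list[dict]) -> list[list[dict]]:
    """Greedily group mentions of the same name into profiles: a mention
    joins an existing profile if it shares any contextual attribute
    (birthdate, location, associate) with it; otherwise it starts a new one."""
    keys = ("dob", "location", "associate")
    profiles: list[list[dict]] = []
    for mention in mentions:
        attrs = {(k, mention[k]) for k in keys if mention.get(k)}
        for profile in profiles:
            profile_attrs = {(k, m[k]) for m in profile for k in keys if m.get(k)}
            if attrs & profile_attrs:
                profile.append(mention)
                break
        else:
            profiles.append([mention])
    return profiles


# Invented toy data: three articles mentioning "David Cameron".
mentions = [
    {"name": "David Cameron", "location": "Westminster", "associate": "Rishi Sunak"},
    {"name": "David Cameron", "dob": "1985-06-14", "location": "Leeds"},
    {"name": "David Cameron", "associate": "Rishi Sunak"},
]
print(len(cluster_mentions(mentions)))  # two distinct profiles in this toy example
```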

The Advantage of AI Risk Profiles

AI Risk Profiles help firms conduct their adverse media screening process with enhanced speed, accuracy, and confidence. In a real-world case study, a bank set out to identify 75 confirmed identities and, using AI Risk Profiles, dramatically improved its screening review process. Historically, the bank would have looked at around 82,000 articles and identified 85% of the expected matches as part of its screening process. With AI Risk Profiles, it had to review only 685 profiles and surfaced 90% of the expected matches. Elsewhere, a US investment bank integrated AI Risk Profiles as part of its onboarding process, reducing onboarding time from around 14 minutes to around 3 minutes.

In future, and as generative AI evolves as a technology, it may be possible to take AI Risk Profiles further, having the platform make suggestions about how risk decisions might be made – based on the information available on subject entities. 


To learn more about Ripjar’s AI-powered adverse media screening technology, get in touch today.
