
2nd May 2024


FCA AI Update 2024: Key Takeaways

On 22 April 2024, the Financial Conduct Authority (FCA) published its AI Update, elaborating on its approach to the regulation of AI in UK financial markets and on its intentions for the future. The 2024 Update follows the UK government’s publication of its “pro-innovation” policy proposals for AI regulation and its principles-based AI guidance for UK regulators. In the introduction to the Update, the FCA underlined the government’s regulatory objectives, emphasising the importance of balancing new technology implementation, innovation, and competitiveness with the ongoing safety of customers and the UK’s financial system.

As the AI regulatory landscape continues to evolve, firms in the UK and beyond must understand the compliance risks that they face, and how regulator expectations are changing. Stay on top of the latest FCA AI developments with these key takeaways from the 2024 Update.

FCA AI Regulation So Far

The AI Update included a summary of the FCA’s work on AI regulation to date, mentioning collaborations with other regulatory bodies and the publication of joint documents with the Bank of England, including the 2022 AI Discussion Paper (DP5/22), the 2023 Feedback Statement (FS2/23), and surveys on machine learning in UK financial services.

The FCA’s work to date has focused on the benefits and risks of AI in relation to its objectives of protecting consumers and UK market integrity from financial crime, and on how existing regulatory requirements apply to the use of AI in financial services. As the application of AI expands, the FCA intends to “monitor the potential macro effects” on specific areas of financial concern, including cybersecurity and data security.

To that end, and in order to promote “beneficial innovation” with AI, the FCA has deployed a number of testing initiatives, including the Regulatory Sandbox, the Digital Sandbox, and TechSprint events.

The FCA’s Approach to AI in 2024 and Beyond

In setting out its current approach to regulating AI in the UK financial markets, the FCA emphasised the importance of “safe and responsible use” as a foundation for beneficial innovation. According to the Update, that approach has delivered benefits for UK consumers and businesses alike, including better products and services, more protection for consumers, and broader opportunities for start-ups. 

The advance of technological capabilities has also helped firms better address financial crime risks. With that in mind, the FCA AI Update included the following key fincrime points.

Risk Mitigation

The FCA pointed out that it does not “mandate or prohibit” specific technologies, but works to “identify and mitigate risk” in the development and application of AI. It aims to apply a principle of proportionality in discharging its functions, weighing the impact of new restrictions against their expected benefits. The Update stated that many AI risks “are not necessarily unique to AI itself” and so can be managed within existing UK regulatory frameworks. The FCA set out the guidance it considers most relevant to mitigating AI risk in the UK in its AI Discussion Paper (see above).

AI as a Fincrime Tool

The 2024 Update sets out the FCA’s intention to become a “digital and data-led regulator”. With that in mind, it is actively exploring the development and use of AI-supported tools with fincrime applications, including:

Synthetic data: The FCA has developed an in-house synthetic data tool for testing sanctions screening solutions, which it says has “transformed” its ability to assess the effectiveness of firms’ internal sanctions screening systems. The FCA describes synthetic data as having the potential to support beneficial innovation and to “address important financial services public policy issues”, including fraud and financial crime, and it has set up a Synthetic Data Expert Group to provide insights into the technology. (A simplified illustration of this kind of testing appears in the first sketch after this list.)

Machine learning: Advances in AI have facilitated more sophisticated machine learning systems, capable of identifying, reviewing, and managing financial crime. The FCA has been actively using machine learning to fight scam websites, and has also been deploying the technology as part of advanced analytics strategies to detect other forms of market abuse (the second sketch below illustrates a classifier of this kind). Going forward, the regulator wants to support the development of machine learning-enabled market surveillance solutions, trained on extensive FCA datasets and tested on its Digital Sandbox platform.

Complex market abuse: The analytical possibilities of AI are helping the FCA “identify more complex types of market abuse”, including difficult-to-detect methodologies such as cross-market manipulation. Beyond AI-supported detection systems picking up instances of market abuse more accurately, the FCA believes that the development and integration of anomaly detection (see the third sketch below) could fundamentally transform the way the regulator carries out its surveillance.
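
To make the synthetic data approach concrete, here is a minimal Python sketch of the general technique: generate perturbed variants of watchlist names, then measure how many of them a screening function still catches. The seed names, the `perturb` helper, and the naive `screen_name` matcher are all invented for illustration; this is not the FCA’s in-house tool.

```python
# Minimal, hypothetical sketch: use synthetic name variants to test how well a
# sanctions screening function performs. The seed list and matcher below are
# stand-ins for illustration, not the FCA's actual tool.
import difflib
import random

SEED_NAMES = ["Ivan Petrov", "Acme Trading LLC"]  # invented watchlist entries


def perturb(name: str, rng: random.Random) -> str:
    """Generate one synthetic variant by swapping two adjacent characters."""
    chars = list(name)
    i = rng.randrange(len(chars) - 1)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)


def screen_name(candidate: str, watchlist: list[str], threshold: float = 0.85) -> bool:
    """Naive fuzzy matcher: flag if the candidate closely resembles any entry."""
    return any(
        difflib.SequenceMatcher(None, candidate.lower(), entry.lower()).ratio() >= threshold
        for entry in watchlist
    )


def variant_recall(n_variants: int = 1000, seed: int = 42) -> float:
    """Proportion of synthetic variants that the matcher still catches."""
    rng = random.Random(seed)
    caught = sum(
        screen_name(perturb(rng.choice(SEED_NAMES), rng), SEED_NAMES)
        for _ in range(n_variants)
    )
    return caught / n_variants


print(f"Synthetic-variant recall: {variant_recall():.1%}")
```

Because the variants are generated rather than drawn from real customer records, a tester can scale the exercise without handling sensitive data, which is one of the advantages the FCA attributes to synthetic data.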
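A scam-website detector of the kind mentioned above can be framed as a supervised text-classification problem. The sketch below uses a TF-IDF plus logistic regression pipeline; the page snippets and labels are invented, and a real system would train on large labelled corpora of site content and metadata.

```python
# Hypothetical sketch of a scam-website classifier: a supervised text pipeline
# (TF-IDF features + logistic regression). The page snippets and labels are
# invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

pages = [
    "FCA authorised firm, read our terms and regulatory disclosures",
    "official annual report and audited accounts available here",
    "guaranteed 300% returns, send crypto now, limited time offer",
    "claim your compensation instantly, just share your bank login",
]
labels = [0, 0, 1, 1]  # 0 = legitimate, 1 = scam

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(pages, labels)

# Score an unseen page; a surveillance team would triage high scores for review.
new_page = "risk-free investment with guaranteed returns, act now"
print("Scam probability:", clf.predict_proba([new_page])[0, 1])
```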
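Finally, cross-market surveillance is often approached as an unsupervised anomaly detection problem: rather than defining abusive behaviour up front, a model flags accounts whose trading features look unlike the rest of the population. The sketch below uses scikit-learn’s IsolationForest; the feature set and the synthetic data are assumptions for illustration, not the FCA’s surveillance model.

```python
# Hypothetical sketch of anomaly detection over cross-market trading features.
# The feature set and synthetic data below are assumptions for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Per-account features: [order-to-trade ratio, cancel rate,
# cross-venue activity correlation, position-flip frequency]
normal_accounts = rng.normal(loc=[5.0, 0.3, 0.1, 0.2], scale=0.1, size=(500, 4))
# A handful of accounts whose activity on one venue tightly tracks another --
# the sort of cross-market pattern a surveillance model might surface.
suspect_accounts = rng.normal(loc=[40.0, 0.9, 0.8, 0.9], scale=0.05, size=(5, 4))
features = np.vstack([normal_accounts, suspect_accounts])

model = IsolationForest(contamination=0.01, random_state=0).fit(features)
flags = model.predict(features)  # -1 = anomalous, 1 = normal
print("Accounts flagged for review:", np.where(flags == -1)[0])
```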

The Next 12 Months

The FCA intends to maintain its focus on proportionality over the next 12 months, balancing risk with beneficial innovation in the application of new and existing regulations. For that strategy to be successful, the FCA states that it needs to work from a “strong empirical basis”, developing a greater understanding of “how AI is being deployed in UK financial markets” so that it can promote innovation and respond quickly to specific emerging risks. The FCA emphasised the importance of collaboration with regulated firms in achieving that objective, and in creating “consensus on best practice and potential future regulatory work”. 

With such a strong emphasis on fincrime, it’s clear that the FCA is actively exploring the impact of AI in compliance contexts – not least screening and monitoring. That being the case, it’s more important than ever for firms to understand the potential of AI-supported compliance and how the technology can be deployed effectively as part of internal screening solutions. 

Harness AI Safely with Labyrinth Screening

Ripjar’s Labyrinth Screening platform is designed to help translate AI potential into meaningful compliance results quickly and easily. Built on next-generation machine learning technology, Labyrinth is capable of screening thousands of global data sources in seconds, including sanctions lists, watchlists, PEP lists, and adverse media. 

Labyrinth is enhanced by cutting-edge AI innovation, including AI Risk Profiles, which extract and review only the most relevant customer risk data, and AI Summaries, which harness the power of generative AI (GenAI) to build out clear, concise summaries of that data and dramatically reduce risk assessment times. The new Compliance Copilot takes this further by using GenAI to support analysts with handling alerts and assessments, providing fast, unbiased decisions and recommendations.


Discover how Labyrinth Screening uses AI to support your AML compliance


Ask us for a demonstration that will transform your data intelligence strategy.