Machine learning is increasingly being used in UK financial services, survey indicates
Machine learning is most commonly used in anti-money laundering and fraud detection as well as in customer-facing applications.

Machine learning is increasingly being used in UK financial services, according to the findings of a joint survey conducted by the Bank of England (BoE) and the Financial Conduct Authority (FCA).
The survey, conducted in 2019, was sent to almost 300 firms, including banks, credit brokers, e-money institutions, financial market infrastructure firms, investment managers, insurers, non-bank lenders and principal trading firms, with a total of 106 responses received.
The survey asked about the nature of deployment of machine learning, the business areas where it is used and the maturity of applications. It also gathered information on the technical characteristics of specific machine learning use cases. These included how the models were tested and validated, the safeguards built into the software, the types of data and methods used, as well as considerations around benefits, risks, complexity and governance.
Two-thirds of respondents report that they already use machine learning (ML) in some form. The median firm uses live ML applications in two business areas, and this is expected to more than double within the next three years.
The survey also shows that, in many cases, ML applications have moved beyond the initial development phase and are entering more advanced stages of deployment. One third of ML applications are used for a considerable share of activities in a specific business area. Deployment is most advanced in the banking and insurance sectors.
Most often, machine learning is used in anti-money laundering (AML) and fraud detection as well as in customer-facing applications, such as customer services and marketing. Some firms also use ML in areas like credit risk management, trade pricing and execution, as well as general insurance pricing and underwriting.
Regulation is not seen as a barrier, but some firms note the need for additional guidance on how to interpret current regulation. The most serious reported constraints are internal to firms, such as legacy IT systems and data limitations.
Firms said that ML does not necessarily create new risks, but could amplify existing ones. Such risks, for instance an ML application not working as intended, may materialise if model validation and governance frameworks do not keep pace with technological developments.
Respondents use a range of safeguards to manage the risks associated with machine learning. The most common safeguards are alert systems and so-called ‘human-in-the-loop’ mechanisms. These can be useful for flagging when a model does not perform as intended.
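For illustration only, the sketch below shows one way an alert threshold and a human-in-the-loop check might be combined around a model score; the thresholds, function name and routing labels are hypothetical assumptions, not drawn from the survey.

```python
# Illustrative sketch only: the survey does not prescribe an implementation.
# The thresholds and routing labels below are assumptions for demonstration.

def route_prediction(score: float,
                     alert_threshold: float = 0.9,
                     review_threshold: float = 0.6) -> str:
    """Decide how to handle a single model score.

    High scores trigger an automated alert; ambiguous scores are queued for a
    human reviewer (the 'human-in-the-loop'); low scores pass through.
    """
    if score >= alert_threshold:
        return "raise_alert"    # automated alert, e.g. block and escalate
    if score >= review_threshold:
        return "human_review"   # send the case to an analyst for a decision
    return "pass"               # no intervention needed


# Example: a fraud-detection model returns suspicion scores between 0 and 1.
for score in (0.95, 0.72, 0.10):
    print(score, "->", route_prediction(score))
```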
Firms validate ML applications before and after deployment. The most common validation methods are outcome-focused monitoring and testing against benchmarks. However, many firms stress that ML validation frameworks still need to evolve in line with the nature, scale and complexity of ML applications.
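As a rough illustration of outcome-focused monitoring against a benchmark, the sketch below compares a live model's error with a benchmark error and flags when revalidation might be warranted; the metric, tolerance and figures are hypothetical assumptions, not taken from the survey.

```python
# Illustrative sketch only: firms' actual validation frameworks vary, and the
# metric, tolerance and numbers here are assumptions for demonstration.

from statistics import mean

def within_benchmark(predictions, outcomes, benchmark_error, tolerance=0.05):
    """Compare the live model's mean absolute error against a benchmark error.

    Returns True if live performance stays within `tolerance` of the benchmark,
    False if it has degraded enough to warrant revalidation.
    """
    live_error = mean(abs(p - o) for p, o in zip(predictions, outcomes))
    return live_error <= benchmark_error + tolerance


# Example: a periodic check of model outputs against observed outcomes.
preds = [0.30, 0.55, 0.80, 0.20]
actuals = [0.25, 0.60, 0.90, 0.15]
print(within_benchmark(preds, actuals, benchmark_error=0.08))  # True
```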
Firms mostly design and develop ML applications in-house. However, they sometimes rely on third-party providers for the underlying platforms and infrastructure, such as cloud computing.
The majority of users apply their existing model risk management frameworks to ML applications, but many highlight that these frameworks may have to evolve in line with the increasing maturity and sophistication of ML techniques.
To spur further conversation around machine learning innovation, the BoE and the FCA plan to set up a public-private group to explore some of the questions and technical areas covered in the survey report.