Bank of England and UK Financial Conduct Authority Findings on Third Survey of Artificial Intelligence and Machine Learning in UK Financial Services

A&O Shearman

The Bank of England published the findings of its third joint survey with the U.K. Financial Conduct Authority on the use of artificial intelligence (AI) and machine learning in financial services. The survey builds on existing work to further the BoE's and FCA's understanding of AI in financial services, in particular by providing ongoing insight into and analysis of AI use by BoE- and/or FCA-regulated firms.

Points of interest include:

(i) Use and adoption: 75% of firms are already using AI, with a further 10% planning to use AI over the next three years.

(ii) Third-party exposure: a third of all AI use cases are third-party implementations.

(iii) Automated decision-making: 55% of all AI use cases involve some degree of automated decision-making, with 24% of those being semi-autonomous.

(iv) Understanding of AI systems: 46% of respondent firms reported having only "partial understanding" of the AI technologies they use, versus 34% that reported "complete understanding".

(v) Benefits and risks of AI: the highest perceived current benefits are in data and analytical insights, anti-money laundering and combating fraud, and cybersecurity. The areas with the largest expected increase in benefits over the next three years are operational efficiency, productivity, and cost base. These findings are broadly in line with those of the 2022 survey. Of the top five perceived current risks, four relate to data: data privacy and protection, data quality, data security, and data bias and representativeness. The risks expected to increase the most over the next three years are third-party dependencies, model complexity, and embedded or "hidden" models. Cybersecurity is rated as the highest perceived systemic risk both currently and in three years' time, and the largest increase in systemic risk over that period is expected to come from critical third-party dependencies.

(vi) Constraints: the largest perceived regulatory constraint on the use of AI is data protection and privacy, followed by resilience, cybersecurity, third-party rules, and the FCA's Consumer Duty.

(vii) Governance and accountability: 84% of firms reported having an accountable person for their AI framework. Firms use a combination of governance frameworks, controls, and/or processes specific to AI use cases.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© A&O Shearman
