AI Macro · 7 min read

Bank of England Launches AI Roundtables as UK Plots Light-Touch Regulatory Path

Central bank convenes financial firms to identify barriers to AI adoption, signalling preference for principles-based oversight over prescriptive rules.

The Bank of England held roundtable discussions with financial services firms in late 2025 to identify barriers to artificial intelligence adoption, publishing findings in February 2026 that reveal broad industry support for the UK’s principles-based regulatory approach.

The Bank met with representatives of regulated firms to discuss responsible adoption of AI and machine learning and to better understand the constraints facing the sector. Each roundtable covered a different Prudential Regulation Authority-regulated sector: challenger banks and UK-focused larger banks; global systemically important banks; and insurers. Observers from the Financial Conduct Authority and HM Treasury were also present, according to Bank of England minutes.

AI Adoption in UK Finance
  • Firms currently using AI: 75%
  • Planning adoption within three years: 10%
  • Third-party implementations: 33%

The initiative comes as 75% of firms report currently using AI, with a further 10% planning to do so within the next three years, a sharp rise in adoption from the 2022 survey (58% and 14% respectively), according to the latest BoE-FCA survey reported by techUK.

Industry Backs Existing Framework

Across all three roundtables, participants from regulated firms expressed support for the PRA’s regulatory framework as it related to AI, noting that the PRA’s principles-based, outcomes-based policy and supervisory statements gave firms sufficient space to innovate within clear regulatory guardrails. Supervisory Statement 1/23 on Model Risk Management in particular was noted by several participants as pragmatic in enabling responsible AI adoption, according to Bank of England documents.

Most participants did not yet see a need for detailed AI-specific regulatory guidance or rules, nor a case for a Bank or PRA AI sandbox at this time. Firms indicated that the FCA’s Supercharged Sandbox and AI Live Testing initiatives were providing sufficient testing environments.

Context

The UK has deliberately eschewed the EU’s prescriptive AI Act in favour of empowering existing sectoral regulators to interpret cross-cutting principles. The approach centres on five core tenets:
  • safety, security and robustness
  • appropriate transparency and explainability
  • fairness
  • accountability and governance
  • contestability and redress
Critics argue this creates uncertainty; proponents claim it preserves flexibility for fast-moving technology.

Third-Party Dependencies Emerge as Key Friction Point

Despite the positive reception of existing frameworks, firms identified specific operational barriers. Second-line risk functions continue to approach AI cautiously, which can delay deployment pipelines. More significantly, procurement and contract negotiations with third-party AI providers were slowed by vendors’ inconsistent familiarity with regulated firms’ compliance requirements, imposing an opportunity cost while deals stall, according to Bank of England minutes.

Several participants suggested the Bank could explore convening financial and technology firms to agree minimum standards for third-party AI providers to the regulated financial sector. The suggestion reflects growing concern about vendor lock-in: as AI models become embedded in agentic systems throughout firms’ core business processes, switching between AI providers may become more difficult.

Participants noted that data protection laws, along with emerging data sovereignty regimes in other jurisdictions, pose a challenge to deploying and scaling AI use cases. The largest perceived regulatory constraint on the use of AI is data protection and privacy, followed by resilience, cybersecurity, third-party rules, and the FCA’s Consumer Duty, according to A&O Shearman analysis of the survey data.

Key Barriers Identified
  • Cautious second-line risk functions delaying deployment pipelines
  • Third-party vendor negotiations slowed by compliance uncertainty
  • Data protection laws constraining cross-border AI scaling
  • Vendor lock-in risks as AI embeds in core business processes

UK Diverges from EU’s Prescriptive Model

The roundtable findings reinforce the UK’s commitment to a regulatory model that contrasts sharply with Brussels. Where the EU AI Act is a landmark regulation reshaping AI governance through mandated compliance, transparency, and ethical practices, the UK relies on existing regulators adapting cross-cutting principles to sector-specific risks, according to UK Finance.

The EU AI Act is a binding regulation with specific legal obligations, risk classifications, compliance timelines, and penalties of up to €35 million or 7% of global turnover. The UK framework is principles-based and relies on existing regulators to issue guidance and enforce standards within their sectors, explains LittleData.
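The interaction of the two penalty thresholds is simple arithmetic: the applicable ceiling is whichever figure is greater. A minimal sketch (the function name is illustrative, and the figures are those quoted above, not legal advice):

```python
# Illustrative only: the EU AI Act caps fines for the most serious
# breaches at the greater of EUR 35 million or 7% of global annual
# turnover (figures as quoted in this article).
def max_eu_ai_act_fine(global_turnover_eur: float) -> float:
    """Return the upper bound on a fine for the most serious infringements."""
    return max(35_000_000, 0.07 * global_turnover_eur)

# A firm with EUR 1 billion global turnover: 7% (EUR 70m) exceeds the
# EUR 35m floor, so the turnover-based ceiling applies.
print(max_eu_ai_act_fine(1_000_000_000))
```

For smaller firms, where 7% of turnover falls below EUR 35 million, the fixed amount becomes the binding ceiling.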

This divergence carries strategic implications. Further work is underway to assess whether AI-specific regulatory guidance would be beneficial, the BoE stated in an October 2025 publication on its approach to innovation, according to Regulation Tomorrow. Meanwhile, a House of Commons Treasury Committee report published in January 2026 recommended that the Financial Conduct Authority give the financial services sector greater clarity on how existing rules apply to AI, and that by the end of 2026 it publish comprehensive, practical guidance on the application of existing consumer protection rules to firms’ use of AI, and on the accountability and level of assurance expected from senior managers under the Senior Managers and Certification Regime for harm caused through the use of AI, according to Parliament.

Central Banks Globally Grapple with AI Oversight

The BoE’s engagement reflects a broader trend among central banks navigating AI’s implications for financial stability. The FSB’s 2024 AI report identified several vulnerabilities, including third-party dependencies, market correlations, cyber risks, and challenges in model risk and governance, which may have implications for financial stability, according to a Financial Stability Board report on monitoring AI adoption.

The BoE announced in an April 2025 financial stability report that an Artificial Intelligence Consortium was being established to provide a platform for public-private engagement, gathering stakeholder input on the capabilities, development, deployment and use of AI in UK financial services, according to Bank of England documents. The AI Consortium, jointly launched with the FCA in May 2025, provides ongoing industry dialogue beyond the one-off roundtables.

Internationally, approaches vary. While the benefits of AI are undeniable, its integration into central banking presents challenges, particularly for monetary policy and financial stability, a Bank for International Settlements official noted in October 2025 remarks, according to BIS. AI adoption remains limited in most emerging market and developing economies, many of which lack the capacity to monitor AI risks and face skills gaps and weak data and IT infrastructure, making it harder to address challenges such as cybersecurity, data privacy and sovereignty, reliance on third-party service providers, and model governance, according to CEPR analysis.

What to Watch

Three developments will determine whether the UK’s light-touch approach succeeds or requires course correction. First, whether the BoE acts on industry requests to convene financial and technology firms to establish minimum standards for third-party AI providers—a move that would represent soft harmonisation without formal rulemaking. Second, whether the FCA delivers the comprehensive guidance on consumer protection and senior manager accountability demanded by Parliament by year-end 2026. Third, whether HM Treasury finally designates major AI and cloud providers as critical third parties under the regime established in 2023—a power it has yet to exercise despite growing concentration risk.

The contrast with the EU will sharpen as the AI Act’s high-risk system requirements take full effect. UK financial institutions operating cross-border face dual compliance burdens: principles-based interpretation at home, prescriptive conformity assessments abroad. This regulatory arbitrage could either attract AI innovation to London or fragment operational models. For now, industry satisfaction with the status quo gives UK authorities breathing room—but only if they can resolve third-party governance gaps before they metastasise into systemic dependencies.