Security & Compliance

FCA and AI: What Compliance Officers Need to Know


The Financial Conduct Authority is not waiting for dedicated AI legislation before setting expectations. Through a combination of discussion papers, feedback statements, and the application of existing regulatory frameworks, the FCA has made clear that firms using AI must demonstrate robust governance, transparency, and accountability. For compliance officers, the challenge is translating these evolving expectations into practical operational frameworks - particularly as AI adoption accelerates across UK financial services.

This article provides a practical overview of the FCA's current position on AI, the key regulatory frameworks that apply, and a step-by-step compliance framework that compliance officers can implement. Whether your firm is exploring AI for the first time or already deploying models in production, understanding the regulatory landscape is essential.

The FCA's Evolving Stance on AI

The FCA first set out its thinking on AI in Discussion Paper DP5/22, "Artificial Intelligence and Machine Learning," published in October 2022. This paper explored the benefits and risks of AI in financial services and invited industry feedback on a range of governance questions. The subsequent Feedback Statement FS2/23, published in 2023, confirmed the FCA's direction of travel: it intends to regulate AI primarily through existing frameworks rather than creating entirely new rules.

This approach has significant implications. It means the FCA expects firms to apply the Senior Managers and Certification Regime (SM&CR), Consumer Duty, operational resilience requirements, and existing systems and controls rules to their AI activities. There is no "AI exemption" - if you are using AI to serve customers, manage risk, or make business decisions, the same regulatory standards apply as for any other operational process.

The FCA has also signalled that it views AI risk through the lens of outcomes. It is less concerned with the specific technology and more concerned with whether the outcomes for consumers and markets are fair, transparent, and well-governed. This outcome-focused approach aligns directly with the Consumer Duty framework.

Consumer Duty and AI

The Consumer Duty, which came into full force in July 2023 for open products and services (and July 2024 for closed products), is arguably the most significant regulatory framework affecting AI use in financial services. It requires firms to deliver good outcomes for retail customers across four areas, each of which has direct implications for AI deployment.

Products and Services

Firms must ensure that products and services are designed to meet the needs of the target market and do not cause foreseeable harm. When AI is used in product design - for example, algorithmic pricing of insurance products or AI-driven portfolio construction - the firm must demonstrate that the AI does not create products that are systematically disadvantageous to certain customer groups.

This requires ongoing monitoring of AI-driven product outcomes, not just initial testing. A pricing model that appears fair at launch may develop discriminatory patterns as it learns from new data. Firms need continuous monitoring frameworks that detect outcome drift and trigger human review when thresholds are breached.
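Continuous monitoring of this kind can start as simply as comparing current outcome rates against the rates observed at validation. The sketch below is illustrative only; the function name, metric, and five-percentage-point tolerance are our assumptions, not an FCA-prescribed test.

```python
# Illustrative outcome-drift check for an AI-driven product process.
# The tolerance value is a hypothetical threshold a firm might set.

def check_outcome_drift(baseline_rate: float, current_rate: float,
                        tolerance: float = 0.05) -> bool:
    """Return True when the current adverse-outcome rate for a customer
    segment has drifted beyond tolerance from the validated baseline,
    which should trigger human review under the firm's framework."""
    return abs(current_rate - baseline_rate) > tolerance

# Example: a segment's decline rate rises from 12% at launch to 19%
print(check_outcome_drift(baseline_rate=0.12, current_rate=0.19))  # True
```

In practice a firm would run checks like this per segment and per metric, feeding breaches into a case-management queue for human review.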

Price and Value

The price and value outcome requires firms to ensure that the price a customer pays is reasonable relative to the benefits they receive. AI-driven dynamic pricing introduces particular risks here. If an AI model adjusts prices based on customer characteristics that correlate with protected characteristics - even without directly using those characteristics as inputs - the result could be pricing that the FCA views as delivering poor value to vulnerable customer segments.

Compliance officers should ensure that AI pricing models are subject to regular fair value assessments that specifically analyse outcomes across different customer demographics, including vulnerable customers.
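One hedged way to operationalise such an assessment is to compare price-to-benefit ratios across segments and flag outliers. Everything here, from the segment labels to the 20% spread threshold, is illustrative rather than a defined FCA methodology.

```python
# Illustrative fair value screen across customer segments.
# Segment names and the max_spread threshold are hypothetical.

def flag_poor_value_segments(segment_ratios, max_spread=0.2):
    """Flag segments whose price/benefit ratio exceeds the best
    (lowest) segment's ratio by more than max_spread, relatively."""
    best = min(segment_ratios.values())
    return [seg for seg, ratio in segment_ratios.items()
            if (ratio - best) / best > max_spread]

ratios = {"standard": 1.00, "new_customers": 1.10, "vulnerable": 1.35}
print(flag_poor_value_segments(ratios))  # ['vulnerable']
```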

Consumer Understanding

Firms must ensure that communications equip customers to make effective decisions. When AI generates customer-facing content - whether that is marketing copy, product explanations, or advice summaries - the firm is responsible for the accuracy, clarity, and appropriateness of that content. AI hallucinations in customer communications are not a technical curiosity; they are a potential Consumer Duty breach.

Any AI-generated content that reaches customers should pass through validation processes that are at least as rigorous as those applied to human-authored content. In many cases, they should be more rigorous, given the known propensity of large language models to generate plausible but incorrect information.

Consumer Support

The consumer support outcome requires that customers can access support that meets their needs. AI chatbots and automated support tools are increasingly common, but firms must ensure that these tools do not create barriers to effective support. Customers must be able to escalate to a human when the AI cannot resolve their issue. The AI must not provide incorrect guidance that leads to customer detriment. And vulnerable customers must be identified and routed appropriately - something that AI can actually help with if properly implemented.
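The escalation requirement can be expressed as a simple routing rule: any vulnerability indicator or explicit request for a human overrides the bot, and low model confidence defaults to a human. The function and field names below are our own sketch, not a product API.

```python
# Illustrative support-routing logic for an AI chatbot.
# Field names and the confidence threshold are hypothetical.

def route_support_query(confidence: float, customer_requested_human: bool,
                        vulnerability_flag: bool,
                        min_confidence: float = 0.8) -> str:
    if customer_requested_human or vulnerability_flag:
        return "human_agent"   # never trap these customers in the bot
    if confidence < min_confidence:
        return "human_agent"   # AI unsure: avoid incorrect guidance
    return "ai_chatbot"

print(route_support_query(0.95, False, True))   # human_agent
print(route_support_query(0.95, False, False))  # ai_chatbot
```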

Operational Resilience and Third-Party Risk

The FCA's operational resilience framework (PS21/3) requires firms to identify their important business services and ensure they can continue to operate within defined impact tolerances during severe but plausible disruptions. AI introduces specific operational resilience considerations that compliance officers must address.

Third-party dependency risk: If your firm relies on a public AI API for a business-critical function - say, real-time fraud detection or client onboarding - that API is a critical third-party dependency. The FCA expects firms to understand and manage the risks associated with such dependencies. What happens if the API goes down? What if the provider changes their terms or pricing? What if they suffer a data breach?

Concentration risk: The FCA has specifically flagged concentration risk in cloud and AI services. If multiple firms in a sector rely on the same small number of AI providers, a disruption at one provider could have systemic implications. Firms should consider diversification strategies and, where possible, reduce dependency on single providers.

Business continuity: Firms must have documented business continuity plans that cover AI service disruption. This means identifying which business processes depend on AI, defining manual fallback procedures for each, testing those fallback procedures regularly, and ensuring staff are trained to operate without AI when necessary.
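A minimal continuity pattern is to wrap the AI call with retries and a documented manual fallback. This sketch assumes a hypothetical service callable and a simple queue standing in for the manual procedure.

```python
# Illustrative continuity wrapper: retry the AI service, then route the
# work to a manual-processing queue if the service stays unavailable.

def classify_with_fallback(call_ai_service, payload, manual_queue,
                           max_attempts=2):
    """Attempt the AI call up to max_attempts times; on repeated
    failure, enqueue the work for the documented manual procedure."""
    for _ in range(max_attempts):
        try:
            return {"route": "ai", "result": call_ai_service(payload)}
        except Exception:
            continue  # transient failure: retry, then fall back
    manual_queue.append(payload)
    return {"route": "manual", "result": None}

# Simulated outage: the service always raises, so work routes manually
def broken_service(payload):
    raise TimeoutError("AI service unavailable")

queue = []
print(classify_with_fallback(broken_service, {"case": 1}, queue)["route"])  # manual
```

Exercising this fallback path regularly is as important as having it; a fallback that is never tested is a fallback that will fail when needed.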

Deploying AI within your own infrastructure - rather than depending on external APIs - significantly reduces third-party operational resilience risk. When AI runs in your private cloud environment, you control the availability, scaling, and failover mechanisms directly.

Senior Management Accountability Under SM&CR

The Senior Managers and Certification Regime makes individual senior managers personally accountable for the areas of the firm's business within their responsibility. When AI systems make or influence regulated decisions, the question of accountability becomes critical.

The FCA has indicated that AI does not change the fundamental accountability framework - a senior manager cannot delegate accountability to an algorithm. If an AI system produces an outcome that causes customer harm or breaches regulations, the relevant senior manager is accountable for the governance, oversight, and controls that should have prevented that outcome.

Practically, this means senior managers with responsibility for areas using AI need to understand, at a sufficient level, how the AI systems work and what risks they present. They do not need to understand the mathematics of transformer architectures, but they do need to understand what data the AI uses, what decisions it influences, what controls are in place, and what could go wrong. They also need documented evidence that they have engaged with these questions - the "reasonable steps" defence under SM&CR requires proactive, documented governance.

Audit Trail Requirements

The FCA expects firms to be able to explain and justify decisions that affect customers. When those decisions are influenced by AI, the firm needs an audit trail that connects the AI's input, processing, and output to the final decision. This is not just a compliance requirement - it is essential for responding to customer complaints, regulatory enquiries, and internal reviews.

A robust AI audit trail should capture the following for every material AI-assisted decision:

  • Inputs: What data was provided to the model, including the source and timestamp of that data.
  • Model version: Which specific version of the model was used, including any fine-tuning or configuration parameters.
  • Outputs: The raw output from the model before any post-processing or human modification.
  • Human review: Evidence of human review, including who reviewed the output, what modifications were made, and the rationale for the final decision.
  • Decision record: The final decision or action taken, with explicit linkage to the AI output and human review.
  • Outcome tracking: The subsequent outcome for the customer, enabling retrospective assessment of whether the AI-assisted decision produced a good outcome.
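
The bullet points above map naturally onto a structured record. The schema below is our own illustrative sketch, not an FCA-mandated format; field names and example values are hypothetical.

```python
# Illustrative audit record mirroring the fields listed above.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIDecisionAuditRecord:
    inputs: dict                   # data given to the model, with source/timestamp
    model_version: str             # exact model build, incl. config reference
    raw_output: str                # model output before any post-processing
    reviewer: str                  # who performed the human review
    modifications: str             # what the reviewer changed, and why
    final_decision: str            # action taken, linked to output and review
    outcome: Optional[str] = None  # filled in later for retrospective review
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = AIDecisionAuditRecord(
    inputs={"source": "CRM", "timestamp": "2025-01-15T10:02:00Z"},
    model_version="credit-scorer-v2.3",
    raw_output="recommend decline",
    reviewer="j.smith",
    modifications="upheld recommendation after manual file check",
    final_decision="application declined",
)
print(record.final_decision)  # application declined
```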

This level of audit capability is significantly easier to achieve when AI runs within your own infrastructure, where you control the logging, retention, and access mechanisms. Public AI APIs typically provide limited logging capabilities that may not meet FCA expectations.

Model Risk Management

While the FCA has not published specific model risk management rules for AI (unlike the PRA's SS1/23 for internal models at larger firms), it expects all firms to apply principles of sound model governance. For AI models used in regulated activities, this means implementing a governance framework that covers the full model lifecycle.

Validation: Before any AI model is deployed in a regulated context, it should be validated independently of the development team. Validation should assess accuracy, bias, robustness (including adversarial testing), and alignment with the intended business purpose.

Ongoing monitoring: Models can degrade over time as the data they encounter in production diverges from their training data. Firms need monitoring frameworks that track model performance, detect drift, and trigger re-validation when performance falls below defined thresholds.
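One widely used drift measure is the Population Stability Index (PSI), which quantifies how far the distribution of a score or feature in production has shifted from the distribution at validation. The bucket values and the 0.25 threshold below reflect common industry practice, not an FCA requirement.

```python
# Illustrative drift check using the Population Stability Index (PSI).
# The 0.25 threshold is common industry practice, not an FCA rule.
import math

def psi(expected, actual):
    """PSI across matching distribution buckets (each list sums to ~1)."""
    return sum((a - e) * math.log(a / e)
               for e, a in zip(expected, actual) if e > 0 and a > 0)

validated = [0.25, 0.25, 0.25, 0.25]   # score-bucket shares at validation
production = [0.05, 0.15, 0.30, 0.50]  # shares observed in production

score = psi(validated, production)
if score > 0.25:
    print(f"PSI {score:.2f}: significant drift - trigger re-validation")
```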

Change management: Any change to a model - including fine-tuning, prompt engineering changes, or updates to the underlying foundation model - should go through a formal change management process with appropriate testing and approval.

Inventory: Firms should maintain an inventory of all AI models in use, including their purpose, the data they process, the decisions they influence, and their current governance status. This AI model inventory is analogous to the model inventory that larger firms already maintain for traditional quantitative models.
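An inventory entry need not be complicated; a structured record per model, filterable by governance status, is enough to start. The fields below simply restate what this section says each entry should capture; the model names and SMF assignments are illustrative.

```python
# Illustrative AI model inventory entry and a status filter.
from dataclasses import dataclass

@dataclass
class AIModelInventoryEntry:
    name: str
    purpose: str              # business problem the model addresses
    data_processed: str       # categories of data it consumes
    decisions_influenced: str
    accountable_sm: str       # senior manager accountable under SM&CR
    governance_status: str    # e.g. "validated", "pending re-validation"

inventory = [
    AIModelInventoryEntry("fraud-screen-v1",
                          "real-time transaction fraud screening",
                          "transaction and device metadata",
                          "payment hold / release",
                          "SMF24 - Chief Operations",
                          "validated"),
    AIModelInventoryEntry("kyc-summariser-v2",
                          "client onboarding document summarisation",
                          "identity documents",
                          "onboarding approval recommendation",
                          "SMF16 - Compliance Oversight",
                          "pending re-validation"),
]

overdue = [m.name for m in inventory
           if m.governance_status == "pending re-validation"]
print(overdue)  # ['kyc-summariser-v2']
```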

A Practical Compliance Framework

Based on the FCA's expectations and our experience working with regulated firms, here is a practical framework that compliance officers can use to govern AI adoption.

  1. Establish an AI governance committee: Create a cross-functional group including compliance, risk, technology, and business stakeholders to oversee AI strategy and deployment. This committee should report to the board and have clear terms of reference.
  2. Define an AI use policy: Document which AI use cases are permitted, which require additional approval, and which are prohibited. This policy should cover both firm-deployed AI and employee use of external AI tools (shadow AI).
  3. Map AI to SM&CR responsibilities: Ensure every AI system that influences regulated activities has a named senior manager accountable for its governance. Update statements of responsibilities where necessary.
  4. Implement a pre-deployment assessment process: Before any AI model is deployed in a regulated context, require a formal assessment covering regulatory impact, Consumer Duty implications, data protection (DPIA), operational resilience, and model validation. Our guide to GDPR compliance for AI covers the data protection assessment in detail.
  5. Build audit trail infrastructure: Implement logging and monitoring that captures the inputs, outputs, and decision context for every material AI-assisted decision. Ensure logs are tamper-proof and retained for the required period.
  6. Establish ongoing monitoring: Define KPIs and thresholds for AI model performance, including accuracy, bias metrics, and customer outcome measures. Implement automated alerting when thresholds are breached.
  7. Create a consumer outcome testing framework: Regularly assess whether AI-driven processes are delivering good outcomes across all customer segments, with particular attention to vulnerable customers. This should feed directly into your Consumer Duty annual review.
  8. Address third-party AI risk: If using third-party AI services, conduct thorough due diligence covering data security, operational resilience, concentration risk, and exit planning. Consider whether private VPC deployment would reduce third-party risk.
  9. Train staff at all levels: Ensure front-line staff understand the AI tools they use and their limitations. Ensure senior managers understand their accountability obligations. Ensure compliance staff can effectively oversee AI governance.
  10. Document everything: Maintain comprehensive records of AI governance decisions, risk assessments, monitoring outcomes, and remediation actions. When the FCA asks "how do you govern AI?" the answer should be a structured body of evidence, not a verbal explanation.
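
Step 5's tamper-proof requirement can be illustrated with a hash chain: each log entry stores a hash over its content plus the previous entry's hash, so any retrospective edit breaks verification. A production system would rely on managed logging services rather than hand-rolled code; this sketch shows only the underlying idea, with hypothetical payloads.

```python
# Illustrative tamper-evident audit log using a SHA-256 hash chain.
import hashlib
import json

def append_entry(chain, payload):
    """Append payload, chaining its hash to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    body = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
    chain.append({"payload": payload, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_chain(chain):
    """Recompute every hash; any edited entry breaks the chain."""
    prev_hash = "genesis"
    for entry in chain:
        body = json.dumps({"payload": entry["payload"], "prev": prev_hash},
                          sort_keys=True)
        if (entry["prev"] != prev_hash or
                entry["hash"] != hashlib.sha256(body.encode()).hexdigest()):
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"decision": "application approved", "model": "v2.3"})
append_entry(log, {"decision": "application declined", "model": "v2.3"})
print(verify_chain(log))                     # True
log[0]["payload"]["decision"] = "declined"   # tamper with history
print(verify_chain(log))                     # False
```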

The Case for Private Infrastructure

Many of the compliance challenges outlined above are significantly easier to address when AI models run within your own private cloud infrastructure rather than through public APIs.

Audit trails: When AI runs in your environment, you control the logging at every level - network, application, and model. You can capture exactly what data entered the model and what came out, with tamper-proof logging in your own CloudTrail and CloudWatch instances.

Operational resilience: Private deployment eliminates dependency on third-party API availability. You control the scaling, failover, and disaster recovery mechanisms. There is no concentration risk from sharing infrastructure with other firms.

Data security: Client data never leaves your controlled environment. There are no third-party sub-processors to assess, no data processing agreements to negotiate, and no risk of data being retained by an external provider.

SM&CR accountability: When senior managers can point to infrastructure that the firm controls, with documented security configurations and access controls, it is much easier to demonstrate the "reasonable steps" that SM&CR requires.

Our Secure AI Platform is designed specifically for these requirements, deploying leading AI models within your private AWS environment with full compliance infrastructure built in from day one.

Looking Ahead: 2026 and Beyond

The regulatory landscape for AI in financial services is expected to evolve significantly over the next twelve to twenty-four months. Several developments are worth monitoring.

The FCA is expected to publish more detailed guidance on AI governance, potentially including specific expectations for model risk management that go beyond current principles-based guidance. The FCA has indicated it is considering thematic reviews of AI use in specific areas, including consumer credit decisioning and wealth management. Firms that have robust governance in place will be well-positioned; those that do not may face supervisory action.

The EU AI Act, which began phased implementation in 2025, will influence UK thinking even post-Brexit. While the UK is not directly subject to the Act, firms serving EU customers will need to comply, and the Act's risk-based classification framework may inform future UK regulation. Financial services AI applications are likely to fall into the "high-risk" category under the Act, triggering requirements for risk management, data governance, transparency, and human oversight.

The Bank of England and PRA are also developing their supervisory approach to AI, particularly for larger firms. The PRA's focus on model risk management (SS1/23) is likely to expand to cover AI models more explicitly, and dual-regulated firms should prepare for aligned but potentially distinct requirements from both the FCA and PRA.

Additionally, the Digital Regulation Cooperation Forum (DRCF), which brings together the FCA, ICO, CMA, and Ofcom, is developing coordinated approaches to AI oversight. This means firms should expect consistent regulatory themes around transparency, accountability, and consumer protection across multiple regulators.

"The firms that invest in AI governance now are not just managing regulatory risk -they are building the operational foundations that will allow them to adopt AI confidently and at pace as the technology evolves. Governance is not a brake on innovation; it is what makes sustainable innovation possible."

Getting Started

If you are a compliance officer grappling with AI governance, the most important step is to start with visibility. You cannot govern what you cannot see. Conduct an audit of all AI usage across your firm - including unofficial use of public AI tools by employees - and map each use case against the framework outlined above.

From there, prioritise the highest-risk use cases (those that directly affect customer outcomes or involve sensitive data) and build out your governance framework incrementally. Perfection is not required from day one, but evidence of a structured, documented approach to AI governance is what the FCA will look for.

We work with FCA-regulated firms across wealth management, insurance, and financial advice to design and implement AI governance frameworks and secure deployment infrastructure. Whether you need help with strategy, architecture, or implementation, our team understands the intersection of AI technology and financial regulation. Get in touch to discuss your firm's needs, or browse our full range of services to see how we can help.

Ready to transform your business with AI?

Book a free strategy session to discuss how Evolve AI can help your organisation harness AI safely and compliantly.
