Boards are increasingly being called upon to take ownership of technology risk oversight as a strategic imperative, reinforced by the updated UK Corporate Governance Code and the new Cyber Governance Code of Practice. In 2026, staying ahead of technology risks and regulatory shifts isn’t optional – it’s essential. Are you clear on where to focus to keep your organisation in control? 

 

Evolving cyber security threats | Technology resilience and incident response | Cloud governance and security | Generative and autonomous AI | Evolving digital regulation and compliance | Critical third-party supply chain risk | Transformation programmes | Data governance | Zero trust security | Deepfakes and disinformation threats

Evolving cyber security threats

Cyber security remains a perennial focus, once again ranked as the top risk organisations face in the Institute of Internal Auditors’ (IIA) 2026 ‘Risk in Focus’ study.  

The risk came to life throughout 2025, with several high-profile cyber-attacks causing major disruption across UK businesses. These examples have shown the severe impact a successful attack can have: in some cases, day-to-day operations took several months to fully recover, while customers of impacted businesses may go elsewhere if they cannot place orders.  

Ransomware continues to be a key cyber threat that many organisations struggle to respond to, with traditional recovery techniques rendered ineffective by the rapidly spreading encryption deployed by effective malware.  

The impacts of a ransomware attack are also felt throughout the supply chain, reflecting the highly integrated nature of modern operations. Attacks can propagate from one business to another or bring financial disruption to a business when a key customer is brought to a halt.  

Almost every aspect of business is being disrupted by AI, and cyber security is no exception. While the integration of AI into security tooling is accelerating threat assessment, the same capabilities are being used by threat actors. Many businesses are seeing not only an increasing pace and quantity of cyber-attacks, but greatly increased sophistication, as AI lowers the barrier to entry for more advanced techniques.  

The increasingly complex combination of on-premises solutions, cloud services, and software-as-a-service (SaaS) products makes controls over user identity difficult to manage. This often includes access by third parties, who are typically outside the direct influence of other mitigating controls, increasing the associated risk.  

In 2025, the IIA published its Topical Requirements for cyber security, defining the mandatory requirements that must be considered as part of the annual audit planning process and when scoping any cyber security assurance activities. These take effect from February 2026.

Internal audit and risk functions have a growing range of cyber security threats to address to protect data, avoid business disruption, and protect against ransomware. 

The key actions to consider for 2026 include:  

  • receiving regular assurance from internal audit on cyber transformation plans, with focus from risk functions to ensure benefits are delivered while managing the risks of tooling misalignment, rising costs, and stifling the agility of the wider business with prohibitive controls. The very public nature of recent cyber-attacks has led many organisations to accelerate their cyber transformation plans to enhance their defences
  • focussing on identity and access management (IAM) as a key area, with cases of SIM-swapping attacks undermining traditional two-factor authentication controls, and social engineering of IT helpdesks giving attackers highly privileged access. These are topics where internal audit and risk/control functions can add real value if the right approach is taken 
  • focussing on the assessment and assurance undertaken over the risk of attackers compromising organisations via third parties. Identifying those third parties with high levels of IT access, and gaining adequate assurance over their control environment, is no longer a luxury but a necessary step to manage this risk
  • where internal audit functions subscribe to the IIA Global Standards, demonstrating that they have fully considered the topical requirements for cyber security in defining their audit universe and 2026 audit plan. This will impact audit planning over several years, as the requirements cover a broad range of cyber risk areas, from governance and risk management to talent management and more typical technical controls over networks and endpoints.

In October 2025, the Bank of England, PRA and FCA jointly published 'Effective practices: Cyber response and recovery capabilities', outlining observed practices across systemic firms and market infrastructures. It outlined the need to define impact tolerances beyond outage duration (for example, transaction values/volumes and numbers of end-users); implement alternative solutions and immutable backups; ensure robust crisis communication plans; and prepare for tertiary site recovery. It also stresses managing third-party resilience to equivalent standards and promoting collective industry collaboration to enhance sector-wide cyber resilience. 

Regulators also expect firms to demonstrate board-level engagement and scenario testing for severe but plausible cyber events. The FCA has indicated that failure to meet these expectations could lead to enforcement action or supervisory intervention from 2026.

Technology resilience and incident response

Technology resilience and incident response plans are essential for maintaining business operations in today’s environment of constant disruption. Organisations must be equipped to handle a wide range of disruptions – from cyber-attacks and system failures to third-party outages and human error – and recover from these quickly with minimal impact.  

Resilience isn’t just about IT infrastructure; it requires coordinated planning across people, processes, data, facilities and systems. With increasing reliance on cloud platforms, interconnected applications, and external vendors, a single point of failure can trigger widespread disruption. A well-defined incident response capability ensures rapid detection, containment, and recovery, while also supporting clear ownership, effective communication, and post-incident learning.  

As regulatory expectations and customer demands grow, businesses must move beyond reactive approaches and embed resilience into their operating model. In this context, technology resilience becomes a strategic enabler, protecting reputation, ensuring compliance, and securing long-term operational stability. 

Over the past year, technology resilience and incident response have come under increased scrutiny following a series of high-impact disruptions. While cyber-attacks like those affecting Jaguar Land Rover and Marks & Spencer drew attention, equally significant were non-cyber incidents such as the Azure outage and the CrowdStrike update failure. These events exposed how tightly integrated systems can trigger cascading failures across industries.  

Common issues included ineffective communication, absence of tested and easily accessible response playbooks, and a lack of a clearly defined baseline of critical systems and data, many of which weren’t safeguarded by immutable backups (backups that can’t be altered or deleted by an attacker). As a result, numerous organisations impacted in 2025 faced slow containment and unclear accountability, significantly amplifying operational disruption and reputational damage.  

The growing complexity of cloud and AI-driven environments has made traditional resilience models inadequate: designed for static, centralised systems, they struggle to handle the dynamic, distributed, and highly interdependent nature of modern technology architectures. In response, businesses are now prioritising real-time monitoring, cross-functional coordination, and scenario-based testing. Regulators and boards are demanding more robust continuity planning that accounts for dependencies across systems, vendors, and people.

To effectively provide assurance over technology resilience and incident response arrangements, risk and internal audit functions must transition from traditional reactive reviews to a proactive assurance model. This requires anticipating disruptions, validating readiness, and embedding resilience across the enterprise. 

Key priorities should include: 

  • assessing the organisation’s end-to-end recovery capabilities against standards such as ISO 22301 (business continuity) or NIST SP 800-61 (incident response) and ensuring controls are aligned with them 
  • determining whether the business has a comprehensive baseline inventory of all critical systems and data and their associated resilience requirements, as well as whether immutable backup solutions (eg, using write-once-read-many (WORM) storage media or air-gapped backups) with regular integrity and restoration testing are in place 
  • assessing the plans and arrangements the organisation has in place to respond to a disruption, including the accessibility of these plans; the level of cross-functional coordination between IT, operations, and business teams; and how these are tested
  • validating the effectiveness of security information and event management (SIEM) and extended detection and response (XDR) tools for continuous monitoring
  • integrating resilience into enterprise risk frameworks and transformation programmes, while promoting preparedness across business units to foster ownership and proactive planning
  • adopting AI-driven analytics for real-time risk detection, implementing automated playbooks for rapid containment (while maintaining offline copies), and using digital twins to simulate disruptions and recovery scenarios. 
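
Parts of this testing lend themselves to automation. As an illustrative sketch (the functions, file names, and layout below are hypothetical, not a prescribed tool), backup integrity can be verified by recording a checksum manifest at backup time and re-hashing files during restoration tests:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 to avoid loading it all into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def record_manifest(backup_dir: Path) -> dict:
    """Capture a checksum manifest at backup time."""
    return {p.name: sha256_of(p)
            for p in sorted(backup_dir.iterdir()) if p.is_file()}

def verify_restore(restore_dir: Path, manifest: dict) -> list:
    """Re-hash restored files; report any that are missing or altered."""
    failures = []
    for name, expected in manifest.items():
        p = restore_dir / name
        if not p.is_file() or sha256_of(p) != expected:
            failures.append(name)
    return failures
```

A restoration test would restore from the immutable copy into a clean environment, run `verify_restore` against the manifest, and treat any failures as resilience findings.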

Technology resilience and incident response remain priorities for financial services firms in 2026 amid rising regulatory expectations and scrutiny of third-party and cyber risk, as set out in the Bank of England/PRA/FCA publication 'Effective practices: Cyber response and recovery capabilities'. 

Firms must continue embedding operational resilience within business-as-usual activities, such as keeping Important Business Services (IBS) within impact tolerances under FCA (PS21/3) and PRA (SS1/21), validated through severe but plausible scenario testing. Internal audit should review IBS mapping, resilience testing, vulnerability management, and incident response plans for completeness, covering playbooks, communication protocols, recovery capabilities, and alignment with BoE/FCA/PRA expectations. Furthermore, firms should ensure that mechanisms are in place to alert the regulatory authorities of any reportable incidents in line with FCA CP24/28 (FCA Handbook SYSC 15A 'Operational resilience' and PRA SS1/21 'Operational resilience: Impact tolerances for important business services'). 

Third-party resilience should be assessed against the UK critical third parties regime (CP26/23, 'Operational resilience: Critical third parties to the UK financial sector') and, where applicable, EU DORA. The expectation is for audit teams to verify detection, classification, regulatory reporting, governance, board oversight, and the embedding of risk culture and training.

Cloud governance and security

Research has shown that organisations’ spending on cloud services grew beyond expectations in 2025. Spending on cloud infrastructure services grew by 28% year-over-year in the third quarter of 2025 alone. AWS remains the most popular cloud service provider (CSP), followed by Microsoft Azure and Google Cloud Platform. 

While cloud infrastructure providers offer similar services, they aren’t identical. It’s important for organisations to understand the ‘shared responsibility model’ when managing their cloud infrastructure in partnership with their CSPs. Misunderstanding this model, or mismanaging and misconfiguring cloud services, can lead to security weaknesses, reduced operational resilience, non-compliance with industry standards or regulatory requirements, and escalating costs. 

It’s key for organisations to establish cloud governance frameworks, setting out how all aspects of their cloud infrastructure should be managed, configured and monitored. A key component of this is ensuring that their use of cloud delivers value for money for their business. The use of FinOps frameworks can help prevent spiralling costs resulting from unused, orphaned, or over-provisioned cloud resources. 
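
To illustrate the kind of check a FinOps process can automate, the sketch below flags unattached or untagged resources from a resource inventory. All resource names, costs, and the tagging policy are invented for the example; a real implementation would pull the inventory from the CSP’s billing and asset APIs:

```python
from dataclasses import dataclass, field

@dataclass
class CloudResource:
    resource_id: str
    monthly_cost: float              # estimated monthly spend
    attached: bool                   # eg, is a volume attached to a running instance?
    tags: dict = field(default_factory=dict)

REQUIRED_TAGS = ("owner", "cost-centre")   # illustrative tagging policy

def flag_for_review(inventory, required_tags=REQUIRED_TAGS):
    """Flag resources that look orphaned (unattached) or breach the tagging
    policy, sorted so the largest potential savings surface first."""
    flagged = [
        r for r in inventory
        if not r.attached or any(t not in r.tags for t in required_tags)
    ]
    return sorted(flagged, key=lambda r: r.monthly_cost, reverse=True)

inventory = [
    CloudResource("vol-001", 120.0, attached=False, tags={"owner": "data-team"}),
    CloudResource("vm-002", 450.0, attached=True,
                  tags={"owner": "web", "cost-centre": "C10"}),
    CloudResource("vol-003", 80.0, attached=True, tags={}),   # untagged
]

for r in flag_for_review(inventory):
    print(r.resource_id, r.monthly_cost)   # vol-001 120.0, then vol-003 80.0
```

The output gives a cost-ranked de-provisioning candidate list, which is exactly the evidence an auditor would expect a FinOps function to produce and act on regularly.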

The landscape became significantly more complex throughout 2025. With most organisations now deploying a mix of AWS, Azure, and GCP, we’re seeing a proliferation of multi-cloud environments. This has created governance ‘seams’, which in turn are leading to security blind spots across cloud environments. Identity management, data protection, and security controls are often inconsistently deployed, configured, and managed across multiple cloud platforms. 

Over the last year, there has also been a rush to deploy Generative AI solutions – many of which are hosted on cloud platforms – introducing significant new data governance challenges and unpredictable cost risks. 

In the financial services sector, new regulations (such as the UK's SS6/24 and the EU's DORA) are now in force. These make firms legally accountable for the operational resilience of their critical third parties (CTPs) – primarily their cloud providers. 

Against the backdrop of challenging economic conditions, the need to deliver value for money has never been higher. This has elevated FinOps from a simple IT task to a C-suite priority.

Internal audit and risk teams must shift their focus from simply reviewing a cloud provider’s SOC 2 report to auditing their organisation's own responsibilities within the ‘shared responsibility model’. 

Key areas of focus should include: 

  • assessing the consistency of security, identity management, and data protection controls across all CSPs they work with, with particular attention being given to points where platforms intersect (‘the seams’). 
  • moving beyond traditional assessments of IT disaster recovery capabilities to focus on the operational resilience measures in place to mitigate “severe but plausible” scenarios – such as the failure of an entire cloud service, CSP, or cloud region – to ensure the business keeps running when it matters most. 
  • assessing the data governance and cost-containment controls specifically for new Gen AI projects to ensure proprietary data is protected, and usage is monitored
  • integrating FinOps into audit plans – for example, evaluating the effectiveness of cost governance, such as resource tagging, budgeting, and controls for de-provisioning unused assets, to ensure cloud spend delivers actual business value. 

UK regulators have made cloud a supervisory priority. The PRA and FCA have designated major cloud providers as critical third parties (CTPs) under a new regime, meaning CSPs like AWS, Microsoft, and Google may face direct resilience testing when serving UK banks. Firms must notify regulators of material cloud arrangements and maintain exit plans. 

Under EU DORA, financial entities must ensure cloud outsourcers meet strict security and continuity standards. Institutions should treat cloud services with the same rigour as in-house IT – conducting robust risk assessments, ongoing monitoring, and securing contractual rights for audit and assurance. 

Organisations can exercise their ‘right to audit’ clause for improved oversight for CSPs.  However, these reviews can be complex, costly, and many organisations may not have the necessary capacity or specialist cloud assurance capabilities to undertake this work. As a result, many financial services organisations are considering a pooled assurance approach. This allows multiple organisations to gain assurance over a CSP, drawing on specialist resources to reduce costs and gain a high degree of comfort.

Generative and autonomous AI

Generative and Agentic AI is reshaping how organisations innovate, operate, and compete. Its ability to create content, automate decisions, and enhance productivity presents transformative opportunities – from personalised customer experiences to accelerated product development.  

These benefits, however, come with significant risks. Organisations must assess their readiness to deploy AI responsibly, ensuring robust governance, data quality, and workforce capability. Ethical use is critical: bias, misinformation, and lack of transparency can erode trust and invite regulatory scrutiny.  

Integrating AI operationally presents significant challenges, particularly in aligning with legacy systems, established controls, and existing risk management frameworks. Achieving success requires embedding AI into the core business strategy, supported by clearly defined roles and accountability structures. Ongoing performance monitoring and risk oversight are essential to ensure reliability and compliance. Moreover, fostering a culture that prioritises ethical use, transparency, and responsible innovation is critical to sustaining trust and maximising long-term value from AI initiatives. 

The risks around Generative and Agentic AI have intensified as adoption accelerates across industries. A recent MIT report [1] revealed that 95% of Generative AI pilots fail to deliver a measurable business impact, largely due to poor integration, lack of contextual learning, and misaligned workflows. Despite billions invested, most organisations remain stuck in pilot phases, unable to scale AI effectively. 

Meanwhile, Agentic AI – autonomous systems capable of reasoning and acting independently – is rolling out faster than governance frameworks can keep up. These agents introduce new risks: from data breaches and hallucinations to unintended decision-making without human oversight. 

To mitigate these risks, organisations must evolve their risk programmes, embed ethical guardrails, and ensure operational readiness. Without this, the promise of AI may remain unrealised – and costly.  

[1] MIT NANDA, Aditya Challapally, Chris Pease, Ramesh Raskar, and Pradyumna Chari, The GenAI Divide: State of AI in Business 2025, July 2025. 

Internal audit and risk teams should update risk frameworks and continually assess AI governance.

Areas of focus should include: 

  • assessing organisational readiness for AI adoption, including governance, skills, and data maturity
  • reviewing AI use cases for value delivery, ethical alignment, and risk exposure
  • evaluating third-party AI tools for transparency, reliability, and contractual safeguards
  • using black box auditing techniques and tools to provide assurance over specific AI use cases and controls around data inputs, model outputs, and decision-making autonomy
  • monitoring AI deployment pace to ensure risk frameworks scale with adoption
  • staying informed on regulatory developments and emerging standards for AI assurance

UK regulators are increasingly focused on AI. The PRA’s model risk management principles cover AI/ML models, requiring strong validation and oversight of algorithmic decision systems. The FCA has signalled that customer-facing AI (eg, robo-advisors, credit scoring) falls under conduct rules, demanding transparency and fairness. 

While no AI-specific regulation exists yet, the Senior Managers Regime implies accountability for governance failures. Firms are appointing AI governance leads and integrating AI programmes into risk frameworks, treating AI models with the same rigour as high-risk financial models for validation and auditability.

Evolving digital regulation and compliance

Evolving digital regulation and compliance is a critical focus for UK and EU organisations as AI adoption surges. In 2025, the EU AI Act and the UK’s Data (Use and Access) Act are reshaping how businesses manage AI and personal data. The EU AI Act introduces a tiered, risk-based framework with strict obligations for high-risk and general-purpose AI systems, including transparency, data provenance, and human oversight. UK regulation, while more principles-based, still demands accountability, fairness, and lawful data use under updated ICO guidance.  

Both regimes emphasise ethical AI deployment, especially where personal data is involved. Organisations must now navigate dual compliance, balancing innovation with stringent privacy and governance standards. Failure to adapt risks incurring fines and reputational damage. Proactive engagement with these evolving rules is essential to build trust, ensure resilience, and unlock AI’s full potential.

Digital regulation and compliance risks expanded rapidly across the UK and EU in 2025, affecting sectors beyond traditional financial services. The EU AI Act and the UK’s Data (Use and Access) Act introduce stricter obligations around AI governance, personal data use, and automated decision-making, with many provisions already in force and more arriving in 2026. However, proposals to soften and delay the EU AI Act’s requirements are currently being debated in the EU Parliament.  

This marks a shift: organisations in retail, manufacturing, healthcare, and energy now face compliance demands previously reserved for financial service-regulated industries. These include risk assessments, transparency requirements, and human oversight for AI systems, especially those using personal data.  

The pace of change is accelerating, and many businesses are unprepared. Compliance is no longer optional – it’s becoming a market access requirement, with penalties for non-compliance reaching €35 million or 7% of global turnover. Organisations must act now to embed AI risk frameworks and align with evolving UK and EU standards. 

Internal audit and risk teams should strengthen compliance monitoring, update controls, and align frameworks with evolving digital regulations. 

Areas of focus should include: 

  • reviewing how the organisation is taking proactive steps to comply with proposed legal and regulatory requirements for the countries that it operates in
  • assessing readiness for upcoming regulatory requirements, and compliance with existing obligations of the EU AI Act or similar
  • auditing data flows to confirm lawful collection, processing, and retention of personal data
  • monitoring regulatory developments to stay ahead of upcoming compliance deadlines
  • updating risk frameworks to reflect digital-specific risks and regulatory expectations
  • investing in digital skills and tools to audit and assess complex technologies like AI and cloud platforms
  • engaging with legal and compliance teams to interpret and operationalise new digital regulations
  • collaborating with IT and data teams to ensure alignment on controls and compliance. 

Financial services firms must navigate overlapping tech regulations. UK banks comply with GDPR/data protection rules and PRA/FCA operational resilience rules, including senior management systems and controls (SYSC) for IT risk management. Regulators increasingly use enforcement tools for digital failings – such as Section 166 reviews for poor data governance and fines for weak cyber security under Principle 2. Global institutions also face cross-border challenges, eg, ensuring trading apps meet the EU Digital Services Act and FCA conduct standards.

Critical third-party supply chain risk

Critical third-party risk management has evolved beyond traditional vendor oversight into a high-stakes discipline centred on systemic ‘concentration risk’. This risk arises when multiple organisations, often within the same sector, become heavily reliant on a single, non-substitutable third party such as a major cloud provider like AWS or Azure. In such scenarios, a failure at one critical third party can trigger widespread disruption across industries.

The CrowdStrike outage in 2024 underscored the far-reaching impact of fourth-party dependencies, while some of the cyber-attacks in 2025 caused major 'reverse' supply chain failures. In the worst cases, thousands of suppliers were disrupted, pushing some to the brink of collapse and exposing the fragility of interconnected ecosystems. These events highlight a fundamental truth: outsourcing services doesn’t outsource risk. The mandate has shifted from simply managing vendor contracts to ensuring organisational resilience against the potential failure of a critical third party. 

In 2025, critical third-party supply chain risk intensified due to three major shifts. First, regulatory frameworks have tightened globally, mandating lifecycle-based vendor risk management and emphasising the integration of third-party risk considerations into an organisation’s operational resilience strategy.  

Second, cyber threats have evolved beyond software attacks to target cloud providers’ physical infrastructure (eg, data centre hardware and network devices) and multi-tier vendor ecosystems, with recent incidents highlighting systemic vulnerabilities. For example, last year we saw attackers taking advantage of a zero-day vulnerability in SAP NetWeaver to upload malicious files and compromise ERP systems globally.  

Third, the adoption of AI by critical third-party suppliers has introduced new and complex risks into companies’ supply chains. Organisations are responding by moving away from traditional, compliance-based approaches to third-party risk management toward continuous monitoring and AI-enabled governance. This transition marks a shift from reactive assurance to proactive resilience, reflecting the growing need to manage dynamic, interconnected risks in real time. 

Managing critical third-party risk today requires a proactive, integrated approach between risk management and internal audit teams, and requires collaboration with broader supply chain risk initiatives, including ESG, AML, and modern slavery compliance.  

Key areas of focus should include: 

  • reviewing and helping define the approach for identifying and categorising third parties according to their potential impact and associated risk, taking into account factors such as business continuity, cyber security threats, and data security
  • conducting concentration and systemic risk analysis by assessing reliance on dominant providers (eg, cloud platforms) and mapping fourth-party relationships to uncover hidden vulnerabilities
  • ensuring procurement functions are engaged and that control evaluations for critical third parties are integrated into the onboarding process
  • evolving from periodic checks to continuous, AI-enabled monitoring of critical third parties' control environments, complementing traditional methods such as service auditor reporting (eg, SOC 2 reports), threat-led penetration testing, and maintaining up-to-date attestations and compliance requirements
  • ensuring appropriate measures are taken when ending relationships with suppliers (returning or destroying data, severing user access, and so on). 
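
The concentration and fourth-party mapping described above can be illustrated with a simple dependency roll-up. All service and vendor names below are invented; the point is that a provider hidden behind fourth-party relationships can dominate the dependency count:

```python
from collections import Counter

# Illustrative data only: critical services -> direct third parties,
# and third parties -> their own key suppliers (fourth parties).
SERVICE_DEPS = {
    "payments": ["acme-cloud", "kyc-vendor"],
    "online-banking": ["acme-cloud"],
    "card-issuing": ["card-bureau"],
}
VENDOR_DEPS = {
    "kyc-vendor": ["acme-cloud"],    # hidden fourth-party dependency
    "card-bureau": ["acme-cloud"],
}

def transitive_providers(vendor, vendor_deps, seen=None):
    """All providers a vendor ultimately depends on, including itself."""
    seen = set() if seen is None else seen
    if vendor not in seen:
        seen.add(vendor)
        for sub in vendor_deps.get(vendor, []):
            transitive_providers(sub, vendor_deps, seen)
    return seen

def concentration(service_deps, vendor_deps):
    """Count how many critical services depend, directly or via a
    fourth party, on each ultimate provider."""
    counts = Counter()
    for vendors in service_deps.values():
        providers = set()
        for v in vendors:
            providers |= transitive_providers(v, vendor_deps)
        counts.update(providers)
    return counts

# The dominant provider is the concentration-risk hotspot.
print(concentration(SERVICE_DEPS, VENDOR_DEPS).most_common(1))
```

Here every critical service ultimately depends on `acme-cloud`, even though only two of the three use it directly – the kind of hidden single point of failure this analysis is designed to surface.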

Financial services regulators have imposed specific requirements in PRA Supervisory Statement SS1/21 and FCA rules on outsourcing that UK banks and insurers must follow – boards must approve material outsourcing, regulators must be notified of critical engagements, and robust exit plans must be maintained for each critical supplier.

Transformation programmes

Project Management Offices (PMOs) are increasingly being asked to ensure that significant initiatives – including those driven or supported by technology – successfully deliver their intended business value.  

Traditionally, the PMO focused primarily on timelines, budgets, and deliverables, but these are now being asked to take a holistic view, aligning technology initiatives with overarching business goals and continuously tracking the realisation of benefits throughout the project lifecycle. This involves closer collaboration with stakeholders, the establishment of key performance indicators, and the implementation of robust monitoring mechanisms to measure outcomes against expected value.  

By acting as a bridge between technology and business strategy, this enhanced PMO role helps ensure that investments in technology aren’t only delivered on time and within budget but also generate tangible benefits such as increased efficiency, revenue growth, or improved customer satisfaction.

During 2025, we observed a growing trend in the expansion of the PMO remit, which has significant implications for internal audit and risk functions. 

Traditionally, internal audit teams have focused on assuring the achievement of timelines/milestones, risk management, and the integrity of functional and non-functional delivery. However, with PMOs now overseeing the delivery and measurement of value from major technology investments, internal auditors must broaden their scope to include the assessment of value realisation processes. This means understanding how benefits are defined, translated into functional and non-functional requirements, tracked, and reported, and ensuring that these processes are robust, transparent, and aligned with organisational objectives.  

Internal audit functions will need to develop new expertise to evaluate whether the PMO is effectively bridging the gap between technology delivery and business outcomes, and whether the methodologies used for benefits tracking are sound and resistant to manipulation or oversight.

To effectively assess the impact of the PMO in driving value, risk and internal audit functions should adapt their methodologies and frameworks. This includes: 

  • aligning evaluation criteria with organisational objectives, which ensures that assessments are directly connected to the strategic goals and priorities of the business
  • assessing PMO-specific performance indicators, such as benefits realisation, project delivery effectiveness, stakeholder satisfaction, and process improvements
  • regularly reviewing and updating risk assessment techniques to capture emerging risks related to project and programme management. 

This multifaceted approach empowers risk and internal audit teams to provide meaningful assurance and advisory support, ultimately enhancing the value delivered by the PMO.

There’s also the need to consider the governance structures, such as:

  • the clarity of roles and responsibilities
  • data quality in value tracking
  • the regularity and accuracy of value reporting  
  • oversight in delivering functionality and security by design.  

Furthermore, risk and internal audit should move towards a more consultative approach, offering insights and recommendations to strengthen the PMO’s processes while maintaining independence. By doing so, internal audit can provide assurance that the organisation’s technology investments are delivering the intended value, and identify areas for improvement in value realisation practices.

The FCA requires firms to show transformations deliver fair value under Consumer Duty, tracking benefits and outcomes throughout.

In 2025, firms submitted detailed value realisation reports during reviews, with VROs central to digital, pricing, and customer experience programmes. Regulators also scrutinise resilience after failures - nine UK banks logged 800+ hours of outages, Barclays paid £7.5 million, and Vocalink was fined £11.9 million for infrastructure weaknesses.  

Banks must notify the PRA/FCA of core system changes, and regulators may demand safeguards. The senior manager for technology (SMF24) is personally accountable for disruptions.

Data governance

Data governance remains a foundational priority for organisations. With the rise of AI, data governance is no longer just about quality and access – it’s about trust, accountability, and compliance. Effective governance ensures that data used in AI models is accurate, ethical, and legally sourced, supporting responsible innovation and reducing reputational and regulatory risk. 

As privacy regulations tighten across the UK and EU, especially around personal data and automated decision-making, organisations must demonstrate control and transparency over how data is collected, processed, and used. Poor governance undermines decision-making integrity, exposes organisations to fines, and erodes stakeholder confidence. 

Strong data governance enables better risk management, supports auditability, and creates a resilient foundation for digital transformation. It’s not just a technical concern – it’s a strategic enabler.

Data governance risks have intensified due to regulatory reforms and the rapid integration of AI. The UK’s Data (Use and Access) Act and the EU’s Data Act introduce new obligations around data portability, transparency, and automated decision-making, with many provisions becoming enforceable by mid-2026.  

The rise of AI has also shifted governance from a back-office function to a frontline risk discipline, requiring tighter controls over data lineage, quality, and ethical use.  

Fragmented data landscapes and siloed ownership models are emerging as key vulnerabilities. Organisations must now modernise governance frameworks to ensure decision-making integrity, regulatory compliance, and AI readiness – or risk falling behind.

Internal audit and risk teams should provide assurance over the organisation’s data quality standards, privacy standards, and controls, and whether these meet new data governance obligations.  

Areas of focus should include: 

  • reviewing data governance frameworks for alignment with UK and EU regulatory updates
  • conducting a comprehensive assessment of the accuracy, completeness, consistency, and timeliness of data used in AI models and analytics
  • performing a structured evaluation of current data management practices against recognised standards and frameworks (eg, DAMA-DMBOK, NIST AI RMF)
  • testing the effectiveness of data governance controls, with a focus on policy, standards and quality, oversight, compliance, data architecture, issue management, data culture, data literacy, and data asset valuation
  • evaluating how data is managed throughout its lifecycle – from creation and usage to storage and eventual disposal – to ensure compliance, efficiency, and risk mitigation
  • reviewing oversight mechanisms to ensure that external vendors and partners manage data in accordance with contractual obligations and regulatory standards. 
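The data quality dimensions listed above (accuracy, completeness, consistency, timeliness) can be tested programmatically as part of an audit work programme. A minimal sketch in Python, assuming an illustrative customer-records dataset with hypothetical field names and thresholds:

```python
from datetime import datetime, timezone

# Hypothetical customer records; field names and values are illustrative assumptions.
records = [
    {"id": 1, "email": "a@example.com", "country": "GB", "updated": "2026-01-10"},
    {"id": 2, "email": None,            "country": "UK", "updated": "2024-03-02"},
    {"id": 3, "email": "c@example.com", "country": "GB", "updated": "2026-02-01"},
]

def completeness(rows, field):
    """Share of rows where the field is populated."""
    return sum(1 for r in rows if r.get(field)) / len(rows)

def consistency(rows, field, allowed):
    """Share of rows whose value matches an agreed reference set."""
    return sum(1 for r in rows if r.get(field) in allowed) / len(rows)

def timeliness(rows, field, max_age_days, today):
    """Share of rows updated within the agreed freshness window."""
    fresh = 0
    for r in rows:
        age = (today - datetime.fromisoformat(r[field]).replace(tzinfo=timezone.utc)).days
        if age <= max_age_days:
            fresh += 1
    return fresh / len(rows)

today = datetime(2026, 2, 15, tzinfo=timezone.utc)
scores = {
    "completeness(email)": completeness(records, "email"),
    "consistency(country)": consistency(records, "country", {"GB", "FR", "DE"}),
    "timeliness(updated)": timeliness(records, "updated", 365, today),
}
for name, score in scores.items():
    status = "PASS" if score >= 0.9 else "FLAG"
    print(f"{name}: {score:.0%} {status}")
```

Flagged dimensions (here, a missing email, a non-standard country code, and a stale record) would feed the issue-management and data asset valuation reviews noted above.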

The FCA has intensified scrutiny of data governance in financial services, particularly around AI and ESG disclosures. In 2025, several firms underwent S166 reviews focused on data lineage and documentation. The regulator expects firms to demonstrate robust governance over data flows, including third-party data sources, and to ensure transparency in automated decision-making. Failure to meet these expectations may result in enforcement action or supervisory intervention in 2026.

Zero trust security

Zero trust security is a modern cyber security framework that operates on the principle of ‘never trust, always verify’. It assumes that no user, device, or system, whether inside or outside the corporate network, is inherently trustworthy. Instead, it enforces rigorous identity verification controls, the principle of least-privilege access, network micro-segmentation (a technique that divides a network into secure zones and enforces security at multiple internal points, rather than relying on a single perimeter), and continuous monitoring to minimise risk.  

The widespread adoption of cloud computing, remote working, and increasingly sophisticated AI-driven cyber-attacks has rendered traditional network perimeter-based security models increasingly obsolete. As a result, zero trust has emerged as a strategic priority for businesses seeking to protect sensitive data and critical systems from external and internal threats. This is no longer merely a technical shift; it’s a strategic imperative in how cyber risks are managed today. 

In the last year, zero trust security has transitioned from a strategic initiative to a baseline architecture for enterprise security. The rise in AI-driven ransomware, phishing-as-a-service, and insider threats has accelerated the adoption of zero trust as a key defensive measure, with organisations now treating it as the default security model.  

Recent developments include the widespread replacement of VPNs with Zero Trust Network Access (ZTNA) – which provides secure, identity-based access to applications without exposing the entire network – for remote and hybrid workforces, the integration of AI into real-time threat detection and adaptive access controls, and the expanded use of micro-segmentation to prevent lateral movement within networks.  

Organisations are also extending zero trust principles to cloud platforms, third-party ecosystems, and operational technology environments. As more organisations embrace the continuous adaptive trust model (a security model that continuously evaluates and adapts access based on real-time risk signals), the focus has shifted to dynamic, risk-based access decisions that strengthen security and reduce the impact of breaches in today’s evolving threat landscape. 
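The dynamic, risk-based access decisions described above can be illustrated with a small policy-engine sketch. The signal names, weights, and thresholds below are assumptions for illustration, not any vendor's implementation:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_role: str             # least-privilege role claimed by the caller
    device_compliant: bool     # endpoint posture check (eg, patched, encrypted)
    geo_anomaly: bool          # login location deviates from the user's baseline
    resource_sensitivity: int  # 1 (low) .. 3 (high)

def risk_score(req: AccessRequest) -> int:
    """Combine real-time signals into a simple additive risk score."""
    score = 0
    if not req.device_compliant:
        score += 2
    if req.geo_anomaly:
        score += 2
    score += req.resource_sensitivity - 1
    return score

def decide(req: AccessRequest) -> str:
    """'Never trust, always verify': every request is evaluated afresh."""
    score = risk_score(req)
    if score >= 4:
        return "deny"
    if score >= 2:
        return "step-up"  # require additional verification (eg, MFA)
    return "allow"

print(decide(AccessRequest("analyst", True, False, 1)))   # allow
print(decide(AccessRequest("analyst", True, True, 2)))    # step-up
print(decide(AccessRequest("analyst", False, True, 3)))   # deny
```

The point of the sketch is that access is never granted on network location alone: each request is re-scored from current signals, so a compliant device accessing low-sensitivity data passes, while an anomalous login to a sensitive system is denied outright.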

As zero trust security matures across organisations, risk and internal audit functions should play a critical role in ensuring its effectiveness, alignment with business objectives, and compliance requirements.  

Key actions include: 

  • assessing zero trust maturity using frameworks such as NIST 800-207 or the Cybersecurity and Infrastructure Security Agency’s (CISA’s) zero trust maturity model, and mapping controls to compliance requirements (eg, GDPR, HIPAA, ISO 27001)
  • ensuring senior management and the board embed zero trust principles into enterprise risk management strategies
  • evaluating identity and access controls across users, devices, networks, applications, and data
  • considering using real-time behavioural analytics to assess and strengthen privileged access management arrangements
  • reviewing network segmentation, including micro-segmentation and ZTNA for remote users
  • verifying continuous monitoring and logging across endpoints, cloud services, applications, and user workarounds (eg, legacy network segments left open)
  • assessing third-party and supply chain access for alignment with zero trust principles.  

Although zero trust is not mandated by name, frameworks such as the European Central Bank’s (ECB’s) cyber resilience expectations and US Federal Financial Institutions Examination Council (FFIEC) guidelines implicitly require these principles.

In 2025, large banks reported that regulators examined controls around insider access and network segmentation, which are core zero trust elements. SWIFT’s customer security programme now includes controls aligned with zero trust, such as isolating payment systems. 

The trend is for zero trust by design to protect critical assets like payment systems and customer data, supporting regulators’ push for continuous monitoring and strong access governance to prevent breaches that could threaten financial stability.

Deepfakes and disinformation threats

Deepfakes are AI-generated media - video, audio, or images that convincingly mimic real people or events. Originally developed for positive use in entertainment and education, they’re now widely exploited by cybercriminals to create deceptive content.  

The UK Government calls deepfakes ‘the greatest challenge of the online age’. 

Deepfakes are used to enable fraud (eg, impersonating a CEO’s voice to approve transfers), reputational damage (fabricated videos of executives), or social engineering. Disinformation can damage firms by circulating false rumours to depress stock prices or erode trust (eg, fake news about product defects).  

The UK’s National Cyber Security Centre (NCSC) reports deepfake use in fraud has surged 400% in 18 months, driven by free or low-cost AI tools. With an estimated 8 million deepfake videos circulating in 2025 (up from 500,000 in 2023), and human detection rates as low as 24% for high-quality videos, organisations face a new frontier in cyber defence. 

In 2026, deepfake threats facing the UK will continue to rise as technological advancements reduce the barriers to entry. There will also be an increase in the availability of solutions facilitating real-time face-swapping and voice cloning during live video calls or streams. 

Gartner predicted that by 2026, 30% of enterprises will move beyond traditional ID verification methods, as these are considered unreliable due to the rise of deepfakes. These developments necessitate urgent integration of AI-driven detection tools and alignment with evolving regulatory expectations.  

Global regulators, including in the US and EU, have flagged systemic risk from large-scale generative AI fraud, with projected losses reaching USD 40 billion. 

Transparency obligations under the EU AI Act require mandatory disclosure of AI-generated content. Providers and deployers of AI systems creating or manipulating content (including deepfakes) must clearly inform users that the content is artificially generated or altered. This applies to images, audio, and video, and especially to content intended to inform the public on matters of interest (eg, news, political campaigns). 

Internal audit and risk teams should confirm that layered controls are in place to counter synthetic media risks. 

Key focus areas include: 

  • verifying the use of out-of-band confirmation for high-risk transactions such as payment approvals or vendor changes
  • ensuring use of multi-factor authentication for executive decisions on platforms like Teams or Zoom
  • confirming synthetic media threats are embedded in enterprise risk frameworks, with boards receiving UK-specific threat intelligence updates
  • reviewing the deployment of AI-based detection tools, benchmarked against initiatives like the Alan Turing Institute and ACE trials
  • evaluating the use of voice authentication controls such as 'safe phrases' and layered checks
  • confirming compliance with regulatory requirements, including the EU AI Act 2024, which mandates transparency for synthetic content
  • confirming corporate affairs and investor relations have protocols for immediate disclosure.
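The out-of-band confirmation control in the first bullet can be sketched as a simple approval gate. The threshold, field names, and workflow below are hypothetical, purely to illustrate the control logic an auditor would expect to see:

```python
# Hypothetical out-of-band confirmation gate for high-risk payment requests.
HIGH_RISK_THRESHOLD = 50_000  # illustrative threshold in GBP

def requires_out_of_band(request: dict) -> bool:
    """A request is high-risk if it exceeds the threshold or alters payee details."""
    return request["amount"] >= HIGH_RISK_THRESHOLD or request["changes_payee"]

def approve(request: dict, oob_confirmed: bool) -> str:
    """Release high-risk requests only once confirmed on a second, independent
    channel (eg, a call-back to a number held on file) - never the channel the
    request arrived on, since that channel may itself be the deepfake."""
    if requires_out_of_band(request) and not oob_confirmed:
        return "held: awaiting out-of-band confirmation"
    return "released"

print(approve({"amount": 75_000, "changes_payee": False}, oob_confirmed=False))
# held: awaiting out-of-band confirmation
print(approve({"amount": 75_000, "changes_payee": False}, oob_confirmed=True))
# released
```

The design choice that matters is the independence of the confirmation channel: a convincing cloned voice on the original call cannot satisfy a control that insists on a separate call-back to pre-registered contact details.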

The FCA requires financial services firms to manage AI-driven fraud within existing frameworks (eg, Consumer Duty, operational resilience) and to invest in fraud prevention and cyber resilience. 

Regulators like the FCA and US Securities and Exchange Commission (SEC) warn that deepfakes and false information can manipulate stock prices or cryptocurrency markets. In March 2025, the SEC’s Investor Advisory Committee flagged AI-enabled fraud as a major investor threat. Financial services firms now need to embed disinformation scenarios in market abuse surveillance, requiring trading desks to verify unusual news before acting. Banks also need to train staff to manage client concerns during false solvency rumours, while regulators expect rapid public clarification aligned with exchange rules.