
AI report signals future financial services regulation

Jamie Crossman-Smith

A joint report on artificial intelligence in financial services highlights the data, model risk and governance obstacles that firms must navigate. Jamie Crossman-Smith discusses the key points and outlines how this is the first step towards introducing regulation.

The Bank of England (BoE) and the Financial Conduct Authority (FCA) published a report on artificial intelligence (AI) in financial services, exploring the numerous barriers to adoption, challenges, and risks related to the use of AI. The document lays out how stakeholders can look to address these issues and mitigate potential risks.

One of the key findings is that AI is powerful and rapidly evolving and that the sector is already leveraging it in many ways. AI can bring opportunities and benefits to stakeholders, but can also amplify certain risks, as well as give rise to new risks and organisational challenges. As models become increasingly sophisticated, with improved speed, scale and complexity – including autonomous decision-making – the sector must consider the impact of using AI.

While the report does not set out any new regulatory guidance or expectations for the sector, supervisory bodies are increasingly concerned about the use of AI. Firms should take this as an early signal that they should look to identify all AI-linked processes and how these interact with the organisation, both internally and externally.

While many stakeholders struggle to understand and explain AI within financial services, this report gives a rough blueprint of the issues firms should look to address. These include the availability and quality of data, the risks of using AI models, the ability to explain outputs, and the development of governance frameworks for AI.

Data drives AI

The BoE and the FCA noted the close link between AI and data. The availability of data has contributed to the growth of AI and helps train models. Many issues around AI stem from the data itself, rather than from the systems or algorithms.

As such, the quality of the data continues to be essential. Firms must recognise the value of understanding the attributes of the data - including provenance, completeness, and how representative they are. The processes around how the data are collected and handled are also important, as inputs can change over time. Organisations must ensure they document, version and monitor these processes.
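
As an illustration only, the sketch below (in Python, using pandas) shows the kind of basic completeness and provenance checks a firm might automate. The dataset, column names and version label are hypothetical and are not drawn from the report.

import pandas as pd

def profile_dataset(df: pd.DataFrame, source: str, version: str) -> dict:
    """Record basic provenance and completeness attributes for a dataset."""
    return {
        "source": source,        # provenance: where the data came from
        "version": version,      # version stamp so results can be reproduced
        "rows": len(df),
        # share of non-missing values per column
        "completeness": (1 - df.isna().mean()).round(3).to_dict(),
    }

# Illustrative usage with a hypothetical loan-application extract
loans = pd.DataFrame({
    "income": [32000, 45000, None, 51000],
    "age": [34, 29, 41, None],
    "defaulted": [0, 0, 1, 0],
})
print(profile_dataset(loans, source="core_banking_extract", version="2022-02-01"))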

Firms should also consider AI's ability to sift through unstructured data, which may come from different sources and can produce unique insights. However, this 'alternative' data can also introduce data quality issues, as it is often obtained from third-party providers. This raises additional challenges around quality, provenance and, in some cases, legality.

The increasing use of data, and subsequently of AI, raises concerns over the appropriate governance structures within the sector. As firms align their internal and external data strategies with the use of AI, they may have to streamline the standards and structures that govern the use of this information.

Better AI could raise unknown risks

The report noted that the risks related to the use of AI in the financial sector are not new. What is concerning is the scale and breadth of AI use, the speed at which systems work, and the complexity of the underlying models. These can exacerbate existing problems or present unseen issues.

The supervisory bodies said that complexity is the core challenge. They noted that this includes the complexity of the inputs (such as many input layers and dimensions), complex relationships between variables, the intricacies of models themselves (e.g., deep learning models), and the types of outputs, which may be actions, algorithms, unstructured and/or quantitative. This becomes even more complex when several AI models operate within a network.

Explainability will be key here. Firms must be able to explain outputs clearly, not just demonstrating the features and parameters of AI models, but articulating how their models and systems affect end consumers. This requires having the right technical and technological expertise, as well as embedding understanding of the systems into various stakeholder groups.
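
By way of illustration, one widely used technique for articulating which inputs drive a model's output is permutation feature importance. The sketch below uses scikit-learn on synthetic data; neither the technique nor the library is prescribed by the report.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a firm's own model and data
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much performance drops:
# features whose shuffling hurts the model most are driving its output
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"feature_{idx}: importance {result.importances_mean[idx]:.3f}")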

In addition, firms will have to carry out ongoing monitoring and reporting of their AI models' performance. This will minimise errors and ensure that these models behave as expected.
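
A minimal sketch of what such ongoing monitoring could look like is shown below: current performance is compared with a validated baseline and flagged for review if it degrades beyond a tolerance. The metric, baseline and tolerance are illustrative assumptions, not figures from the report.

from sklearn.metrics import accuracy_score

BASELINE_ACCURACY = 0.92   # hypothetical accuracy recorded at model validation
ALERT_TOLERANCE = 0.05     # hypothetical tolerance before escalation is needed

def check_model_health(y_true, y_pred) -> dict:
    """Compare live performance with the validated baseline and flag drift."""
    current = accuracy_score(y_true, y_pred)
    needs_review = current < BASELINE_ACCURACY - ALERT_TOLERANCE
    return {"current_accuracy": round(current, 3), "needs_review": needs_review}

# Illustrative periodic check on recently labelled outcomes
print(check_model_health([1, 0, 1, 1, 0, 1], [1, 0, 0, 1, 0, 0]))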


Leverage existing governance systems for AI

Although AI models and systems will become increasingly autonomous, they will still need human supervision. The autonomous decision-making capability of AI systems raises questions over the appropriate framework to govern the technology and its outcomes, but human assurance can mitigate those risks if the right controls are in place. These frameworks will support effective accountability and responsibility.

Firms can leverage existing governance structures as the foundation for building robust frameworks for AI models and systems – these must be embedded into other organisational risk and governance processes and will also interact with them closely.

Firms should remember to demonstrate the risk and materiality of the specific use case in their structures, even when existing governance frameworks are applied to AI. They should seek to leverage and adapt existing governance structures to manage AI, including model risk management (MRM) and data governance frameworks, as well as best practice such as DCAM and DMBoK.

To produce these standards, firms could look to set up internal groups that manage AI governance and ethics with one or more senior managers holding the main responsibility for the outputs, compliance, and execution against the governance standards. Firms must develop transparency around AI – which is driven by explainability. Firms cannot simply explain the theory and deem everything else too complicated; they must be able to show their end-to-end understanding and use of AI and how it supports strategic and commercial goals.

The report stated that governance of AI is more effective if it includes a diversity of skills and perspectives, as well as a full range of functions and business units. Taking a cross-functional approach helps manage the complexity of systems and the associated data challenges. Overall, firms should seek to raise firm-wide understanding and awareness of the benefits and risks of AI.

What’s next for AI in financial services?

While the conversation on the safe adoption of AI is only just beginning in the UK financial sector, firms should lay the foundations in their organisations now for future regulation. Supervisory bodies are keen to maintain dialogue between private and public groups to support the responsible use of AI.

We already know that the BoE and the FCA will publish a discussion paper on AI later this year to further explore what role a regulator might play, including whether an industry body for AI could support the voluntary development of, and adherence to, new codes of conduct, as well as an auditing regime.

Historical patterns of regulation show that regulators often encourage self-regulation through these types of measures as an intermediate step towards fully fledged regulation. Combined with increasing scrutiny over data protection, these actions pave the way towards regulatory supervision of AI in financial services.

Early and safe adopters of AI could see the biggest benefits. The first step will be to identify all AI-linked systems and processes, which allows firms to map out opportunities and risks.

To discuss how to prepare for the future of AI in financial services or learn more about these tools and how we can help you make the most of the technology available, contact Jamie Crossman-Smith.