
Do you have an audit framework in place for AI?


As organisations continue to adopt Artificial Intelligence (AI), internal audit is expected to offer assurance over its application and use. But does your internal audit team have an appropriate AI auditing framework to effectively assess the issues and offer robust challenge?

By Stephen Thompson, James Durrant and Timothy Clifton-Wright 

Getting the right skills in place

While AI and machine learning have been around for some time, rapid advances over the last decade have driven a boom in adoption, leaving a significant skills gap in the sector. AI is inherently complex, and organisations applying it must focus on training and upskilling their staff – not only to use it effectively, but also to gain assurance over its application.

Drawing on these skill sets, organisations should design and implement an effective AI auditing framework. The framework will help maintain oversight, ensure AI meets regulatory expectations, and confirm it is applied as intended.

Creating an AI audit framework

An effective AI auditing framework should cover the following areas:

Strategy and governance

As with any new technology or business change, AI should be applied within the context of the wider business strategy and corporate objectives. This should be considered at board or senior management level, with appropriate structures and processes in place to manage and monitor the use of AI across the firm.

Adequate communication plans are essential to make sure all stakeholders understand how the AI will be applied. This includes information on how employees may be disrupted by AI solutions, and any available options for retraining. Organisations should also establish mechanisms for customer feedback on the use of AI, including any questions or complaints regarding the use of their data; mishandling such requests risks fines under the GDPR.

AI technology control environment

All AI solutions, and the data they rely upon, reside in infrastructure, whether hosted on-premises or in the cloud. As such, this infrastructure needs to be properly managed, maintained and secured.

To achieve this, teams supporting AI infrastructure must have the right skill set, with additional training available as required. Any third parties hosting or supporting AI infrastructure or solutions need to be proactively managed, and only authorised individuals should be given access.

Data governance and quality

All AI is underpinned by data. If the quality of this data is poor, or if multiple systems holding data are not well integrated, this could lead to inaccurate AI outputs and predictions, which can affect decision-making processes.

To avoid this, data cleansing is vital, and many organisations are already working to transform their data environments for effective AI implementation. Such strategies need careful planning, alignment with business objectives, and ongoing monitoring to deliver the intended value.
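
To make this concrete, below is a minimal sketch of the kind of automated data-quality checks a data or audit team might run before a dataset is used to train a model. The thresholds, column names and sample data are illustrative assumptions, not prescriptions from any framework.

    import pandas as pd

    # Hypothetical thresholds an audit team might agree with the business;
    # they are illustrative assumptions, not regulatory requirements.
    MAX_MISSING_RATE = 0.05    # at most 5% missing values per column
    MAX_DUPLICATE_RATE = 0.01  # at most 1% duplicate records

    def data_quality_report(df: pd.DataFrame) -> dict:
        """Return simple quality metrics that can be logged as audit evidence."""
        missing_rates = df.isna().mean()         # share of missing values per column
        duplicate_rate = df.duplicated().mean()  # share of fully duplicated rows
        return {
            "rows": len(df),
            "columns_over_missing_threshold":
                missing_rates[missing_rates > MAX_MISSING_RATE].to_dict(),
            "duplicate_rate": float(duplicate_rate),
            "duplicate_check_passed": duplicate_rate <= MAX_DUPLICATE_RATE,
        }

    if __name__ == "__main__":
        # Illustrative customer dataset with deliberate quality issues
        df = pd.DataFrame({
            "customer_id": [1, 2, 2, 4],
            "income": [52000, None, None, 61000],
        })
        print(data_quality_report(df))

Running such checks on a schedule, and retaining the reports, gives internal audit a trail showing that data quality was actively monitored rather than assumed.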

AI model development arrangements

The AI development lifecycle is much the same as the software development lifecycle, with additional considerations around monitoring and autonomous decision-making. The development of AI solutions must be formally managed and controlled, even in Agile environments, to deliver the intended results and meet quality standards. This relies on an appropriate and consistent AI development lifecycle for the production of all AI solutions.

The lifecycle should consider the explainability of any solutions implemented and any exceptional circumstances, such as:

  • when a non-explainable model is required, such requirements should be well documented and understood
  • exception-handling routines should be established to manage situations where an AI model’s activities fall outside its expected operating parameters (a minimal sketch follows this list)
  • access to source code and development environments should be carefully administered so that unauthorised individuals cannot access them and inappropriate changes cannot be made.
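
As a minimal sketch of such an exception-handling routine, the code below routes a model output to human review whenever it falls outside agreed operating parameters. The parameter names and thresholds are hypothetical assumptions; real values would come from the model's documented risk appetite.

    from dataclasses import dataclass

    # Hypothetical operating parameters agreed between the business and the
    # model owners; illustrative values only.
    CONFIDENCE_FLOOR = 0.80
    SCORE_RANGE = (0.0, 1.0)

    @dataclass
    class ModelOutput:
        score: float       # e.g. a credit-risk or fraud score
        confidence: float  # the model's own confidence in the prediction

    def handle_prediction(output: ModelOutput) -> str:
        """Return 'auto' for in-parameter outputs, 'manual_review' otherwise.

        Exceptions are recorded so internal audit has a trail of every case
        that fell outside the expected operating envelope.
        """
        lo, hi = SCORE_RANGE
        out_of_range = not (lo <= output.score <= hi)
        low_confidence = output.confidence < CONFIDENCE_FLOOR
        if out_of_range or low_confidence:
            # In a real system this would write to an audit log and raise an
            # alert; printing keeps the sketch self-contained.
            print(f"EXCEPTION: score={output.score}, confidence={output.confidence}")
            return "manual_review"
        return "auto"

    if __name__ == "__main__":
        print(handle_prediction(ModelOutput(score=0.42, confidence=0.95)))  # auto
        print(handle_prediction(ModelOutput(score=0.42, confidence=0.55)))  # manual_review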

Find out more about building an AI explainability model >>

Ethics and human bias

When developing and testing solutions, including the data and models underpinning them, AI developers should pay careful attention to ethics and human bias. This requires a comprehensive understanding of potential sources of bias: the team itself, the data, the implementation methods, and the effect of unconscious bias, which is harder to mitigate.
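
One way to make bias measurable is to compare outcome rates across groups. The sketch below computes a simple demographic-parity gap on hypothetical lending decisions; the column names, groups and the 5% tolerance are illustrative assumptions rather than thresholds any framework mandates.

    import pandas as pd

    # Hypothetical decisions produced by an AI lending model.
    decisions = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B", "B"],
        "approved": [1,   1,   0,   1,   0,   0,   0],
    })

    # Approval rate per group and the demographic-parity gap: the difference
    # between the highest and lowest group approval rates.
    rates = decisions.groupby("group")["approved"].mean()
    parity_gap = rates.max() - rates.min()

    print(rates)
    print(f"Demographic parity gap: {parity_gap:.2f}")

    # A tolerance the business might set for investigation; 0.05 is an
    # illustrative assumption, not a regulatory threshold.
    if parity_gap > 0.05:
        print("Gap exceeds tolerance - flag for ethics review")

Tracking a metric like this over time, alongside qualitative review, gives auditors evidence of whether bias is being actively monitored rather than merely acknowledged.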

The ethics and trustworthiness of using AI to support decision-making should be considered before adopting a solution, and decisions should be transparent, explainable and auditable wherever possible. The solution should also be robust, producing reliable, repeatable outputs. Furthermore, anyone subject to AI processes should be made aware of this in accordance with the GDPR, and their data should not be processed, by AI or otherwise, for any purpose other than that for which it was originally obtained.

Find out more about trustworthy AI >>

What to do now?

Internal audit functions will be required to provide assurance over the risks associated with their firm’s use of AI, and over the design, performance, monitoring and governance of AI-based processes. In light of this, they need to start preparing for the introduction of AI: gaining an understanding of how AI operates and upskilling so that they can provide advice and assurance to the business on the challenges and risks AI will bring. Given how quickly AI is evolving, this will be an ongoing task. It will be critical that internal audit keeps pace with developments in AI so it can continue to provide a timely service that adds value.

This poses a real challenge for internal audit functions, as many are not currently well equipped to provide the assurance their audit committees and boards are looking for. It is all the more pressing given that many organisations are only just embarking on their AI journey.
