Model risk: drawing on BCBS 239 to apply new SS1/23

Vivian Lagan outlines where SS1/23 rules overlap with BCBS 239 on data aggregation and risk reporting and how to use these synergies to improve data quality for modelling.

Used effectively, models are a useful tool to manage credit, operational and market risk, and more. But they rely on good quality data, and regulators are increasingly concerned about the level of assurance banks can provide.

New principles for model risk management were finalised last week, but at this stage the policy applies only to organisations with internal model (IM) approval to calculate regulatory capital requirements. The PRA will set out its approach for all other firms, including ‘Simpler-regime Firms’ under the Strong & Simple Framework, once the definition of a ‘Simpler-regime Firm’ has been finalised.

The policy takes effect on Friday 17 May 2024, 12 months after publication. Organisations that first receive permission to use an internal model to calculate regulatory capital after this date will have 12 months from the grant of that permission to comply with the expectations in SS1/23.

By the implementation date, firms must conduct an initial self-assessment to map their readiness to adopt the new principles, identifying areas where they need to build or enhance their MRM capabilities and framework. Self-assessments should be updated at least annually thereafter, with findings and remediation plans documented and shared with the board.

The five principles

Taking a closer look at SS1/23, the PRA isn’t suggesting a whole new approach to model risk management; the key principles cover the following five areas. Beyond the increased scope of models, and the larger validation pipeline that follows from it, the PRA is introducing principles with a focus on underlying data quality. It is formalising expectations around data sources and lineage, and how that data is used in modelling, across every stage of the model life cycle.

Principle 1 – Model identification and model risk classification

Firms should have an established definition of a model that sets the scope for MRM, a model inventory, and a risk-based tiering approach to categorise models to help identify and manage model risk. Specifically, when assessing a model's complexity, you need to consider the nature and quality of the input data, the use of alternative and unstructured data, and the potential for designer or data bias to be present.
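SS1/23 does not prescribe how a tiering approach should work, but the idea can be illustrated with a short sketch. Everything below is a hypothetical example: the fields, weights and thresholds are placeholders, and a real framework would calibrate its tiering criteria against its own model risk appetite.

```python
from dataclasses import dataclass

@dataclass
class ModelRecord:
    # Illustrative inventory entry; fields are assumed examples only.
    name: str
    uses_unstructured_data: bool   # e.g. alternative or text-based sources
    input_data_quality: str        # "high" | "medium" | "low" (assumed scale)
    bias_risk_flagged: bool        # potential designer or data bias identified

def tier(model: ModelRecord) -> int:
    """Assign a risk tier (1 = highest risk) from complexity drivers.

    The scores are placeholders, not regulatory thresholds.
    """
    score = 0
    if model.uses_unstructured_data:
        score += 2
    if model.input_data_quality == "low":
        score += 2
    elif model.input_data_quality == "medium":
        score += 1
    if model.bias_risk_flagged:
        score += 1
    return 1 if score >= 3 else 2 if score >= 1 else 3

inventory = [
    ModelRecord("pd_model", False, "high", False),
    ModelRecord("nlp_fraud_model", True, "medium", True),
]
for m in inventory:
    print(m.name, "-> tier", tier(m))
```

The point of the sketch is that the data-related factors the PRA highlights, such as input data quality, alternative or unstructured data, and potential bias, become explicit inputs to the tier a model receives.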

Principle 2 – Governance

Firms should have strong governance oversight with a board that promotes an MRM culture from the top by setting a clear model risk appetite. The board approves the MRM policy and appoints an accountable individual responsible for implementing a sound MRM framework. This includes adequate systems and infrastructure to maintain data and system integrity. Data quality procedures and standards should set clear roles and responsibilities for managing the quality of data used in model development, and the rules and standards for data quality, accuracy and relevance.

Principle 3 – Model development, implementation and use

Firms should have a robust model development process with standards for model design and implementation, model selection, and model performance measurement. Documentation should describe the use of data, data sources, data proxies, and the results of data quality, accuracy and relevance tests. This includes documenting any data adjustments, which are subject to validation; details of any interconnected sources and of alternative or unstructured data should be recorded in the model inventory. You should also ensure that no inappropriate bias is introduced during model development, and that the data complies with data privacy and other data regulations.
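One way to make those data quality, accuracy and relevance test results documentable is to record them in a structured form. The sketch below is a minimal, assumed example: the check (missing-value rates against a tolerance) and the 5% threshold are illustrative choices, not SS1/23 requirements.

```python
def run_data_quality_tests(rows, required_fields, max_missing_rate=0.05):
    """Return per-field results suitable for model development documentation.

    Hypothetical completeness check: flag any required field whose
    missing-value rate exceeds the (assumed) tolerance.
    """
    results = {}
    n = len(rows)
    for field in required_fields:
        missing = sum(1 for r in rows if r.get(field) is None)
        rate = missing / n if n else 1.0
        results[field] = {"missing_rate": rate, "passed": rate <= max_missing_rate}
    return results

# Toy development dataset with one missing rating to be documented
sample = [
    {"exposure": 100.0, "rating": "BB"},
    {"exposure": 250.0, "rating": None},
]
report = run_data_quality_tests(sample, ["exposure", "rating"])
print(report)
```

A real process would add accuracy and relevance tests alongside completeness, but the output shape is the point: failed checks become recorded evidence that validation can later challenge.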

Principle 4 – Independent model validation

Firms should have a validation process that provides ongoing, independent and effective challenge to model development and use. The validation function should provide an objective, unbiased and critical opinion on the relevance and completeness of the development data with respect to the underlying portfolios, products, assets or customer base the model will be used for. It also shares responsibility for verifying model processes and system implementation, including the representativeness of internal or external data used as model inputs and compliance with internal data quality control and reliability standards.

Principle 5 – Model risk mitigants

Firms should have established policies and procedures for mitigating model risk when models are underperforming, and procedures for the independent review of post-model adjustments (PMAs). All PMAs should be subject to an independent review, including an assessment of inputs to ensure data integrity and to confirm the data remains representative of the underlying portfolio.

Reshaping model risk processes to new regulation

Most firms will already have processes in place in these five areas; they just need to map and reshape them to meet new regulatory expectations. To date, most financial institutions have based their data governance approaches on BCBS 239, which was introduced in 2013 to improve risk data aggregation capabilities.

Although the BCBS 239 principles were initially aimed at global systemically important banks (G-SIBs), the principles have become a standard across the banking industry, as local supervisors follow the Basel Committee on Banking Supervision (BCBS) recommendations and apply the principles to domestic systemically important banks (D-SIBs). BCBS 239 is therefore a good place to start when mapping readiness for the PRA’s new principles regarding data.

Creating synergies with BCBS 239

Starting with your practices for BCBS 239, it’s fairly straightforward to see the overlap with the PRA’s new requirements and to find the areas where you’ll need to undertake additional work.

Governance, data architecture and IT infrastructure

Under BCBS 239, firms need strong governance arrangements with defined roles and responsibilities, open forums and clear escalation channels to improve accountability and reporting. These elements are supported by data taxonomies to improve communication and standardise terminology, and effective policies and procedures on data management.

How this maps to SS1/23:

Principle 2.2

Looking at senior management function (SMF) accountability for the model risk management framework, including resourcing and adequate systems and infrastructure to ensure data and system integrity; also effective controls and testing of model outputs.

Principle 4.1

Calling for independent validation for an objective, unbiased and critical opinion on the accuracy, relevance and completeness of the development data, output and reports.

Principle 4.2

Requiring robust independent review to critically analyse the quality and extent of model development evidence, including the relevance and completeness of the data used to develop the model.

Risk data aggregation capabilities

Under BCBS 239, data aggregation should be largely automated to support accurate and reliable risk data for reporting purposes, including during stressed conditions. Data must be complete, timely and available on an ad hoc basis. Key tools such as a data dictionary, data quality controls and flexible data aggregation capabilities and process documentation support this outlook.

How this maps to SS1/23:

Principle 2.3

Covering policies and procedures for model tiering, including data sources and quality management procedures (including rules and standards for accuracy and relevance); also specific risk controls and criteria for the use of alternative or unstructured data or information sources. For data-intensive models, firms must consider responsibilities for managing data quality.

Principle 3.5

Targeting model development documentation, including: the use of data, a description of the data sources, any data proxies, and the results of data quality, accuracy and relevance tests.

Principle 4.3

Looking at process verification around model inputs from internal or external data, to make sure they’re representative of the underlying portfolios, products, assets or customer base the model will be used for.
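The BCBS 239 toolkit described above, a data dictionary feeding automated aggregation with data quality controls, can be sketched in a few lines. This is an illustrative example only: the dictionary entries, field names and rejection rule are assumptions, not a prescribed design.

```python
# Hypothetical data dictionary: field names, types and units are examples.
DATA_DICTIONARY = {
    "exposure": {"type": float, "unit": "GBP"},
    "business_line": {"type": str},
}

def validate(record):
    """Reject records that do not conform to the data dictionary."""
    for field_name, spec in DATA_DICTIONARY.items():
        if not isinstance(record.get(field_name), spec["type"]):
            return False
    return True

def aggregate_exposure(records):
    """Sum exposure by business line, counting rejected records.

    Tracking rejections keeps the quality control visible in the
    aggregation output rather than silently dropping bad data.
    """
    totals, rejected = {}, 0
    for r in records:
        if not validate(r):
            rejected += 1
            continue
        totals[r["business_line"]] = totals.get(r["business_line"], 0.0) + r["exposure"]
    return totals, rejected

records = [
    {"exposure": 100.0, "business_line": "retail"},
    {"exposure": "bad", "business_line": "retail"},   # fails the type check
    {"exposure": 50.0, "business_line": "corporate"},
]
print(aggregate_exposure(records))
```

Automating checks like this is what lets the same pipeline stand up to the SS1/23 input-verification expectations in Principle 4.3, since every record entering a model carries an auditable pass/reject outcome.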

Risk reporting practices

To improve risk reporting practices, BCBS 239 focuses on improving the accuracy of aggregated risk data, along with comprehensiveness, clarity and frequency. This depends on having effective data quality dashboards, robust risk management reporting practices, good issue management and recognition of any limitations or material gaps.

How this maps to SS1/23:

Principle 2.6

Looking at externally developed models and third-party and vendor products to verify the relevance of vendor-supplied data and assumptions. This includes models developed by parent groups; subsidiaries should check that the data is relevant for the intended model.

Principle 3.2

Covering the use of data, including checking that it’s suitable for the intended use case, consistent with the methodology, and representative of underlying portfolios, products, assets, or customer base. Model development processes should assess potential data bias and ensure regulatory compliance. Firms should document data adjustments or use of proxies, which are subject to validation. The model inventory should include any interconnected data sources, and record alternative or unstructured data, which should be reflected in the model’s tier classification.

Preparing your SS1/23 readiness assessment

In the short term you must review your existing data infrastructure, related systems and processes against the SS1/23 principles to complete the PRA’s upcoming self-assessment requirement. This must cover key elements of the model life cycle, including model development and validation, in addition to data infrastructure and related systems. If you find any shortcomings, you need to put remediation plans in place to bring your model risk data quality processes in line with regulatory expectations.

Typical stumbling blocks include manual data collection, and incomplete or inconsistent data. So, it’s essential to improve data aggregation and management capabilities at every stage of the model life cycle: from collection and storage to analysis and reporting.
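A readiness assessment along these lines can start as a simple mapping from existing BCBS 239 capabilities to the SS1/23 principles, surfacing where the gaps sit. The sketch below is purely illustrative: the capability names, principle mappings and statuses are hypothetical examples, not a compliance checklist.

```python
# Assumed snapshot of current BCBS 239 capabilities (True = in place)
bcbs239_capabilities = {
    "data_dictionary": True,
    "automated_aggregation": False,   # still partly manual
    "data_quality_controls": True,
}

# Hypothetical mapping of SS1/23 principles to supporting capabilities
ss123_requirements = {
    "Principle 2.3 - data quality procedures": ["data_quality_controls"],
    "Principle 3.5 - data documentation": ["data_dictionary"],
    "Principle 4.3 - input verification": ["automated_aggregation",
                                           "data_quality_controls"],
}

# For each principle, list the capabilities not yet in place
gaps = {
    principle: [c for c in needed if not bcbs239_capabilities.get(c)]
    for principle, needed in ss123_requirements.items()
}
for principle, missing in gaps.items():
    status = "OK" if not missing else "gap: " + ", ".join(missing)
    print(f"{principle}: {status}")
```

Even a rough mapping like this gives the self-assessment a documented starting point: each flagged gap becomes a candidate remediation item to share with the board.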

With the final rules landed, it’s important to think about how to leverage your existing investments in data to enhance the quality of data sources and systems for models. This will boost confidence in your model reliability and support decision-making processes.

For more information and advice, contact Vivian Lagan.
