
TRIM review: what's the impact for FS regulation?

Tony Hughes

Last week, the ECB published the results of its five-year Targeted Review of Internal Models (TRIM), exploring the internal modelling methods used by banks for Pillar 1 of the Basel capital standards. Tony Hughes considers the impact on financial services regulation.

Following the analysis, European regulators moved to increase capital requirements for several banks deemed too optimistic in their loss assessments. Given that major regulators often follow each other’s lead, the results of TRIM are likely to be used to justify similar capital actions in other jurisdictions. Banks in the UK and beyond Europe should therefore expect pointed discussions about the TRIM findings with local supervisors in the coming months.


Even before the introduction of internal ratings based (IRB) approaches, regulators were concerned that banks might use complex models to downplay the scale of the risks they face, artificially reducing the associated regulatory capital. The purpose of the TRIM project was to police the potential misuse of models by assessing industry standards, checking the comparability of methods across organisations and determining whether each individual bank was compliant with the ECB’s rules.

The report focused on the way banks calculate credit and market risk and covered the 65 largest institutions across Europe. Based on its findings, the ECB identified more than 5,800 modelling and governance deficiencies, resulting in 253 supervisory decisions: adverse findings that affected the amount of capital held by banks. Common Equity Tier 1 (CET1) ratios declined by an average of 0.71 percentage points as a direct result of TRIM-related interactions between banks and regulators.

Many of the reported transgressions related to low-default portfolios. Because default events are inherently rare, logistic-regression-style PD modelling becomes extremely difficult without a huge amount of raw data. LGD modelling, which relies on the even smaller set of observed defaults, then lacks the degrees of freedom needed to isolate a clear statistical signal. With quantitative approaches unavailable, considerable room is left for the use of subjective judgment in determining loss forecasts.
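To see why low-default portfolios resist statistical treatment, consider the sampling uncertainty around an observed default rate. The sketch below uses invented numbers and a simple normal-approximation interval (real low-default work would use exact or Bayesian methods), but the width of the band tells the story:

```python
import math

def pd_confidence_interval(defaults, n, z=1.96):
    """Normal-approximation 95% interval for an observed default rate.
    Illustrative only -- too crude for a genuine low-default portfolio,
    but it shows how little the data pin down the PD."""
    p = defaults / n
    se = math.sqrt(p * (1 - p) / n)
    return max(p - z * se, 0.0), p + z * se

# A hypothetical low-default portfolio: 2 defaults among 1,000 exposures.
lo, hi = pd_confidence_interval(2, 1000)
# The interval stretches from zero to more than twice the 0.2% point
# estimate, so the data alone cannot discriminate between models.
```

With so few defaults, almost any PD assumption in that band is statistically defensible, which is exactly where subjective judgment creeps in.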

The use of overlays is an easy target for regulators. For these portfolios, banks should take the time to document carefully their justifications for any subjective loss assessment, highlighting any and all conservative adjustments applied before arriving at the final numbers. The overlay process needs to be documented meticulously and carefully controlled by the third line of defence if it is to stand up to TRIM-style regulatory scrutiny.

What if a bank really is better at managing risk?

In other areas, quantitative models play a bigger role. Most risk managers will have experienced sharp criticism from regulators when their model-building methods fell short of industry standards. One novel aspect of the TRIM project was that model outcomes, in addition to model methodologies, were compared across banks.

Under the precepts of TRIM, exposures with the same measurable characteristics should have a similar estimated expected loss regardless of the institution that holds the asset. If projections are found to differ across institutions, regulators will conclude that the more optimistic bank has an unrealistic model and will then require associated capital reserves to be increased.
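The comparison rests on the standard expected-loss decomposition, EL = PD × LGD × EAD. A minimal sketch with hypothetical parameter values shows how a modest difference in estimated PD flows directly into a different loss number for an identical exposure:

```python
def expected_loss(pd, lgd, ead):
    # Standard expected-loss decomposition: EL = PD * LGD * EAD.
    return pd * lgd * ead

# Hypothetical: two banks hold an identical EUR 1m exposure, agree on
# LGD, but their internal models disagree on PD.
el_a = expected_loss(0.010, 0.45, 1_000_000)  # roughly 4,500
el_b = expected_loss(0.006, 0.45, 1_000_000)  # roughly 2,700

# Under a TRIM-style peer comparison, bank B's lower estimate invites
# scrutiny unless the difference can be justified.
```

A 40-basis-point gap in PD, invisible in isolation, becomes a 40% gap in expected loss once the estimates are lined up side by side.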

There are countless reasons why an institution with effectively the same loans as their peers may consider themselves superior in their business practices. One bank may claim, for example, that their estimated losses should be lower because they underwrite on the basis of a broader set of characteristics, or because they have superior procedures for curing delinquent accounts. They may perceive that they have better control of credit lines for loans at risk of default, or they may think themselves expert at recovering losses from foreclosed and seized property and equipment.

The list goes on.

If an institution wants to be rewarded for these skills, they will need to demonstrate their superiority over their rivals empirically. This means constructing models that make credit soft skills explicit, allowing evidence to be provided that better business practices reduce risk. Findings from this research would need to be documented very carefully to ensure regulator acceptance.

Distinct data sets

There are a variety of other reasons why banks may assess loans differently. Because each institution uses internal data in their loss modelling efforts, large discrepancies can stem from the data sets used to produce the estimates.

Imagine two banks: one operates at the subprime end of the credit spectrum, while the other exclusively writes prime-grade loans. Both are then asked to assess a loan that sits on the boundary between prime and subprime. For one bank, the loan is the best they’ve ever seen; for the other, it is the worst.

In this situation, estimated loss rates will differ markedly between the two institutions. Subprime lenders, for example, typically observe much stronger cyclicality than those working with prime data. This is a stark example, but the point is simply that peer comparison of model output can be difficult because banks have access to very different underlying data.
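The boundary-loan thought experiment can be made concrete. In the sketch below, two hypothetical lenders each fit a simple default-rate-versus-score line to their own (entirely invented) history and then assess the same boundary loan; because each extrapolates from a different region of the credit spectrum, the two estimates disagree:

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = a + b*x, in pure Python."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Invented score / default-rate histories for the two lenders.
a_sub, b_sub = fit_line([500, 540, 580, 620], [0.20, 0.15, 0.10, 0.06])
a_pri, b_pri = fit_line([660, 700, 750, 800], [0.020, 0.012, 0.006, 0.003])

score = 640  # a loan on the prime/subprime boundary
pd_subprime = a_sub + b_sub * score  # extrapolating up from subprime data
pd_prime = a_pri + b_pri * score     # extrapolating down from prime data

# The identical loan attracts materially different PD estimates, with
# neither bank doing anything statistically improper.
```

The divergence here is an artefact of the training data, not of optimism or manipulation, which is precisely why naive peer comparison can mislead.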

Changing the hypothetical slightly, suppose our two banks currently hold very similar portfolios but their historical experiences diverge sharply – one used to be subprime and is moving upmarket, while the other is coming down from a more prime book. A regulator may expect to see similar projected losses given the similarity of the two portfolios in 2021, but this may be unrealistic given the very different historical data sets employed by the two banks.

One interesting way to demonstrate this would be to compare the internal model to one built using a broader, industry-level data source. Alternatively, banks could estimate their models over a shorter sample period, demonstrating the dynamic nature of model-based loss estimates. As ever, such additional research would need to be carefully managed and extremely well described.
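The shorter-window idea can be illustrated with a toy calculation. Using invented yearly default rates for a lender that has moved upmarket, a full-history average and a recent-window average tell quite different stories about the current book:

```python
# Invented yearly portfolio default rates for a lender moving upmarket
# (oldest first); not real data.
rates = [0.045, 0.040, 0.032, 0.021, 0.012, 0.009]

full_history = sum(rates) / len(rates)  # long-run average
recent = sum(rates[-3:]) / 3            # shorter, three-year window

# The short-window estimate reflects the current, more prime book;
# the full-history figure is dominated by the subprime past.
```

Neither number is "wrong" – the question for the regulator is which history is relevant to the portfolio as it stands today.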


Banks, obviously, have a clear incentive to minimize capital charges associated with lending activities. It therefore makes sense for the ECB, as a regulator, to be suspicious of complex bank models and to closely police their construction and use in capital adequacy assessment.

We should all recognise, though, that in an alternative universe where banks were always completely honest and never rapacious, an exercise like TRIM would still find substantial variation in capital calculations across different banks. In this mythical utopia, regulators could still call out the techniques used by the bottom half of the distribution and conclude that those banks were being deliberately over-optimistic in their risk weight calculations.

This presents a quandary for banks that know their existing models are realistic. If these institutions want to avoid additional capital charges from TRIM-style analyses, they may need to adjust the way models are built, validated and then described to regulators.

It is critical that these efforts highlight any and all conservative decisions made during the model build process.
