Increased computing capabilities, advanced modeling techniques, and commensurate increases in complexity among financial products have resulted in great reliance on mathematical models within the banking and insurance industries. These financial models are instrumental in developing robust risk management frameworks; however, the potential misuse and failures of these models can present large risks as well. Proper model validation and governance policies are necessary to mitigate financial model risk.
Why financial models break down
Below are four high-profile examples of model risk where financial models broke down and contributed to significant losses at financial institutions.
Model was overleveraged
In 1998 the hedge fund Long-Term Capital Management collapsed, in part because of its use of complex mathematical models. The fund was formed in 1994 to pursue arbitrage strategies identified via mathematical models. For a few years, these models worked as designed, resulting in significant profits and an increased level of confidence in these trades. This led the firm to rely heavily on leverage to maximize returns. At the beginning of 1998, the firm was estimated to have over $125 billion of assets and just under $5 billion of equity, for a leverage ratio of more than 25 to 1. In late 1998, the relationships expected by the models broke down, resulting in large losses for the firm. The losses from these trades and the leverage employed by the firm led to a rescue organized by the Federal Reserve Bank of New York, the eventual liquidation of the firm, and increased scrutiny of the use of mathematical models in trading.1
Model was manipulated and not externally validated
Enron is typically cited as an example of accounting fraud and audit failures. However, Enron is also an example of the need for external model validation and model governance. Enron became involved in complex derivative transactions that were repeatedly valued in a “mark-to-model” framework.2 The complexity of the derivatives and models provided Enron with flexibility in “valuing” the derivative contracts and manipulating financial statements. An external validation of these models may have helped prevent such abuses.
Model did not account for potential stress scenarios
A large portion of the credit extended during the build-up to the recent housing crisis in the United States was funded by non-agency residential mortgage-backed securities. These were financial instruments created by Wall Street and others within the mortgage industry, rated by credit rating agencies, and invested in by insurance companies, banks, and a wide variety of funds. The models used to rate and analyze these securities were largely developed using product and performance data that did not include a national housing collapse. Occasionally, the data would include regional patches of stress, but most of it did not include a national collapse because one had not happened in the United States since the Great Depression. The models used to rate and analyze the deals were generally reasonable, but the stress scenarios they produced underestimated the risk inherent in the deals. This underestimation of risk (along with several other significant factors such as fraud and the rise of the “originate to distribute” model) contributed to decisions to extend credit to non-creditworthy borrowers. When these borrowers defaulted at rates that, in some cases, exceeded the highest default rates estimated by stressing the models, the investors in these securities and the financial system in the United States experienced a significant amount of financial stress.
Model formula was flawed
In July 2012, JPMorgan Chase reported a trading loss of nearly $6 billion on certain derivative contracts. The bank indicated the loss was incurred after it adopted a flawed mathematical formula that understated the risk of certain contracts. In response to the loss, the bank changed its models; under the revised models, the estimated risk of the contracts doubled.3 Although the terms of the contracts had not changed, the bank increased its estimates of the risk and capital required for the trade.
Independent review reduces model risk
These examples of model risk and failures raise the following questions:
- How influential are mathematical models in the operating results of financial companies?
- How much confidence should we place in mathematical/statistical models?
- How can we ensure the models are reasonable?
As part of Basel II and the proposed Basel III rules, banking institutions are required to have an independent validation performed on most of the models they use. The validation may be performed internally or by an external third party, but the group performing the validation must be independent of the group that developed the model.
Casualty insurers implicitly have a reserve model validation performed annually by an independent actuary via the actuarial opinion. This form of model validation is known as "outcome analysis": the results of the reserve model(s) are compared with reserve estimates derived independently by an outside actuary.
The use of financial models will likely increase as technology advances; one way to manage model risk within financial institutions is to implement prudent model validation and governance processes. The rest of this article describes an effective and prudent model validation process.
A sample model validation framework
A model is a simplification of reality, and there is a trade-off between complexity and usability. Model validation is a process designed to identify the limitations of a model and help manage the associated risks.
The detail and scope of a model validation project depend upon the use, complexity, and associated risks of the model being reviewed. A more complex model generally requires a larger scope for the validation. Similarly, greater reliance on the model, whether for strategic planning or for use in financial statements, generally requires a larger scope for the validation because of the potential risks involved in relying on the model. A complete model validation includes a number of suggested steps, as outlined below. The model validation should be performed by an independent party that is familiar with both the financial instrument or exposure being modeled and the techniques used in the model.
Use of the model
A model validation generally starts the same way you would start building a financial model: by understanding the use of the model. This understanding shapes the level of detail of the validation and allows the model validation group to focus on key areas of the model throughout the review. For example, if you are reviewing an economic model used for stress tests, then it is critical that the results produced by the model are reasonable and stable in stressful environments. If the model is being used for pricing, where the objective is to develop an average cost, then the results produced by the model in extremely stressful scenarios may be less important to the validation. The model validation should identify the use of the model, confirm that the model is appropriate for that use, and ensure that the model is not being applied to exercises outside of its capabilities.
Review of data
A second step of a model validation is to review the data used to develop the model. The model validation group should start with the same data that was used to develop the model. The review of the data could include univariate analysis to independently identify potential variables to include in the model, a review of the range of the response being modeled (e.g., the minimum and maximum default rate in the data by calendar quarter), a review of the number and magnitude of stressful events included in the data, and more. External data not used in the model development process could be appended to the validation dataset to review other variables that may be influential in the modeling objective but were not considered during development. The intent of this segment of the model validation is to understand any implications or limitations the development data may impose on the estimates produced by the model. For example, data used to develop mortgage credit models in the early 2000s generally did not include severe stress environments in the housing market. Even today, the ultimate resolution of a stressful environment is not included in mortgage data, as losses are still developing. This is a limitation that users of mortgage credit risk models must keep in mind.
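A minimal sketch of this kind of data review is shown below. The dataset, column names (quarter, fico, default), and thresholds are illustrative assumptions using synthetic data, not a prescribed procedure; the point is simply to look at the range of the response by calendar quarter and a univariate view of a candidate driver.

```python
# Minimal sketch of a univariate data review for a default-rate model.
# The loan-level dataset below is synthetic and purely illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 50_000
loans = pd.DataFrame({
    "quarter": rng.choice(pd.period_range("2003Q1", "2007Q4", freq="Q").astype(str), n),
    "fico": rng.normal(700, 50, n).round(),
    "default": rng.binomial(1, 0.02, n),
})

# Range of the response by calendar quarter: how much stress does the history contain?
by_quarter = loans.groupby("quarter")["default"].mean()
print(f"Default rate range in the data: {by_quarter.min():.2%} to {by_quarter.max():.2%}")

# Simple univariate view of a candidate explanatory variable: default rate by FICO band.
loans["fico_band"] = pd.cut(loans["fico"], [300, 620, 680, 740, 850])
print(loans.groupby("fico_band", observed=True)["default"].mean())
```

If the observed default rates never stray far from their long-run average, that alone flags a limitation: the model will be extrapolating whenever it is asked about a stress environment.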
Assumptions and methodologies
The next step of a model validation process is to review the selection of the type of model and the associated assumptions. In this step, we want to determine whether the assumptions of the model are reasonable approximations of reality. The selection of the model type should be documented by the model development team and include a discussion of the types of models considered but not selected. The form of the model should be able to reflect significant properties of the response variable being modeled. Sticking with credit risk modeling as an example, a logistic model is commonly selected to estimate default rates. The logistic model has desirable attributes, such as the ability to model dichotomous events (default or no default) and estimates that are bounded between 0 and 1. However, the logistic model is not always appropriate, particularly when average default rates are around 1%. In a stress test of these models, default rates can easily spike to greater than 10% or 20% as a result of the shape of the logistic curve. These results are generally not consistent with actual experience. The model validation group must be aware of the properties of a logistic model and be able to assess whether its assumptions are appropriate for the intended use of the model.
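A minimal sketch of that curve-shape effect is shown below. The baseline default rate and the log-odds shifts are illustrative assumptions, not fitted values; the sketch only shows how quickly a logistic curve can move a roughly 1% default rate into double digits.

```python
# Sketch of how a logistic default-rate model behaves under stress.
# The baseline rate and the log-odds shifts are illustrative assumptions.
import numpy as np

def logistic(x):
    """Standard logistic link: output is always between 0 and 1."""
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical linear predictor corresponding to a roughly 1% default rate.
baseline = np.log(0.01 / 0.99)            # about -4.6 on the log-odds scale

for shift in (0.0, 1.0, 2.0, 3.0):        # illustrative stress shifts in log-odds
    rate = logistic(baseline + shift)
    print(f"log-odds shift {shift:+.1f} -> estimated default rate {rate:.1%}")
```

A shift of 3 in the log-odds, which may correspond to a plausible combination of stressed inputs, pushes the estimate from about 1% to roughly 17%, so the validator needs to judge whether that behavior is consistent with experience.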
If the model under review is a regression model, a model validation should include a review of the variables and coefficients in the model, the methodology used to select the variables, and the goodness-of-fit results for the model. This review would include an assessment of the reasonableness of any transformations performed on the data used in the regression model, as well as discussions with the model development team on variable selection to understand the process used in developing the model.
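As a sketch of what such a review might look like in practice, the example below fits a logistic regression to synthetic data with statsmodels and prints the coefficient table and a goodness-of-fit measure. The variables (ltv, unemployment), coefficients, and data are all illustrative assumptions rather than any particular production model.

```python
# Sketch of a coefficient and goodness-of-fit review for a fitted logistic
# regression, using statsmodels on synthetic data (all values illustrative).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 20_000
X = pd.DataFrame({
    "ltv": rng.uniform(0.5, 1.0, n),              # loan-to-value ratio
    "unemployment": rng.uniform(0.04, 0.10, n),   # local unemployment rate
})
true_logodds = -7 + 3 * X["ltv"] + 25 * X["unemployment"]
y = rng.binomial(1, 1 / (1 + np.exp(-true_logodds)))

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(model.summary())                             # coefficients, standard errors, p-values
print("Pseudo R-squared:", round(model.prsquared, 3))

# Signs and magnitudes should be checked against business intuition:
# e.g., higher LTV and higher unemployment should not reduce default risk.
```

In a real validation, the same questions are asked of the development team's fitted model: do the signs make sense, are the coefficients statistically and economically significant, and does the fit hold up out of sample?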
If the model is not a regression model, a model validation should include a review of the form of the model, the inputs into the model, and the sensitivity of the model to these inputs. Part of the model validation should also include discussions with the model development team on how the model was developed, the reasoning for ultimate model selection, and limitations of the model.
Model performance
Quantitative financial models may produce results that are inconsistent with experience under certain assumptions. Therefore, a further step of a model validation is to take a step back from the quantitative aspects of the model and assess the reasonableness of the output. This portion of the model validation relies heavily upon the model validator’s professional expertise and judgment. The review of model performance should include stress testing/sensitivity analysis, statistical tests (performed either independently from or with the model development team), and other evaluations commensurate with the type of model and scope of the validation. The intent is to understand the limits of the model and the circumstances under which the model is or is not appropriate to use. For example, how would the estimated default rate on credit card exposures change in an economic stress environment where unemployment increases from 5% to 9% over a one-year period? Is this result reasonable? Is the impact of a change in the unemployment rate consistent across all borrowers, or does it affect a particular cohort of borrowers more than others? Are the results consistent with historical data?
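The sketch below illustrates one way to frame that kind of sensitivity question. The model form, coefficients, and cohort scores are hypothetical stand-ins for whatever model is actually under review; the point is to compare baseline and stressed default rates across borrower cohorts and ask whether the changes look reasonable.

```python
# Sketch of a sensitivity test: how does the estimated default rate respond
# when unemployment moves from 5% to 9%? All parameters are hypothetical.
import numpy as np

def default_rate(unemployment, borrower_score):
    """Hypothetical logistic default-rate model with two inputs."""
    logodds = -8.0 + 0.45 * unemployment + 3.0 * (1 - borrower_score)
    return 1.0 / (1.0 + np.exp(-logodds))

cohorts = {"prime": 0.9, "near-prime": 0.6, "subprime": 0.3}   # illustrative scores
for name, score in cohorts.items():
    base, stressed = default_rate(5.0, score), default_rate(9.0, score)
    print(f"{name:>10}: {base:.1%} -> {stressed:.1%} ({stressed / base:.1f}x increase)")
```

The validator would then compare both the relative and the absolute changes by cohort against historical stress experience to judge whether the model's response is plausible.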
This phase of the model validation is similar to the prior phase; however, here we step back from the form of the model and focus on its results. We review the sensitivity of the model to changes in each explanatory variable (including economic forecasts), to changes in development patterns, or to other influential factors in the model.
Outcome analysis
Outcome analysis takes the model performance phase one step further by comparing the estimates produced by the model against historical data, rather than identifying the limitations of the model. Examples of outcome analysis include back-testing, out-of-sample testing, and ongoing actual-to-expected comparisons, among others. Outcome analysis should be performed prior to implementing the model and at least annually after implementation to ensure the model is performing as expected. Error limits should be developed for the outcome analysis results, and if the actual errors from the model exceed those limits, predetermined actions should be required. If the model is recalibrated or updated on an annual basis, limits should also be developed to monitor the size and frequency of re-estimates. If updating the model repeatedly results in large changes in its estimates, then certain actions should be required, including external model validation, a recalibration of the model, or even development of an entirely new methodology or type of model. The action types and triggers should be set in advance and be commensurate with the use and risk of the model. These policies should be written and included in the model governance policies of a financial institution. The initial model validation could be used to help set error limits for the model.
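A minimal sketch of such an actual-to-expected check is shown below. The quarterly default rates and the warning and escalation thresholds are illustrative assumptions, not prescribed limits; the point is that each observation maps to a predetermined action.

```python
# Sketch of an ongoing actual-to-expected check with predetermined error
# limits. The quarterly figures and thresholds are illustrative only.
import pandas as pd

results = pd.DataFrame({
    "quarter":  ["2012Q1", "2012Q2", "2012Q3", "2012Q4"],
    "expected": [0.021, 0.022, 0.020, 0.019],   # model-predicted default rates
    "actual":   [0.023, 0.025, 0.027, 0.031],   # observed default rates
})

WARN, ESCALATE = 0.20, 0.50   # hypothetical relative-error limits

results["rel_error"] = (results["actual"] - results["expected"]) / results["expected"]
results["action"] = pd.cut(
    results["rel_error"].abs(),
    bins=[0, WARN, ESCALATE, float("inf")],
    labels=["monitor", "investigate", "revalidate / recalibrate"],
)
print(results)
```

Reporting the triggered action alongside the error keeps the response mechanical rather than discretionary, which is the intent of writing these limits into the model governance policy.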
Model validation report
The final step of a model validation is communication of the results through a model validation report. The report should be a written document that records the model validation process and results. It should highlight potential limitations and assumptions of the model and may include suggestions for model improvements.
Conclusion
The use of financial models is common in the financial industry. As these models increase in complexity and use, it is important that the users of financial models understand the risks and limitations associated with them. A model is a simplification of reality, and there is a trade-off between complexity and usability. Model validation is a process designed to identify the limitations of a model and help manage the associated risks. An effective model validation process should include an understanding of the model use, data, methodology, and results.
Jonathan B. Glowacki is a Fellow of the Society of Actuaries, a member of the American Academy of Actuaries, and a Chartered Enterprise Risk Analyst through the Society of Actuaries. Jonathan’s primary area of practice is in the credit risk and banking sectors. He has provided model validation and other consulting services to banking institutions, insurance companies, financial guaranty insurers, mortgage insurers, and government agencies. Jonathan has published articles on these topics for organizations such as the Professional Risk Managers' International Association (PRMIA) and the Society of Actuaries (SOA).
1 Lowenstein, Roger (2000). When Genius Failed: The Rise and Fall of Long-Term Capital Management.
2 McLean, Bethany & Elkind, Peter (2003). The Smartest Guys in the Room: The Amazing Rise and Scandalous Fall of Enron.
3 Moore, M.J., Kopecki, D., & Keoun, B. (October 12, 2012). JPMorgan deploys new risk model for derivative bet. Bloomberg. Retrieved December 17, 2012, from http://www.bloomberg.com/news/2012-10-12/jpmorgan-deploys-new-var-model-for-cio-bet-reports-lower-risk.html.