Developing a Deposit Insurance Scheme: Analysis Tools

  1. Probability of Default

  2. Risk-related Premiums

  3. Setting Deposit Insurance Premium Rates

  4. Methodologies for Determining the Target Fund

    1. Loss Distribution Approach

    2. Merton-Vasicek Approach

    3. Copula Approach

    4. Stress Scenario Approach

These analysis tools are part of our Toolkit for Developing a Deposit Insurance Scheme.

There is an extensive literature that offers principles for deposit insurers. These principles include assumptions, beliefs, facts and propositions that are offered to guide deposit insurers’ policies, e.g., IADI [16]. Much of this literature is cited in this document. While this literature is helpful to those tasked with preparing deposit insurance policies and laws, there is a wide gap between principles and the implementation of those principles through practices and procedures. The analysis tools and papers discussed in this section are designed to fill the gap between principles for deposit insurers and the implementation of those principles. These tools were developed by the authors as part of technical assistance provided to deposit insurers seeking to implement recommended practices for deposit insurance pricing and funding. The analysis tools use data on United States insured banks and the financial statements of the United States Federal Deposit Insurance Corporation in case studies. All analysis tools rely on public information, and the techniques used for insurance pricing and fund management rely on published research. We discuss technical aspects of insurance pricing and funding, and include case studies and related analytical tools (Excel workbooks and R computer code) as embedded objects in this section.

An important consideration for how an insurer might approach insurance pricing and fund management is the availability of information that is required by pricing and fund management approaches. We address the data availability issue by approaching each topic from three perspectives—high, moderate and low availability of information for implementing pricing and fund management recommendations.

13.1 PROBABILITY OF DEFAULT

The probability of bank failure is typically associated with the likelihood that the bank licensing authority revokes the bank’s license (closes the bank) after determining the bank is insolvent or critically undercapitalized. In some jurisdictions banks can also be closed if the chartering authority determines the bank is severely illiquid. Whether insolvency, critical undercapitalization, severe illiquidity, or some combination of these criteria triggers closure depends on the banking law in the jurisdiction.

INSOLVENT AND CRITICALLY UNDERCAPITALIZED BANKS

Insolvency is a determination that the bank’s equity capital is negative, i.e., the value of its assets is less than that of its liabilities. For the purposes of this document bank failure is synonymous with bank default on debt obligations, and we use the terms failure and default interchangeably. Asset valuations are based on the jurisdiction’s accounting treatment for assets held-to-maturity (historical cost/book value) and assets available for sale (market value, or fair value treatment for infrequently traded assets). Liabilities are valued at their book value.

Critically undercapitalized banks are banks that fail to meet capital thresholds specified by the jurisdiction. The capital thresholds that apply to bank closure laws can be based on book equity-to-asset ratios (leverage ratios) and risk-based capital adequacy measures under Basel capital adequacy standards.


LOAN LOSS RECOGNITION

A limitation of bank capital adequacy standards is the credibility of reported bank equity capital. Experience shows that bank management is often reluctant to recognize loan losses in a timely manner as they are incurred, and the incentives to delay recognizing loan losses increase during periods of economic stress. Delayed recognition of loan losses results in overstated equity capital. To address this issue, jurisdictions should consider the accuracy of reported equity capital when setting capital thresholds for bank closure policies. In the United States, the prompt corrective action closure policy uses a 2 percent book capital threshold, in part because loan losses in failing banks are typically more severe than those recognized in bank financial statements.

MODELLING BANK DEFAULT

Given a history of bank closures, jurisdictions can model the probability of bank default by relating incidences of bank closure to variables thought to influence bank condition and performance. Equation 9 presents the default model in general form: the natural logarithm of the odds that bank k fails between periods t and t + 1 (the ratio of the probability of failing to the probability of not failing, the dependent variable) is assumed to be a function of the factors thought to influence failure risk for bank k, X_{i,k,t} (explanatory variables), weighted by the coefficients β_{i,t}, plus a constant term α_t. The residual term ε_t represents model estimation error, i.e., the difference between the model prediction of the dependent variable and actual outcomes.

Because the probability of failure is bounded between zero and one, binary logistic regression is an appropriate technique for estimating equation 9. For predictive models one typically uses explanatory variable values from a period prior to failure, here t; failures may occur any time between periods t and t + 1, e.g., during the following year.

ln[ PD_{k,t} / (1 − PD_{k,t}) ] = α_t + Σ_i β_{i,t} X_{i,k,t} + ε_t    (9)

The estimated probability of failure for bank k can be derived from equation 9, as shown by equation 10, where α̂_t and β̂_{i,t} represent estimated values for the default model parameters:

PD_{k,t} = exp(α̂_t + Σ_i β̂_{i,t} X_{i,k,t}) / [ 1 + exp(α̂_t + Σ_i β̂_{i,t} X_{i,k,t}) ]    (10)

In practice, the logistic default model is estimated by relating observed incidences of bank default and non-default, measured by a binary indicator (1, 0) respectively, to the explanatory variables. Statistical packages typically use the method of maximum likelihood to estimate the model and assume the model form shown by equation 9.

To implement the logit model of bank default risk one estimates the model (equation 9) using data from an initial time span, e.g., December 2007 to December 2008, wherein explanatory variables are measured as of December 2007 and failures occur during 2008. Next, using the estimated parameter values and explanatory variable values from a subsequent period, e.g., December 2008, one can use equation 10 to estimate bank failure probabilities for 2009.
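This two-step workflow can be sketched in R with the glm function. The sketch below is illustrative only; the data frames and column names (banks_2007, banks_2008, failed_2008 and the ratio variables) are hypothetical stand-ins, not the layout of the attached files.

```r
# Sketch: estimate the logit default model, then score a later period.
# Data frames and column names are hypothetical placeholders.

# Step 1: fit equation 9 -- failures during 2008 regressed on December 2007 ratios.
fit <- glm(failed_2008 ~ PROV1 + NOI + NONCORE + TOTALBAD + CAPITAL,
           data   = banks_2007,
           family = binomial(link = "logit"))
summary(fit)                          # estimated constant and coefficients

# Step 2: apply the fitted coefficients to December 2008 ratios to obtain
# predicted failure probabilities for 2009 (equation 10).
banks_2008$pd_2009 <- predict(fit, newdata = banks_2008, type = "response")
```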

DATA OUTLIERS

Outliers in model estimation data, i.e., extremely small and large values for explanatory variables, can bias model results because the regression will try to find the best fitting line [hyperplane] to explain the data. To avoid this bias, we removed extreme outliers. We consider an observation anomalous when one or more model inputs is roughly three times or more above the variable’s 99th percentile value, or roughly three times or more below its 1st percentile value. We delete all anomalous observations as part of data preparation; this resulted in the deletion of 2.4 percent of the 2007–2008 observations on banks.
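One way this screen might be encoded in R is sketched below. The data frame and variable names are hypothetical, and the lower-tail test reflects one plausible reading of the “three times or more smaller than the 1st percentile” rule (it assumes the 1st percentile values are negative, as is typical for income and provision ratios).

```r
# Sketch: flag and drop observations with extreme values relative to the
# 1st/99th percentiles. 'banks' and the variable list are placeholders.
vars <- c("PROV1", "NOI", "NONCORE", "TOTALBAD", "CAPITAL")

flag <- sapply(vars, function(v) {
  x   <- banks[[v]]
  p99 <- quantile(x, 0.99, na.rm = TRUE)
  p01 <- quantile(x, 0.01, na.rm = TRUE)
  (x >= 3 * p99) | (x <= 3 * p01)     # lower-tail test assumes p01 < 0
})

banks_clean <- banks[rowSums(flag, na.rm = TRUE) == 0, ]
1 - nrow(banks_clean) / nrow(banks)   # share of observations removed
```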

Table 4 presents an example of an estimated logit default model. In table 4 the input for the dependent variable is the binary indicator of default during 2009. The explanatory variables in table 4, measured as of December 2008, are values for bank loan-loss provisions (PROV1), net operating income (NOI), noncore funds dependence (NONCORE), restructured loans, past due and nonaccrual loans (TOTALBAD), and equity capital (CAPITAL), all measured as a percentage of bank assets. Coefficient estimates in table 4 indicate that default risk is related to asset quality, as shown by the positive coefficients on total “bad” assets and loan loss provisions. Default risk is negatively related to starting period capitalization and net operating income. Finally, default risk is positively related to noncore funds dependence. These results are intuitively appealing and consistent with previous studies on bank default risk.

TABLE 4: LOGIT BANK DEFAULT EXAMPLE MODEL


ATTACHMENTS FOR FAILURE MODEL

Attached are the data, R computer code for logistic default model estimation, and an Excel file that implements the model in a transparent manner.


MACHINE LEARNING MODELS OF DEFAULT

Logistic regression is a traditional technique for modelling bank default; in recent years, however, banks have used other approaches for modelling credit default that can also be applied to bank default: artificial neural networks, gradient boosting and random forests. Collectively, these modelling techniques are known as machine learning (ML) models because the functional form of the model is driven by the model input data. ML techniques frequently outperform regression models in terms of predictive accuracy, and deposit insurers should consider ML when modelling bank default.

To illustrate the predictive accuracy of ML techniques we developed ML models of bank default and compared out-of-sample predictive accuracy for 2010 failures, using December 2009 input data and models fitted on December 2008 input data and 2009 failures. Rank order accuracy measures are the most appropriate measures of model performance for our purposes. We use the area under the receiver operating characteristic (ROC) curve, also known as the AUC measure, to gauge model performance. To obtain the AUC one begins by measuring the model’s ability to predict a binary event, such as failing within a certain year or not. The predictive model output is a probability bounded between zero and one, hence one can measure model predictive accuracy at alternative thresholds for classifying probabilities as indicating that an event (non-event) will occur, e.g., if the probability of an event is greater than 60 percent the probability is classified as “the event will occur”. Next, one graphs the model’s ability to predict events correctly at alternative classification thresholds, comparing the true positive rate to the false positive rate. This is illustrated in figure 2. The AUC is the area under the solid red curve comparing the true positive rate to the false positive rate at alternative classification thresholds. The dashed gray diagonal line in figure 2 has an AUC of 50 percent, representing a random guess. As a rule of thumb, the larger the AUC the better the prediction. There is no standard for what AUC values indicate an acceptable predictive model other than the comparison to random guesses; however, AUCs in the 90 percent range have been achieved by external fraud prediction models for bank credit cards.

[Figure 2. ROC curve: true positive rate versus false positive rate at alternative classification thresholds; the dashed gray diagonal (AUC = 50 percent) represents a random guess.]
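A minimal base-R sketch of the threshold, ROC and AUC calculations described above follows; actual (a 0/1 failure indicator) and pd_hat (out-of-sample predicted probabilities) are hypothetical inputs.

```r
# Sketch: ROC curve and AUC in base R. 'actual' and 'pd_hat' are placeholders.
thresholds <- seq(0, 1, by = 0.01)

rates <- t(sapply(thresholds, function(th) {
  pred <- as.integer(pd_hat >= th)                # classify "event" at/above threshold
  c(tpr = sum(pred == 1 & actual == 1) / sum(actual == 1),
    fpr = sum(pred == 1 & actual == 0) / sum(actual == 0))
}))

plot(rates[, "fpr"], rates[, "tpr"], type = "l",
     xlab = "False positive rate", ylab = "True positive rate")
abline(0, 1, lty = 2)                             # random-guess benchmark (AUC = 0.5)

# AUC via the rank (Wilcoxon) formulation: the probability a randomly chosen
# failure receives a higher score than a randomly chosen non-failure.
r  <- rank(pd_hat)
n1 <- sum(actual == 1); n0 <- sum(actual == 0)
(sum(r[actual == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
```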

Table 5 presents the AUC statistics for out-of-sample 2010 failure predictions for the 4 models tested. In this case all models performed similarly in terms of predictive accuracy. Interestingly, the logit model performed as well as all ML models tested. One possible reason for this result is that there are no significant nonlinear relationships that ML approaches can model and logistic regression cannot. Further, as part of data preparation we removed highly collinear explanatory variables and data outliers, and normalized the data using min-max normalization. These preparation steps improve the performance of regression and ML models, putting all approaches on a more even footing.

TABLE 5. FAILURE MODEL 2010 OUT-OF-SAMPLE PREDICTIVE ACCURACY


ATTACHMENTS FOR ML FAILURE MODELS

Attached is the R computer code for estimating the 4 ML models using min-max scaling of data. The input data is the same as that used by the unscaled logit model (previously attached).
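For readers who want to experiment before opening the attachment, the sketch below fits one of the ML techniques mentioned above (a random forest, via the randomForest package) alongside a logit model and compares out-of-sample AUCs. The training and test data frames, the variable list and the use of the randomForest package are illustrative assumptions, not the contents of the attached code.

```r
# Sketch: compare out-of-sample AUC for a logit model and a random forest.
# Assumes hypothetical data frames:
#   train -- December 2008 ratios plus a 0/1 'failed' indicator for 2009,
#   test  -- December 2009 ratios plus a 0/1 'failed' indicator for 2010.
library(randomForest)

vars <- c("PROV1", "NOI", "NONCORE", "TOTALBAD", "CAPITAL")

# Min-max normalization using the training-set ranges
for (v in vars) {
  lo <- min(train[[v]], na.rm = TRUE)
  hi <- max(train[[v]], na.rm = TRUE)
  train[[v]] <- (train[[v]] - lo) / (hi - lo)
  test[[v]]  <- (test[[v]]  - lo) / (hi - lo)
}

logit <- glm(failed ~ ., data = train[, c(vars, "failed")], family = binomial)
rf    <- randomForest(x = train[, vars], y = factor(train$failed), ntree = 500)

pd_logit <- predict(logit, newdata = test, type = "response")
pd_rf    <- predict(rf, newdata = test[, vars], type = "prob")[, "1"]

auc <- function(actual, score) {              # rank-based (Wilcoxon) AUC
  r  <- rank(score)
  n1 <- sum(actual == 1); n0 <- sum(actual == 0)
  (sum(r[actual == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
}
c(logit = auc(test$failed, pd_logit), random_forest = auc(test$failed, pd_rf))
```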


NON-MODELLED DEFAULT RISK ESTIMATES

Jurisdictions need not rely on forecasts of bank default risk from statistical and ML approaches, especially when bank defaults have been infrequent historically. In these circumstances it is appropriate to use information on banks’ credit risk ratings assigned by credit rating agencies and the historical default rate of banks by credit rating. Ideally, the historical default rates by credit rating would be from the jurisdiction’s region and/or similar regions in terms of economic and political regimes. Since default rates vary with underlying economic conditions, the jurisdiction should select default data from a period where economic conditions best reflect the current risk exposures of the banks it is insuring. Table 6 presents the current, average long-term and crisis period one-year bank default rates from a 2013 Fitch study on global bank defaults. In table 6 we see that bank default rates increased dramatically for the 2008–2009 period.

TABLE 6. FITCH CREDIT RATINGS AND GLOBAL BANK FAILURE RATES


“Note that the average bank failure rates in table [6] do not always increase the poorer the Fitch individual ratings (IRs). Specifically, table [6] shows that the average failure rate for A-rated banks exceeds that for B-rated banks for the 1990–2011 period and the average failure rates for A, B, and C-rated banks exceed that for D-rated banks for the 2008–2009 period. The non-monotonic relationship between Fitch IRs and bank failure rates is not necessarily inconsistent with the definitions of IRs and bank failures in the Fitch 2013 study. Recall that Fitch classified some banks as failed due to reliance on extraordinary support even though these same banks made debt payments.”

A limitation of this approach to estimating bank default risk is that typically only the largest banks in a jurisdiction issue publicly traded debt and have been assigned credit ratings. To address this limitation jurisdictions can use bank safety and soundness ratings assigned by bank supervisors as indicators of default risk. Supervisory ratings of bank safety and soundness are based on bank capital adequacy, asset quality, management, earnings, liquidity and sensitivity to market risk, generally referred to as bank CAMELS attributes. The historical default rates of banks by CAMELS rating can be applied in much the same manner as default rates by credit rating.
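As a simple illustration of this mapping, the sketch below assigns default probabilities by supervisory composite rating in R. The rating-to-default-rate mapping, the assumed loss rate and the bank data are illustrative placeholders only (they are not the Fitch rates shown in table 6).

```r
# Sketch: expected one-year default probabilities by supervisory composite rating.
# All numbers are illustrative placeholders; a jurisdiction would substitute its
# own historical default rates (or rates from a comparable region and period).
rating_pd <- c("1" = 0.001, "2" = 0.003, "3" = 0.015, "4" = 0.060, "5" = 0.200)

banks <- data.frame(bank             = c("A", "B", "C"),
                    camels           = c("2", "3", "4"),
                    insured_deposits = c(500, 220, 80))     # millions, hypothetical

banks$pd            <- rating_pd[banks$camels]
banks$expected_loss <- banks$pd * 0.25 * banks$insured_deposits   # assumed 25% loss rate
banks
```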

13.2 RISK-RELATED PREMIUMS

A jurisdiction’s deposit insurance premium system should be related to the default risk of insured banks and losses to the deposit insurer given default. In this way, insurance premiums might be set to cover expected insurance losses. As expressed in equation 4, insurer expected losses are the product of PD, LGD and EAD for banks. Deposit insurers cannot know when banks might fail, a priori, and must work with the uncertainties of banking markets when establishing insurance premium rates. We discuss approaches for setting deposit insurance premium rates in the following section.

13.3 SETTING DEPOSIT INSURANCE PREMIUM RATES

There are five important questions when setting risk-related deposit insurance premiums. First, what will be the basis for risk ranking banks? Second, what are the expected deposit insurance losses each period? Third, what time horizon will be used for insurer loss measurement? Fourth, how will premiums vary for banks over this time horizon? Finally, what is the insurer’s objective when setting premium rates? We begin our discussion with the last and most important question.

DEPOSIT INSURERS’ OBJECTIVES WHEN SETTING PREMIUMS

For private sector consumer and commercial insurers, the answer to the question of insurer objective(s) is to maximize profit. This implies premium rates include coverage of expected insurance losses plus coverage of the costs of providing insurance and an additional premium for the risks insurer equity shareholders bear.

For public sector deposit insurers there are no equity shareholders; hence, a risk premium for shareholders is a moot point. As a consequence, public sector deposit insurers focus on coverage of expected insurance losses. Expected insurance losses for a bank are the product of its probability of default and failure-resolution costs (insurer losses) given default for the same period.

ACTUARIALLY FAIR PREMIUMS

Equation 11 shows the criteria for setting actuarially fair deposit insurance premiums for an individual bank i. Actuarially fair premiums are set such that the expected premium revenue equals the expected insurance payments. The left-hand side of equation 11 measures the present discounted value of expected insurance premiums between periods 1 and N; PD_{i,t} is the probability of default for bank i in period t, hence 1 − PD_{i,t} is the probability the bank will not fail in period t, γ_{i,t} is the premium rate for the bank and ID_{i,t} is its insured deposits. To account for the time value of money, each revenue (expense) stream is discounted by an appropriate discount rate, r_t. The right-hand side of equation 11 measures the insurer’s expected losses between periods 1 and N from bank i; δ_{i,t} is the insurer’s loss rate on insured deposits given bank default.

Σ_{t=1}^{N} [ (1 − PD_{i,t}) γ_{i,t} ID_{i,t} / (1 + r_t)^t ] = Σ_{t=1}^{N} [ PD_{i,t} δ_{i,t} ID_{i,t} / (1 + r_t)^t ]    (11)

Equation 12 shows the condition for actuarially fair premiums for all insured banks by summing equation 11 over all banks i = 1 to I:

Σ_{i=1}^{I} Σ_{t=1}^{N} [ (1 − PD_{i,t}) γ_{i,t} ID_{i,t} / (1 + r_t)^t ] = Σ_{i=1}^{I} Σ_{t=1}^{N} [ PD_{i,t} δ_{i,t} ID_{i,t} / (1 + r_t)^t ]    (12)

In equations 11 and 12 the periodic premium rates, γ_{i,t}, vary across banks and time, i.e., they are risk-related premiums.
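The sketch below solves equation 11 in R for a single flat premium rate for one bank over a hypothetical five-year horizon; all input paths are placeholder values, not calibrated estimates.

```r
# Sketch: solve equation 11 for one flat premium rate (gamma) for a single bank.
# All input paths are hypothetical placeholders.
pd    <- c(0.010, 0.012, 0.015, 0.015, 0.020)  # PD_{i,t}
delta <- rep(0.25, 5)                          # loss rate on insured deposits given default
id    <- rep(1000, 5)                          # insured deposits (millions)
r     <- rep(0.03, 5)                          # discount rates
disc  <- 1 / (1 + r)^seq_along(r)              # discount factors

pv_losses   <- sum(disc * pd * delta * id)     # right-hand side of equation 11
pv_prembase <- sum(disc * (1 - pd) * id)       # premium base, collected only if the bank survives

gamma_fair <- pv_losses / pv_prembase          # flat actuarially fair premium rate
gamma_fair * 10000                             # expressed in basis points
```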

In practice, private sector consumer and commercial insurance premiums are not actuarially fair because insurers require an additional premium for bearing insurance risk, and need to cover the administrative and fixed costs of providing insurance. Further, moral hazard on the part of the insured individual (organization) can lead to mispriced insurance premiums that are not actuarially fair. Public sector deposit insurers do not require an additional premium for bearing insurance risk but do incur administrative and other operating costs for providing insurance and do face moral hazard.

OPERATING EXPENSE COVERAGE

Equations 11 and 12 omit the operating costs of providing deposit insurance to banks. These latter costs are operating expenses for insurance-related activities, i.e., expenses for personnel, property, equipment and other insurance-related operating expenses. There is an expectation that more insurer resources are devoted to large and complex banks than to smaller non-complex banks. For example, the amount of time and resources devoted to bank risk assessments is likely to increase with bank size and complexity. Further, administrative and legal costs are likely to be higher for large and complex banks than for small, non-complex banks. Depending on the insurer’s ability to allocate operating expenses, these expenses can be associated with individual banks. At the least, the insurer can consider allocating operating expenses to banks on a pro-rata basis based on each bank’s share of banking market insured deposits. Recovery of operating expenses can be treated as an “add-on” to revenue neutral premiums. For example, table 7 presents a risk-related premium rate of 24 basis points that might apply to banks in the highest risk category and 8 basis points for banks in the lowest risk category. The insurer might add a 5 basis point charge for regular operating expenses to all banks. For convenience, we assume the base to which premium rates apply is insured deposits.

TABLE 7. HYPOTHETICAL ANNUAL PREMIUM RATE SCHEDULE


While risk-related premiums rise and fall with banking market stress, regular operating expenses are likely to be relatively constant over time. To avoid undue burden on banks, insurers could choose to recover operating expenses from returns on the investment portfolio. This latter approach is reasonable once the insurer has reached its optimal or “target” capital level, which we discuss next.
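As a concrete illustration, the sketch below applies a table 7-style schedule in R. Only the 24 basis point and 8 basis point endpoint rates and the 5 basis point add-on come from the text; the intermediate category rate, bank names and deposit amounts are hypothetical.

```r
# Sketch: apply risk-related rates plus a flat 5 basis point operating-expense
# add-on. The 'moderate' rate and all bank data are illustrative assumptions.
risk_rate_bp <- c(low = 8, moderate = 14, high = 24)
opex_bp      <- 5

banks <- data.frame(bank             = c("A", "B", "C"),
                    risk_category    = c("low", "high", "moderate"),
                    insured_deposits = c(800, 150, 300))          # millions, hypothetical

banks$annual_premium <- (risk_rate_bp[banks$risk_category] + opex_bp) / 10000 *
  banks$insured_deposits
banks
```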

13.4 METHODOLOGIES FOR DETERMINING THE TARGET FUND

Once the insurer has quantified its risk tolerance, economic conditions and loss period, the process of determining the target fund is relatively straightforward. Conceptually, the insurer needs to determine the level of losses it is willing to incur over a period of time under certain conditions and the likelihood those losses would be incurred; this is essentially value-at-risk (VaR) modelling. The result can be displayed as an empirical frequency distribution of insurer losses (histogram). There are a number of ways the insurer could estimate this loss distribution:

  • Loss Distribution Approach
  • Credit Portfolio Approach
  • Risk Aggregation Model (Copula)
  • Stress Scenario Approach

We next discuss each approach for estimating an insurer’s frequency distribution of losses. In the discussion of insurer loss distribution, we focus on the methodology and provide detailed examples of model implementation in the analysis tools section of the toolkit.

In the loss distribution approach (LDA) the insurer uses historical information on deposit insurance losses for its jurisdiction and analyzes the distribution of loss levels and loss frequencies (the LDA histogram). The LDA histogram for deposit insurers is typically skewed to the right and heavy-tailed, with the frequencies of losses declining as loss levels increase. Given an LDA histogram, the insurer can set the target fund to be that which will absorb losses up to a chosen confidence level, as explained in the previous hypothetical example (figure 1).

The credit portfolio approach is based on the work of Merton [21] and Vasicek [32, 33, 34]. Merton models a firm financed with a single bond and equity. In this framework the firm’s equity is a call option on the firm’s assets with a strike price equal to the value of its debt (the bond); when the market value of the firm’s assets falls below that of its liabilities the option is worthless and the firm fails, defaulting on the bond. This situation is modelled in terms of asset returns: failure occurs when firm losses (negative returns) are large enough that the market value of the firm’s assets, reduced by the loss, is less in total value than that of liabilities. Vasicek broadens the Merton model to apply to a portfolio of loans. The Vasicek model adds the drivers of asset returns to a portfolio loss model. Asset returns are assumed to be determined by two factors, idiosyncratic and systematic. Idiosyncratic asset return drivers are, as the name implies, specific to each firm, e.g., the retirement of a chief executive officer (CEO). Systematic return drivers are common to all firms, i.e., they impact all firms. While there can be multiple systematic risk drivers, Gordy [28] shows these systematic risk factors can be incorporated into a single, asymptotic risk factor or common systematic return driver.

The credit portfolio approach to determining insurer losses combines the Merton and Vasicek models into what is commonly known as the Merton-Vasicek model. In this model, expected bank failure losses of the deposit insurer are modeled as the product of banks’ PD, LGD and EAD as discussed previously. Equation 13 repeats that approach to insurer loss estimation, showing expected losses to an insurer from the failure of bank j:

E[Loss_j] = PD_j × LGD_j × EAD_j    (13)

In the Merton-Vasicek model asset returns are determined by the idiosyncratic risk factor, E_j, and the common risk factor, X. A single period return on bank j’s assets, R_j, is a linear combination of the idiosyncratic and systematic risk factors (return drivers), where w_j is the weight placed on the systematic factor and n_j is the weight placed on the idiosyncratic factor, as shown in equation 14:

R_j = w_j X + n_j E_j    (14)

In the Merton-Vasicek model banks fail when their asset return is less than or equal to a threshold value specified by the modeler, C (equation 15).

Bank j defaults if R_j ≤ C    (15)

It can be shown that the weights on the risk drivers can be stated in terms of the correlation coefficient between any two banks’ asset returns, ρ_{i,j}, as shown in equation 16:

w_j = √ρ_{i,j},  n_j = √(1 − ρ_{i,j})    (16)

To implement the Merton-Vasicek model one can use Monte Carlo simulations to generate bank asset returns and then determine the frequency of bank failures by comparing returns to a threshold return. The return risk factors can be generated from two standard normal variable distributions whose mean and standard deviation are calibrated to the sovereign economy. Next, an estimate of bank net income correlation is used to provide weights for the risk factors and to form the weighted sum of risk factors, i.e., simulated bank asset returns. The random samples of returns are compared to the threshold return to determine whether a bank fails or not. The return threshold can be set at a level commensurate with the industry bank failure rate experienced during a chosen time period. This is straightforward once the risk factors and returns have been converted to standard normal variables, i.e., one simply picks a return threshold where the probability of returns at that level or less equals the chosen bank failure rate. Historical data on LGD and current data on EAD can be used to complete the expected insurance loss model. O’Keefe and Ufier [23, 24] provide examples of the application of the credit portfolio approach to target fund estimation for the Nigeria Deposit Insurance Corporation and FDIC, respectively.
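The following base-R sketch implements the simulation loop just described for a hypothetical banking system. The correlation, industry failure rate, loss rate and exposure values are illustrative assumptions, not the calibrations used in the attached case studies, and the risk factors are drawn as standard normals for simplicity.

```r
# Sketch: single-factor (Merton-Vasicek) Monte Carlo of annual insurer losses.
# All calibration values are illustrative assumptions.
set.seed(1)
n_banks <- 500
n_sims  <- 10000
rho     <- 0.15                                  # asset-return correlation
pd_bar  <- 0.02                                  # industry one-year failure rate
C       <- qnorm(pd_bar)                         # return threshold matching that failure rate
lgd     <- 0.25                                  # loss rate on exposure given default
ead     <- rlnorm(n_banks, meanlog = 4, sdlog = 1)   # insured deposits per bank (hypothetical)

losses <- replicate(n_sims, {
  x   <- rnorm(1)                                # common systematic factor (one draw per year)
  e   <- rnorm(n_banks)                          # idiosyncratic factors
  ret <- sqrt(rho) * x + sqrt(1 - rho) * e       # equation 14 with weights from equation 16
  failed <- ret <= C                             # equation 15
  sum(lgd * ead[failed])                         # insurer loss in this simulated year
})

hist(losses, breaks = 50, main = "Simulated annual insurer losses")
quantile(losses, 0.99)                           # target fund covering 99% of simulated years
```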

An advantage of the credit portfolio approach compared to the LDA is the ability to explicitly model the correlation of asset returns. Further, the choice of an asset return threshold for failure events allows the modeler to calibrate the model to historical and/or future expected failure rates. Given the model’s features, the Merton-Vasicek model can be applied in stress tests that provide a range of potential insurer loss distributions under different assumptions about underlying banking industry conditions.

The risk aggregation approach (copula), while often applied to private-sector financial institutions, has, with the exception of Kusaya, O’Keefe and Ufier [19], not been applied to public-sector deposit insurers to the best of our knowledge. In the risk aggregation approach, one first determines the sources of revenues and expenses for the deposit insurer, e.g., insurance premium and investment revenue, as well as operating and bank-failure resolution expenses. These revenues and expenses determine the insurer’s net income (loss), which in turn augments (depletes) the insurer’s capital. In this approach to modelling optimal capital (the target fund) one uses copulas to combine the income and expense streams into a multivariate distribution. Monte Carlo simulations are used to make random draws from the insurer’s multivariate income and expense distribution; the revenues and expenses in each draw are summed to estimate the probability distribution of net income (loss) for the insurer. This income (loss) distribution can be used to determine the target fund level for the insurer in a manner similar to that explained in the previous example. The copula approach is used by Kusaya, O’Keefe and Ufier [19] to determine the target fund level for the FDIC in a case study.


To implement the copula approach to target fund estimation one finds the best fitting univariate distribution for each key revenue and expense category; this can be done by testing the similarity of historical distributions of annual revenues and expenses, by categories, to theoretical distributions and conducting distribution fit tests. Next, the revenue and expense data are used to find the best fitting copula function. In the final step, a multivariate copula is calibrated to the univariate distributions, e.g., sample mean and standard deviation are used to calibrate a normal distribution for a revenue category, and the copula function is calibrated to the best fitting copula based on historical data for key revenue and expense categories.

Since there may be gaps in the historical data for some portions of the univariate distributions, a Monte Carlo simulation is used to make random draws from the calibrated copula function for each revenue and expense category. Revenues and expenses from each random draw from the copula are summed to provide a random draw of net income. Finally, draws of net income from, say, 50,000 simulations are used to form an empirical frequency distribution of net income. The frequency distribution of annual net income (histogram) will range from positive to negative values. The insurer decides on the level of annual losses it wishes to absorb, e.g., a net loss of $129 billion, which will have a corresponding confidence level, e.g., annual net income exceeds -$129 billion in 99.97 percent of outcomes.
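A base-R sketch of the simulation step follows, using a Gaussian copula built from correlated normals rather than a dedicated copula package. All marginal distributions, their parameters and the correlation matrix are illustrative assumptions, not the fitted values from the FDIC case study.

```r
# Sketch: Gaussian-copula simulation of insurer net income. All parameters are
# illustrative assumptions; in practice they are fitted to historical data.
set.seed(1)
n_sims <- 50000

# Assumed correlation among: premium revenue, investment revenue,
# operating expenses, failure-resolution losses.
R <- matrix(c(1.0, 0.2, 0.1,  0.3,
              0.2, 1.0, 0.0, -0.1,
              0.1, 0.0, 1.0,  0.2,
              0.3,-0.1, 0.2,  1.0), nrow = 4, byrow = TRUE)

z <- matrix(rnorm(n_sims * 4), ncol = 4) %*% chol(R)   # correlated standard normals
u <- pnorm(z)                                          # uniforms with the copula's dependence

premiums   <- qnorm(u[, 1], mean = 12, sd = 2)         # $ billions, hypothetical marginals
investment <- qnorm(u[, 2], mean = 4,  sd = 1)
opex       <- qnorm(u[, 3], mean = 2,  sd = 0.3)
fail_loss  <- qlnorm(u[, 4], meanlog = 1, sdlog = 1.2) # heavy-tailed failure losses

net_income <- premiums + investment - opex - fail_loss
hist(net_income, breaks = 100, main = "Simulated annual net income")
quantile(net_income, 0.0003)   # net income undercut in only 0.03% of draws (99.97% confidence)
```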

The stress scenario approach to determining the target fund is a pragmatic approach. Rather than modelling the insurer’s loss distribution from historical data (LDA) or relying on Monte Carlo simulations based on assumed probability distributions (the Merton-Vasicek and copula approaches), the stress scenario approach does not rely on statistical probability distributions. Instead, it asks whether the deposit insurer has sufficient capital to withstand the losses that would result from the failure of banks under a specific economic scenario. For example, a scenario might test whether a deposit insurer can absorb the cost of the number and type of bank failures the scenario might generate, e.g., the failure of 20 small banks and 3 medium size banks within a 3-year period. The specifications for small and medium size banks would be part of the scenario information, as well as the expected failure-resolution costs for banks. The stress scenario approach is useful in jurisdictions where few, if any, bank closings have occurred. In that situation one lacks sufficient information to model the components of insurer losses from bank failures, hence all one is left to work with are assumptions about a bank’s total failure costs. The stress scenario approach can use “worst-case” scenarios to get around the thorny questions about the drivers of failure-resolution costs. For example, in a worst-case scenario one can estimate failure-resolution costs as the value of insured deposits of the sample of small and medium size banks used by the scenario, assuming zero recoveries from failed-bank receiverships. The stress-scenario approach offers transparency in that insurer losses can be traced to a specific event and set of assumptions that are easy to understand and communicate. The approach, however, makes strong assumptions in scenario design.
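The worst-case arithmetic is simple enough to show directly; in the sketch below the representative insured-deposit sizes for small and medium banks are assumptions.

```r
# Sketch: worst-case stress scenario loss -- 20 small and 3 medium bank failures
# within 3 years, zero receivership recoveries, so the insurer's loss equals the
# insured deposits of the failed banks. Deposit sizes are hypothetical.
small_id  <- 0.2    # insured deposits of a representative small bank, $ billions
medium_id <- 2.5    # insured deposits of a representative medium bank, $ billions

scenario_loss <- 20 * small_id + 3 * medium_id
scenario_loss   # compare with the current fund plus premiums expected over the 3 years
```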

13.4.1 LOSS DISTRIBUTION APPROACH

As discussed in section 7.3 given a history of insurance losses the insurer can determine the empirical frequency distribution (histogram) of annual losses and determine a target fund level sufficient to cover a chosen range of losses, e.g., $0 to $25 billion. The Excel workbook attached below contains data on the FDIC’s annual losses from bank failures between 1984 and 2019. These data are used to find the frequency distribution of losses and a target fund associated with a 97 percent confidence level of funding losses.
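A base-R sketch of the calculation performed in the attached workbook is shown below; the loss series is a placeholder, not the actual FDIC 1984–2019 data.

```r
# Sketch: loss distribution approach. 'annual_losses' is an illustrative
# placeholder for the attached workbook's historical FDIC loss series.
annual_losses <- c(1.2, 0.4, 7.9, 3.1, 0.0, 0.2, 18.5, 2.4, 0.9, 0.1,
                   0.0, 0.0, 0.3, 5.6, 24.0, 12.2, 0.7, 0.0, 1.5, 0.2)  # $ billions

hist(annual_losses, breaks = 10,
     main = "Empirical frequency distribution of annual insurance losses")

# Target fund sized to cover 97 percent of historical annual loss outcomes
quantile(annual_losses, probs = 0.97)
```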


13.4.2 MERTON-VASICEK APPROACH

O’Keefe and Ufier [23, 24] develop Monte Carlo simulation models that implement a Merton-Vasicek credit risk model for deposit insurers in Nigeria and the United States, respectively. The models are used to estimate the target fund ratio for these insurers. An Excel file that implements the target fund ratio model for the United States FDIC is attached below. Users should be familiar with Excel Visual Basic code.


13.4.3 COPULA APPROACH

Kusaya, O’Keefe and Ufier [19] develop a risk aggregation model that combines the annual revenues and expenses of the United States FDIC using a copula wherein the probability distributions of revenues and expenses have been calibrated to different historical periods. The copula model produces a probability distribution for FDIC annual net income that includes the possibility of high expenses for deposit insurance outlays. Kusaya, O’Keefe and Ufier [19] also apply insurer ruin theory to the FDIC to estimate the probability of insurer insolvency under alternative economic conditions and insurer approaches for achieving a target fund. As far as we know, Kusaya, O’Keefe and Ufier are the first to apply copula risk measures and ruin theory, as developed by actuarial science for consumer and commercial insurers, to a bank deposit insurer.

ATTACHMENTS FOR COPULA APPROACH

The following attachments are the R code and data used by Kusaya, O’Keefe and Ufier [19] to estimate the FDIC’s target fund and probability of ruin.


13.4.4 STRESS SCENARIO APPROACH

Absent adequate data on historical bank failures and insurance losses that could be used in the target fund models discussed previously (loss distribution approach, Merton-Vasicek model, and copula model), deposit insurers can use a hypothetical stress scenario approach as a supplement to their current approach for setting the target fund ratio. There are numerous methods for developing hypothetical stress scenarios:

  • Top-level, assumption-based approaches that use available historical data on the insurer’s losses and current loss exposure. For example, Seelig, et al. [27] recommend a three-step approach for determining the target fund ratio for the Philippine Deposit Insurance Corporation (PDIC).


This top-level approach to setting the target fund relies on aggregate data for insurance losses over a historical period, plus an assumed loss from the failure of an arbitrarily chosen large bank. Absent historical experience with large bank failures, however, there is considerable uncertainty in the insurance loss rate assumed for the large bank.

  • Alternatives to top-level, assumption-based stress scenarios are bank-level, simulation-based stress scenario approaches for projecting bank failures and insurance losses. These latter stress scenarios can be applied to credit, operational, market or any other type of risk the insurer believes is most relevant to its current risk profile. Examples of bank-level, simulation-based stress scenario models are provided in the attachments below.


ATTACHMENTS FOR STRESS-SCENARIO MODELS
