Importance Sampling in the Presence of PD-LGD Correlation
Abstract
1. Introduction
Problem Formulation and Related Literature
2. Assumptions, Notation and Terminology
2.1. Large Portfolios and the Region of Interest
2.2. Systematic Risk Factors
2.3. Individual Losses
2.4. Conditional Tail Probabilities
2.5. Conditional Densities
3. Proposed Algorithm
Algorithm 1 Standard Monte Carlo Algorithm for Estimating the Tail Probability
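The standard Monte Carlo scheme of Algorithm 1 can be illustrated with a short sketch. This is Python rather than the Matlab used in the paper, and it assumes a Gaussian one-factor model with deterministic LGD purely for concreteness; the paper's framework is more general, and all names are ours:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

def standard_mc_tail_prob(n_sims, n_exposures, pd, rho, lgd, x):
    """Plain Monte Carlo estimate of the probability that the per-unit
    portfolio loss reaches x, in a Gaussian one-factor model with
    deterministic LGD (an illustrative simplification)."""
    c = NormalDist().inv_cdf(pd)                      # default threshold
    z = rng.standard_normal(n_sims)                   # systematic factor
    eps = rng.standard_normal((n_sims, n_exposures))  # idiosyncratic factors
    creditworthiness = np.sqrt(rho) * z[:, None] + np.sqrt(1 - rho) * eps
    loss = lgd * (creditworthiness < c).mean(axis=1)  # per-unit portfolio loss
    return (loss >= x).mean()
```

For thresholds deep in the tail, almost no replications register a hit, which is precisely the relative-error problem that motivates importance sampling.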
3.1. General Principles
Algorithm 2 IS Algorithm for Estimating the Tail Probability
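A minimal illustration of the IS idea behind Algorithm 2: sample the systematic risk factor from a shifted distribution and reweight each replication by the likelihood ratio. The sketch below shifts only the mean of a Gaussian factor (the algorithm proposed later also adjusts its variance), and the shift parameter `mu` is our assumption:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)

def is_tail_prob(n_sims, n_exposures, pd, rho, lgd, x, mu):
    """IS estimate of the tail probability: the systematic factor is drawn
    from N(mu, 1) instead of N(0, 1), and each replication is weighted by
    the likelihood ratio phi(z)/phi(z - mu) = exp(-mu*z + mu^2/2)."""
    c = NormalDist().inv_cdf(pd)
    z = mu + rng.standard_normal(n_sims)          # shifted systematic factor
    w = np.exp(-mu * z + 0.5 * mu * mu)           # likelihood ratio weights
    eps = rng.standard_normal((n_sims, n_exposures))
    cw = np.sqrt(rho) * z[:, None] + np.sqrt(1 - rho) * eps
    loss = lgd * (cw < c).mean(axis=1)            # per-unit portfolio loss
    return float(np.mean(w * (loss >= x)))
```

A negative `mu` pushes the factor toward the region where many defaults occur, so the rare event is hit frequently and the weights remove the resulting bias.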
3.2. Identifying the Ideal IS Densities
3.3. Approximating the Ideal IS Densities
3.3.1. Systematic Risk Factors
3.3.2. Individual Losses
3.4. Summary and Intuition
Algorithm 3 Proposed IS Algorithm for Estimating the Tail Probability
4. Practical Considerations
4.1. One- and Two-Stage Estimators
Algorithm 4 Proposed One-Stage IS Algorithm for Estimating the Tail Probability
4.2. Large First-Stage Weights
4.3. Large Rejection Constants
4.4. Computing
5. PD-LGD Correlation Framework
- Miu and Ozdemir (2006) allow for arbitrary systematic correlation ( unrestricted) and arbitrary idiosyncratic correlation ( unrestricted).
- Frye (2000) specifies for constants and . Potential loss takes values in . Its density has a point mass at zero and is proportional to a Gaussian density on . Since is not constrained to lie in the unit interval, this specification violates the assumptions made in Section 2.3;
- Witzany (2011) and Miu and Ozdemir (2006) both specify , where and denotes the cdf of the beta distribution with parameters a and b. Potential loss takes values in . It is a continuous variable and follows a beta distribution.
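One plausible reading of the beta specification above can be sketched as follows (the symbols in the list are garbled, so the driver correlation `r`, the sign conventions, and all names are our assumptions, chosen so that potential loss has a beta(a, b) marginal and is positively related to default):

```python
import numpy as np
from scipy.stats import norm, beta as beta_dist

rng = np.random.default_rng(2)

def simulate_pd_lgd(n, pd, r, a, b):
    """Draw (default indicator, potential loss) pairs from correlated
    Gaussian drivers; potential loss has a beta(a, b) marginal."""
    x1 = rng.standard_normal(n)                                   # default driver
    x2 = r * x1 + np.sqrt(1.0 - r * r) * rng.standard_normal(n)   # loss driver
    default = x1 < norm.ppf(pd)            # default iff driver below threshold
    loss = beta_dist.ppf(norm.cdf(-x2), a, b)   # beta(a, b) via inverse cdf
    return default, loss
```

With `r > 0`, defaulted names tend to have larger potential losses, i.e., PD-LGD correlation.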
5.1. Computing
5.2. Computing and
5.3. Exploring the Parameter Space
- Generate the default probability P uniformly between 0% and 10%, and generate each of the correlations and uniformly between 0% and 50%;
- In the one-factor model, generate uniformly on , i.e., takes on the value or with equal probability. If we generate uniformly between 0% and 100%, and if we generate uniformly between and . This allows us to control the sign of , which we must do in order to ensure a positive relationship between default and potential loss. In the two-factor model we randomly generate uniformly on . If is positive, randomly generate uniformly on , otherwise randomly generate uniformly on ;
- We choose the transformation to ensure that (i) potential loss is beta distributed and (ii) there is a positive relationship between default and loss. The parameters a and b of the beta distribution are generated independently from an exponential distribution with unit mean. If we set and if we set , where is the cumulative distribution function for the beta distribution with parameters a and b. Note that under these restrictions, in the one-factor model the expected loss function is monotone decreasing.
- Generate the number of exposures randomly between 10 and 5000;
- In the one-factor model we generate the threshold x by setting , where q is uniformly distributed on . The LPA suggests that
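The sampling scheme in the list above can be sketched in Python. Because the symbols are garbled, only the unambiguous draws are reproduced (the sign restrictions and threshold construction are omitted), and the variable names are ours:

```python
import numpy as np

rng = np.random.default_rng(3)

def draw_parameter_set():
    """One random parameter set for the simulation study of Section 5.3."""
    return {
        "pd": rng.uniform(0.0, 0.10),            # default probability, U(0%, 10%)
        "rho_sys": rng.uniform(0.0, 0.50),       # systematic correlation, U(0%, 50%)
        "rho_idio": rng.uniform(0.0, 0.50),      # idiosyncratic correlation, U(0%, 50%)
        "a": rng.exponential(1.0),               # beta parameter a, Exp(1), unit mean
        "b": rng.exponential(1.0),               # beta parameter b, Exp(1), unit mean
        "n_exposures": int(rng.integers(10, 5001)),  # number of exposures, 10..5000
    }
```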
6. Implementation
6.1. Selecting the IS Density for the Systematic Risk Factors
6.2. First Stage
6.2.1. Computing Parameters in the Two-Factor Model
6.2.2. Computing Parameters in the One-Factor Model
6.2.3. Trimming Large Weights
6.3. Second Stage
6.3.1. Approximating
6.3.2. Sampling Individual Losses
6.3.3. Efficiency of the Second Stage
7. Performance Evaluation
7.1. Statistical Accuracy
7.2. Computational Time
7.3. Overall Performance
8. Concluding Remarks
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
Appendix A. Exponential Tilts and Large Deviations
Appendix A.1. Properties of k(θ)
Appendix A.2. Legendre Transform of k(θ)
Appendix A.3. Exponential Tilts
Appendix A.4. Behaviour of Xi, Conditioned on a Large Deviation
Appendix A.5. Approximate Behaviour of (X1, X2, …, XN), Conditioned on a Large Deviation
Appendix B. Important Exponential Families
Appendix B.1. Gaussian
Appendix B.2. Chi-Square Family
Appendix B.3. t Family
References
- Bickel, Peter J., and Kjell A. Doksum. 2001. Mathematical Statistics: Basic Ideas and Selected Topics, 2nd ed. Upper Saddle River: Prentice Hall, Volume 1.
- Chan, Joshua C.C., and Dirk P. Kroese. 2010. Efficient estimation of large portfolio loss probabilities in t-copula models. European Journal of Operational Research 205: 361–67.
- Chatterjee, Sourav, and Persi Diaconis. 2018. The sample size required in importance sampling. Annals of Applied Probability 28: 1099–135.
- de Wit, Tim. 2016. Collateral Damage—Creating a Credit Loss Model Incorporating a Dependency between Defaults and LGDs. Master’s thesis, University of Twente, Enschede, The Netherlands.
- Deng, Shaojie, Kay Giesecke, and Tze Leung Lai. 2012. Sequential importance sampling and resampling for dynamic portfolio credit risk. Operations Research 60: 78–91.
- Eckert, Johanna, Kevin Jakob, and Matthias Fischer. 2016. A credit portfolio framework under dependent risk parameters PD, LGD and EAD. Journal of Credit Risk 12: 97–119.
- Frye, Jon. 2000. Collateral damage. Risk 13: 91–94.
- Frye, Jon, and Michael Jacobs Jr. 2012. Credit loss and systematic loss given default. Journal of Credit Risk 8: 109–40.
- Glasserman, Paul, and Jingyi Li. 2005. Importance sampling for portfolio credit risk. Management Science 51: 1643–56.
- Ionides, Edward L. 2008. Truncated importance sampling. Journal of Computational and Graphical Statistics 17: 295–311.
- Jeon, Jong-June, Sunggon Kim, and Yonghee Lee. 2017. Portfolio credit risk model with extremal dependence of defaults and random recovery. Journal of Credit Risk 13: 1–31.
- Kupiec, Paul H. 2008. A generalized single common factor model of portfolio credit risk. Journal of Derivatives 15: 25–40.
- Miu, Peter, and Bogie Ozdemir. 2006. Basel requirements of downturn loss given default: Modeling and estimating probability of default and loss given default correlations. Journal of Credit Risk 2: 43–68.
- Pykhtin, Michael. 2003. Unexpected recovery risk. Risk 16: 74–78.
- Scott, Alexandre, and Adam Metzler. 2015. A general importance sampling algorithm for estimating portfolio loss probabilities in linear factor models. Insurance: Mathematics and Economics 64: 279–93.
- Sen, Rahul. 2008. A multi-state Vasicek model for correlated default rate and loss severity. Risk 21: 94–100.
- Witzany, Jiří. 2011. A Two-Factor Model for PD and LGD Correlation. Working Paper. Available online: http://dx.doi.org/10.2139/ssrn.1476305 (accessed on 9 March 2020).
1. Relative error is the preferred measure of accuracy for large deviation probabilities. If $\hat{p}$ is an estimator of $p$, its relative error is defined as $\mathrm{sd}(\hat{p})/p$, where $\mathrm{sd}$ denotes standard deviation.
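In code, the relative error of a crude Monte Carlo estimator can be estimated from the simulated indicators themselves (a sketch; the function name is ours):

```python
import numpy as np

def relative_error(samples):
    """Estimated relative error: the standard error of the sample mean
    divided by the sample mean itself."""
    m = samples.mean()
    se = samples.std(ddof=1) / np.sqrt(len(samples))
    return se / m
```

For Bernoulli(p) indicators the relative error behaves like sqrt((1 - p)/(n p)), which blows up as p tends to zero: this is why crude Monte Carlo struggles with large deviation probabilities.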
2. In light of the almost sure limit in Equation (3), we have that converges to in distribution, which implies that Equation (6) is valid for all values of x such that . If is a continuous random variable (which it is in most cases of practical interest) then Equation (6) is satisfied for every value of x.
3. In the case of non-random LGD we have , where R is the known recovery rate on the exposure.
4. In the earliest stages of this project we focused directly on an IS density for and had difficulties identifying effective candidates.
6. All calculations are carried out using Matlab 2018a on a 2015 MacBook Pro with 6.8 GHz Intel Core i7 processor and 16 GB (1600 MHz) of memory. Numerical integration is performed using the built-in integral function.
7. We use the Matlab function fzero for the root-finding.
8. In the one-factor model, a tractable approximation to the ideal density can be obtained by using the LDA of Equation (13) to approximate both probabilities appearing in Equation (15).
9. Whether or not we adjust the variance of the systematic risk factor, the standard error of the resulting estimator is of the form $c/\sqrt{n}$, where $c$ depends on the model parameters and is easily estimated via simulation. Using 100 randomly selected parameter sets from the one-factor model, selected according to the procedure described in Section 5.3, we find that for the one-stage estimator $c_1 \approx 2c_2$, where $c_1$ denotes the value of $c$ assuming we only shift the mean of the systematic risk factor and do not adjust its variance and $c_2$ denotes the value when we do adjust variance. For probabilities in the range of interest, then, adjusting the variance of the systematic risk factor leads to an estimator that is nearly four times as efficient, in the sense that the sample size required to achieve a given degree of accuracy (as measured by standard error) is nearly four times larger if we do not adjust variance.
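The efficiency claim in footnote 9 can be made concrete with a small helper (ours, not the authors'): assuming the standard error takes the form c/sqrt(n), the sample size needed to hit a target standard error scales with c squared, so halving c cuts the required sample size by a factor of four.

```python
import math

def required_sample_size(c, target_se):
    """Smallest n such that c / sqrt(n) <= target_se."""
    return math.ceil((c / target_se) ** 2)
```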
10. As discussed in Appendix B, the natural sufficient statistic here consists of the components of plus the components of . As such, in order to satisfy Equation (27) we must ensure that and , where denotes mean under the IS distribution. These conditions are clearly equivalent to Equations (41) and (42).
| | One-Stage Algorithm | Two-Stage Algorithm |
|---|---|---|
| One-Factor Model | | |
| Two-Factor Model | | |
| Average Run Times | No IS | One-Stage IS | Two-Stage IS | | |
|---|---|---|---|---|---|
| One Factor | 7.3 | 25.6 | 33.7 | 1.5 | 0.8 |
| Two Factor | 7.4 | 39.0 | 55.5 | 14.3 | 8.9 |
| | No IS: 1% | No IS: 0.1% | No IS: 0.01% | One-Stage IS (Two-Stage IS): 1% | One-Stage IS (Two-Stage IS): 0.1% | One-Stage IS (Two-Stage IS): 0.01% |
|---|---|---|---|---|---|---|
| | 6 | 60 | 600 | 1.2 (2.3) | 1.2 (2.3) | 1.3 (2.4) |
| | 24 | 240 | 2400 | 1.8 (2.8) | 1.9 (2.9) | 1.9 (2.9) |
| | 600 | 6000 | 60,000 | 20.0 (18.8) | 21.8 (19.6) | 23.8 (20.4) |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Metzler, A.; Scott, A. Importance Sampling in the Presence of PD-LGD Correlation. Risks 2020, 8, 25. https://doi.org/10.3390/risks8010025