JB&F example
Sample code for computing the figures in Table 4 in:
Embrechts, P., Puccetti, G. and L. Rüschendorf (2013). Model uncertainty and VaR aggregation. J. Bank. Financ., 37(8), 2750-2764.
Disclaimer
The sample code below uses a randomized starting matrix. This is not strictly necessary (it can be dropped), is time consuming, and will produce slightly different results at each run of the algorithm; see the remark "Randomise or not to randomise" in:
Aas, K. and G. Puccetti (2014). Bounds for total economic capital: the DNB case study. Extremes, 17(4), 693-715.
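Before the full algorithm, here is a minimal sketch (an illustration, not code from the paper) of the core rearrangement step used throughout: each column is sorted oppositely to the sum of the remaining columns, which can only decrease the variance of the row sums. The seed and the 4x3 matrix are arbitrary choices for this demonstration.

```r
# Illustration of one rearrangement sweep on a small random matrix.
set.seed(1)                       # arbitrary seed, for reproducibility only
A <- matrix(runif(12), nrow = 4)  # small 4 x 3 matrix of losses
v_before <- var(rowSums(A))
for (j in 1:ncol(A)) {
  others <- rowSums(A[, -j, drop = FALSE])
  # the largest entries of column j go to the rows where 'others' is smallest
  A[, j] <- sort(A[, j], decreasing = TRUE)[rank(others)]
}
v_after <- var(rowSums(A))        # v_after <= v_before after the sweep
```

By the rearrangement inequality, oppositely ordering a column against the sum of the others minimizes the variance of the row sums over all permutations of that column, so each sweep is guaranteed not to increase it.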
# Algorithm in R to compute the WORST possible VaR range for the sum of d random losses identically distributed as F; see Table 4 in [EPR13] for F = Pareto.
# INPUT PARAMETERS
# alpha = confidence level at which VaR is computed
alpha=0.999
# N = cardinality of the support of the approximation of EACH marginal distribution (the larger N, the more accurate and the more time-consuming the algorithm)
N=10^5
# d = dimensionality of the risk portfolio
d=8
# Q = quantile function of the marginal distribution F (this can be changed according to the end-user needs).
#PARETO DISTRIBUTION
omega=2
Q=function(x){(1-x)^(-1/omega)-1}
#LOGNORMAL DISTRIBUTION
#Q=function(x){qlnorm(x, meanlog = 6.474105, sdlog = 0.7213475, lower.tail = TRUE, log.p = FALSE)}
#EXPONENTIAL DISTRIBUTION
#Q=function(x){qexp(x,rate=expmu,lower.tail=TRUE,log.p=FALSE)} # define the rate parameter expmu before use
# CREATION OF THE MATRICES LX AND UX
# X=auxiliary matrix
X=matrix(rep(Q(seq(alpha,1,length=N+1)),d),nrow=N+1)
#LX = matrix representing an approximation of the marginal distributions from below
LX=X[1:N,]
#UX = matrix representing an approximation of the marginal distributions from above
UX=X[2:(N+1),]
#randomization of LX and UX (this is not really necessary; see the Disclaimer above)
for(j in 1:d){
LX[,j]<-LX[rank(runif(N)),j]
UX[,j]<-UX[rank(runif(N)),j]}
#REARRANGEMENT PROCEDURE FOR LX AND UX
# epsilon = desired accuracy
epsilon=0.001
# LVaR = running estimate of the lower bound on the worst VaR
LVaR=Inf
# delta = difference between two consecutive estimates
delta=Inf
# iteratively rearrange each column of the matrix LX until the difference between
# two consecutive estimates of the worst VaR is less than epsilon
while(delta > epsilon){
#LVaRtemp = auxiliary variable
LVaRtemp=LVaR
for(j in 1:d) {
LX[,j]=sort(LX[,j],decreasing=TRUE)[rank(rowSums(LX[,c(1:d)[-j]]))] }
LVaR= min(rowSums(LX))
delta=abs(LVaR-LVaRtemp)
}
# iteratively rearrange each column of the matrix UX until the difference between
# two consecutive estimates of the worst VaR is less than epsilon
UVaR=Inf
delta2=Inf
while(delta2 > epsilon){
UVaRtemp=UVaR
for(j in 1:d) {
UX[,j]=sort(UX[,j],decreasing=TRUE)[rank(rowSums(UX[,c(1:d)[-j]]))] }
UVaR= min(rowSums(UX))
delta2=abs(UVaR-UVaRtemp)
}
# OUTPUT
# The WORST VaR lies in the interval [LVaR, UVaR].
print(LVaR)
print(UVaR)
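As a quick sanity check (this snippet is not part of the original code): under comonotonic dependence the VaR of the sum of d identical marginals is d*Q(alpha), and the worst VaR is at least this value, so both LVaR and UVaR above should exceed it.

```r
# Sanity check, assuming the Pareto marginals with omega = 2 used above:
# the comonotonic VaR d * Q(alpha) is a lower bound for the worst VaR.
alpha <- 0.999; d <- 8; omega <- 2
Q <- function(x) (1 - x)^(-1/omega) - 1
comonotonic_VaR <- d * Q(alpha)   # roughly 244.98 for these parameters
```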
# Algorithm in R to compute the BEST possible VaR range for the sum of d random losses identically distributed as F; see Table 4 in [EPR13] for F = Pareto.
# INPUT PARAMETERS
# alpha = confidence level at which VaR is computed
alpha=0.999
# N = cardinality of the support of the approximation of EACH marginal distribution (the larger N, the more accurate and the more time-consuming the algorithm)
N=10^5
# d = dimensionality of the risk portfolio
d=8
# Q = quantile function of the marginal distribution F (this can be changed according to the end-user needs).
#PARETO DISTRIBUTION
omega=2
Q=function(x){(1-x)^(-1/omega)-1}
#LOGNORMAL DISTRIBUTION
#Q=function(x){qlnorm(x, meanlog = 6.474105, sdlog = 0.7213475, lower.tail = TRUE, log.p = FALSE)}
#EXPONENTIAL DISTRIBUTION
#Q=function(x){qexp(x,rate=expmu,lower.tail=TRUE,log.p=FALSE)} # define the rate parameter expmu before use
# CREATION OF THE MATRICES LX AND UX
# X=auxiliary matrix
X=matrix(rep(Q(seq(0,alpha,length=N+1)),d),nrow=N+1)
#LX = matrix representing an approximation of the marginal distributions from below
LX=X[1:N,]
#UX = matrix representing an approximation of the marginal distributions from above
UX=X[2:(N+1),]
#randomization of LX and UX (this is not really necessary; see the Disclaimer above)
for(j in 1:d){
LX[,j]<-LX[rank(runif(N)),j]
UX[,j]<-UX[rank(runif(N)),j]}
#REARRANGEMENT PROCEDURE FOR LX AND UX
# epsilon = desired accuracy
epsilon=0.001
# LVaR = running estimate of the lower bound on the best VaR
LVaR=Inf
# delta = difference between two consecutive estimates
delta=Inf
# iteratively rearrange each column of the matrix LX until the difference between
# two consecutive estimates of the best VaR is less than epsilon
while(delta > epsilon){
#LVaRtemp = auxiliary variable
LVaRtemp=LVaR
for(j in 1:d) {
LX[,j]=sort(LX[,j],decreasing=TRUE)[rank(rowSums(LX[,c(1:d)[-j]]))] }
LVaR= max(rowSums(LX))
delta=abs(LVaR-LVaRtemp)
}
# iteratively rearrange each column of the matrix UX until the difference between
# two consecutive estimates of the best VaR is less than epsilon
UVaR=Inf
delta2=Inf
while(delta2 > epsilon){
UVaRtemp=UVaR
for(j in 1:d) {
UX[,j]=sort(UX[,j],decreasing=TRUE)[rank(rowSums(UX[,c(1:d)[-j]]))] }
UVaR= max(rowSums(UX))
delta2=abs(UVaR-UVaRtemp)
}
# OUTPUT
# The BEST VaR lies in the interval [LVaR, UVaR].
print(LVaR)
print(UVaR)
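Both programs above share the same loop structure. As a sketch (a hypothetical refactoring, not code from the paper), the worst-VaR computation can be wrapped into a single reusable function; `ties.method = "first"` is used in `rank` so that ranks stay integer even when infinite quantile values (e.g. Q(1) for the Pareto) produce tied row sums.

```r
# Hypothetical wrapper (illustration only): worst-VaR bounds as a function
# of the quantile function Q, level alpha, dimension d and grid size N.
# The lower (LX) and upper (UX) discretizations share one inner loop,
# which uses min(rowSums) as in the code above.
worst_var_bounds <- function(Q, alpha, d, N, epsilon = 1e-3) {
  X <- matrix(rep(Q(seq(alpha, 1, length = N + 1)), d), nrow = N + 1)
  rearrange <- function(M) {
    est <- Inf; delta <- Inf
    while (delta > epsilon) {
      old <- est
      for (j in 1:d)
        M[, j] <- sort(M[, j], decreasing = TRUE)[
          rank(rowSums(M[, -j, drop = FALSE]), ties.method = "first")]
      est <- min(rowSums(M))
      delta <- abs(est - old)
    }
    est
  }
  c(LVaR = rearrange(X[1:N, ]), UVaR = rearrange(X[2:(N + 1), ]))
}
```

For example, `worst_var_bounds(function(x) (1-x)^(-1/2) - 1, 0.999, 8, 10^4)` should approximate, up to discretization error, the worst-VaR interval printed by the first program above.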