

Studienreihe der Stiftung Kreditwirtschaft an der Universität Hohenheim
Editor: Prof. Dr. Hans-Peter Burghof

Volume 52

Jan Müller

Optimal Economic Capital Allocation in Banking on the Basis of Decision Rights

Verlag Wissenschaft & Praxis

Bibliographic information of the Deutsche Nationalbibliothek: The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de.

D100
ISBN 978-3-89673-707-6
© Verlag Wissenschaft & Praxis Dr. Brauner GmbH 2015
D-75447 Sternenfels, Nußbaumweg 6
Tel. +49 7045 930093, Fax +49 7045 930094
[email protected]
www.verlagwp.de

All rights reserved. This work, including all of its parts, is protected by copyright. Any use outside the narrow limits of copyright law without the publisher's consent is inadmissible and liable to prosecution. This applies in particular to reproductions, translations, microfilming, and storage and processing in electronic systems. Printing and binding: Esser printSolutions GmbH, Bretten.


FOREWORD

The Studienreihe der Stiftung Kreditwirtschaft aims to present banking and finance subjects from the University of Hohenheim's research to interested expert readers. The publications are meant to promote the exchange of ideas between university and practice.

Banking regulation today forces banks to hold more economic capital in order to increase their ability to cover unexpected losses on their own, to safeguard their going concern and to prevent bankruptcy with a high probability. In this context, banks regularly choose their existing overall portfolio as the starting point for determining their economic capital requirements. A strict risk-return management perspective, however, suggests the reverse procedure. In this case the available economic capital represents the starting point. The capital then undergoes an allocation among the bank's business fields while maintaining a certain confidence level and maximizing the bank's overall expected return. This procedure finally determines the business fields' business volumes and thereby induces an overall bank management according to risk-return aspects.

Neither research nor practice so far provide clear and preferable overall bank management approaches of that type. This might be caused by the comprehensiveness of the underlying problem, which immediately arises once the problem is, correctly and necessarily, considered from a portfolio theoretical perspective. However, the pressure that surging equity requirements have put on banks' profitability since the financial crisis further increases the need for such integral overall bank management systems.

The present work emphasizes the crucial points to be addressed in the context of an optimal economic capital allocation. In doing so, the focus lies on the important consideration of decision makers and the implications of their autonomous decision making. The work makes a valuable contribution to the understanding of the challenges banks face today in overall bank management, driven by a changing economic and regulatory environment. This volume represents a further contribution to our successful promotion of the exchange of ideas between university and practice in highly relevant fields of banking and finance.

Hohenheim, June 2015
Prof. Dr. Hans-Peter Burghof


CONTENTS

FIGURES
TABLES
ALGORITHMS
1 INTRODUCTION
1.1 Problem and research question
1.2 Organization of the research
2 CORPORATE MANAGEMENT BY ECONOMIC CAPITAL ALLOCATION
2.1 Properties of economic capital
2.2 Required economic capital by downside risk measurement
2.3 Corporate management by bank-wide VAR limit systems
2.4 Economic capital allocation on the basis of risk adjusted performance measurement
2.4.1 Introduction to risk adjusted performance measures
2.4.2 Controversial benchmarking on the basis of hurdle rates
2.4.3 Implications of limit addressees in the form of decision makers
2.5 Economic capital allocation as a situation of delegation by decision rights
2.5.1 Implications for the risk management process
2.5.2 Costs of delegation by decision rights
3 IMPLICATIONS OF RELATED FIELDS OF RESEARCH
3.1 Different situations of economic capital allocation
3.2 Risk contribution – a form of economic capital allocation
3.2.1 Risk contribution schemes
3.2.2 Particular approaches from the field of risk contribution
3.3 Axiomatization of economic capital allocation
3.3.1 Axiomatization of risk measures
3.3.2 Transfer of the axiomatization framework to economic capital allocation
3.4 Risk assessment over time by dynamic risk measures
3.5 Economic capital allocation as a means of corporate management
3.6 Portfolio optimization under a downside risk constraint
3.6.1 Approaches on the basis of traditional methods of optimization
3.6.2 Heuristic methods of optimization
3.6.2.1 Categorization of the field of heuristic optimization
3.6.2.2 Approaches on the basis of heuristic optimization methods
4 BASIC MODEL OF OPTIMAL ECONOMIC CAPITAL ALLOCATION
4.1 Qualitative description of the model
4.2 Determination of the underlying stochastic program
4.3 Valuation of the objective function on the basis of a trading simulation
4.3.1 Simulation of the stocks' returns
4.3.2 Simulation of the business units' profits and losses
4.3.3 Simulation of the heterogeneous prospects of success of the business units
4.4 Out-of-sample backtesting and the role of importance sampling
5 HEURISTIC OPTIMIZATION OF RISK LIMIT SYSTEMS BY THRESHOLD ACCEPTING
5.1 Visual proof of non-convexity by an exemplary model case
5.2 Basic algorithm of threshold accepting
5.3 Determination of start solutions
5.4 Neighborhood function
5.4.1 Basic design of the neighborhood function
5.4.2 Generation of the transfer value
5.4.3 Monitoring of the constraints' satisfaction
5.5 Generation of the threshold sequence
5.6 Parallelization of threshold accepting
6 PARAMETERIZATION OF THRESHOLD ACCEPTING
6.1 Concept of successive parameterization in context with the present model
6.2 Effective combinations of thresholds and transfer values
6.2.1 Simple parameterization by visual analysis
6.2.2 Comprehensive analysis on the basis of detailed grid structures
6.2.3 Excursus on the impact of the transfer value generation on the parameterization
6.3 Effective combinations of restarts and steps
6.3.1 Appropriate coverage of the solution space
6.3.2 Particular aspects of parallel computing
6.4 Concluding remarks on the parameterization for different model cases
7 SUPERIORITY OF OPTIMAL ECONOMIC CAPITAL ALLOCATION – THE INFORMED CENTRAL PLANNER
7.1 Introduction to the benchmarking of allocation methods in case of an informed central planner
7.2 Benchmarking of allocation methods in case of an informed central planner
7.2.1 Allocation methods' performances before the background of an arbitrary model bank
7.2.2 Precise benchmarking on the basis of particular model settings
7.2.2.1 Implementation of a level playing field
7.2.2.2 Impact of restrictions through minimum limits
7.2.2.3 Relevance of optimal allocation in case of less privately informed traders
7.2.2.4 Influence of higher degrees of diversification in the form of higher numbers of business units
7.3 Discussion of the superiority of optimal economic capital allocation
8 UNINFORMED CENTRAL PLANNER – INFORMATION ON THE BASIS OF BAYESIAN LEARNING
8.1 Introduction to the case of an uninformed central planner
8.2 Description of the Bayesian learning algorithm
8.3 Bayesian learning central planner in case of independently acting decision makers
8.3.1 Benchmarking of allocation methods using perfect prior probabilities
8.3.2 Benchmarking under adjusted prior probabilities for the anticipation of risk underestimation
8.4 Influence of herding decision makers on optimal economic capital allocation
8.4.1 Herding and informational cascades in case of economic capital allocation
8.4.2 Modeling of herding tendencies among the decision makers
8.4.3 Benchmarking of allocation methods under herding decision makers
8.5 Conclusions on optimal allocation before the background of an uninformed central planner
9 CONCLUSIONS
9.1 Summary of results
9.2 Closing remarks on the model assumptions and suggested future research
APPENDIX
REFERENCES


FIGURES

Figure 2.1: Extract of an exemplary VAR limit system
Figure 2.2: Economic capital allocation and risk management process
Figure 2.3: Qualitative outline of the adjustment cost function
Figure 4.1: Structure of the simulation of the business units' profits and losses
Figure 4.2: Distribution of the probabilities of success p on the basis of a beta PDF with α = 1, β = 9 and on the interval [0.5, 1]
Figure 4.3: Exemplary illustration of importance sampling (IS) in the form of scaling on the basis of a histogram (black with, dashed without IS)
Figure 5.1: Extract from an exemplary solution space surface
Figure 5.2: Outline of the neighborhood function N(vlc)
Figure 5.3: Exemplary sequence of transfer values tr for exp = 2
Figure 5.4: Exemplary empirical distribution of deltas incl. threshold sequence τ
Figure 5.5: Applied structure of parallel computing
Figure 6.1: Choice of potential trinit-pτ combinations by a 2x2-grid
Figure 6.2: Visualization of the search behavior of trinit-pτ combinations on the basis of single restarts
Figure 6.3: Potential refinement of the search for appropriate trinit-pτ combinations
Figure 6.4: Investigation of 400 different trinit-pτ combinations each undergoing 60 restarts
Figure 6.5: The best out of 60 restarts under trinit = 300 and pτ = 0.75 representing the current example's most promising parameterization (achieving µbank = 157.67)
Figure 6.6: Empirical distributions of µbank for 60 restarts each using the different trinit-pτ combinations 1-5 from figure 6.2 and figure 6.5
Figure 6.7: Investigation of µbank of 400 different trinit-pτ combinations each undergoing 60 restarts using a randomized (upper) and a fix transfer value (lower)
Figure 6.8: Empirical distributions of the best solutions for µbank out of 60 restarts concerning the 400 different trinit-pτ combinations using a decreasing / randomized, randomized and fix transfer value according to figure 6.4 and figure 6.7
Figure 6.9: Empirical distributions of the solutions µbank of 400 restarts using a decreasing / randomized, randomized and fix transfer value and the respective transfer value method's optimal trinit-pτ combination according to figure 6.4 and figure 6.7
Figure 6.10: Empirical distributions of the restarts' solutions µbank according to the parameterizations from table 6.1
Figure 7.1: Limit allocations according to the TA, expected return, uniform and random method using ec = 1k, cbank = 150k, nunits = 50, p⌀ = 0.55 and m = 20k, arranged according to the business units' in-sample expected return
Figure 7.2: Histograms of the allocation methods' out-of-sample results concerning plbank using ec = 1k, cbank = 150k, nunits = 50, p⌀ = 0.55 and m = 20k
Figure 7.3: Limit allocations according to the TA, expected return, uniform and random method using ec = 662, cbank = 150k, nunits = 50, p⌀ = 0.55 and m = 20k, arranged according to the business units' in-sample return expectations
Figure 7.4: Histograms of the allocation methods' out-of-sample results concerning plbank using ec = 662, cbank = 150k, nunits = 50, p⌀ = 0.55 and m = 20k
Figure 7.5: Limit allocations according to the TA, expected return, uniform and random method using ec = 550, cbank = 150k, nunits = 50, p⌀ = 0.55, vlmin = 50 and m = 20k, arranged according to the business units' in-sample expected returns
Figure 7.6: Histograms of the allocation methods' out-of-sample results concerning plbank using ec = 550, cbank = 150k, nunits = 50, p⌀ = 0.55, vlmin = 50 and m = 20k
Figure 7.7: Adjustment of the skill levels' distribution from p⌀ = 0.55 to p⌀ = 0.51
Figure 7.8: Limit allocations according to the TA, expected return, uniform and random method using ec = 901, cbank = 150k, nunits = 50, p⌀ = 0.51, vlmin = 50 and m = 20k, arranged according to the business units' in-sample expected returns
Figure 7.9: Histograms of the allocation methods' out-of-sample results concerning plbank using ec = 901, cbank = 150k, nunits = 50, p⌀ = 0.51, vlmin = 50 and m = 20k
Figure 7.10: Limit allocations according to the TA, expected return, uniform and random method using ec = 417, cbank = 150k, nunits = 200, p⌀ = 0.51, vlmin = 12.5 and m = 20k, arranged according to the business units' in-sample expected returns
Figure 7.11: Histograms of the allocation methods' out-of-sample results concerning plbank using ec = 417, cbank = 150k, nunits = 200, p⌀ = 0.51, vlmin = 12.5 and m = 20k
Figure 8.1: Discretization of the model world's skill levels by trader types
Figure 8.2: Exemplary pej-values of nupdates = 20k successive Bayesian updates (zoom on the first 2.5k pej-values on the right) for business units actually exhibiting p1 = 0.544 (petrol), p2 = 0.521 (violet) and p3 = 0.5 (pink)
Figure 8.3: Illustration of the influence of formula (8.8) on the distribution of the trader types pk
Figure 8.4: Limit allocations using ec ≈ 400, cbank = 150k, nunits = 200, p⌀ = 0.51, vlmin = 12.5 and m = 20k, arranged according to the business units' out-of-sample (actual) expected returns
Figure 8.5: Out-of-sample results for µbank (upper) and the relative advantage of the TA method (lower) for the nupdates-values [1, 2k] using ec ≈ 400, cbank = 150k, nunits = 200, p⌀ = 0.51, vlmin = 12.5 and m = 20k
Figure 8.6: Out-of-sample results for confidence level β for the nupdates-values [1, 2k] using ec ≈ 400, cbank = 150k, nunits = 200, p⌀ = 0.51, vlmin = 12.5 and m = 20k
Figure 8.7: Illustration of the impact of formula (8.9) on the occurrences θk of the trader types pk on the basis of the types' distribution
Figure 8.8: Limit allocations using ec ≈ 400, cbank = 150k, nunits = 200, p⌀ = 0.51, pk,⌀ = 0.506, vlmin = 12.5 and m = 20k, arranged according to the business units' out-of-sample (actual) return expectations
Figure 8.9: Out-of-sample results for µbank (upper) and the relative advantage of the TA method (lower) for the nupdates-values [1, 2k] using ec ≈ 400, cbank = 150k, nunits = 200, p⌀ = 0.51, pk,⌀ = 0.506, vlmin = 12.5 and m = 20k
Figure 8.10: Out-of-sample results for confidence level β for the nupdates-values [1, 2k] using ec ≈ 400, cbank = 150k, nunits = 200, p⌀ = 0.51, pk,⌀ = 0.506, vlmin = 12.5 and m = 20k
Figure 8.11: Distribution of the market trends q on the basis of a beta PDF with α = 2, β = 2 and on the interval [0, 1]
Figure 8.12: Histogram (upper) and CDF (lower) of actual trend occurrences on the basis of the GBM-returns using m = 20k compared to the CDF of the prior probabilities ψ
Figure 8.13: Probability structure for the Bayesian learning concerning the market trend
Figure 8.14: Informational cascades on the basis of the development of qej during the trading of the model bank
Figure 8.15: Limit allocations using ec ≈ 1k, cbank ≈ 150k, nunits = 200, p⌀ = 0.51, vlmin = 12.8 and m = 20k, arranged according to the business units' out-of-sample (actual) return expectations
Figure 8.16: Histograms concerning the shares of long positions for the herding tendencies of 25 % (black) and 0 % (dotted) for the exemplary case of nupdates = 1k
Figure 8.17: Histograms of the allocation methods' out-of-sample results concerning plbank for the herding tendencies of 25 % and 0 % (dotted) for the exemplary case of nupdates = 1k
Figure 8.18: Out-of-sample results for µbank (upper) and the relative advantage of the TA method (lower) for the nupdates-values [1, 2k] and 25 % herding tendency using ec ≈ 1k, cbank ≈ 150k, nunits = 200, p⌀ = 0.51, vlmin = 12.8 and m = 20k
Figure 8.19: Out-of-sample results for confidence level β for the nupdates-values [1, 2k] and 25 % herding tendency using ec ≈ 1000, cbank ≈ 150k, nunits = 200, p⌀ = 0.51, vlmin = 12.8 and m = 20k
Figure 8.20: Superiority of the TA method concerning µbank for different herding tendencies across the interval [0, 1] compared to the expected return (blue), the uniform (red) and the random method (orange)


TABLES

Table 2.1: Categorization for the different components of economic capital
Table 2.2: Cost dimensions of ex post adjustments of total risk through a central treasury department
Table 6.1: Different potential parameterizations for nrestarts and nsteps using ncomp = 1,200k, nrounds = 10, trinit = 300 and pτ = 0.75
Table 6.2: Probability pλ of the different parameterization's single restarts
Table 6.3: Probability πλ of the respective parameterization
Table 6.4: Required number of restarts nrestarts and computational resources ncomp of the respective parameterization for πλ = 0.99
Table 7.1: In-sample results using ec = 1k, cbank = 150k, nunits = 50, p⌀ = 0.55 and m = 20k
Table 7.2: Out-of-sample results using ec = 1k, cbank = 150k, nunits = 50, p⌀ = 0.55 and m = 20k
Table 7.3: In-sample results using ec = 662, cbank = 150k, nunits = 50, p⌀ = 0.55 and m = 20k
Table 7.4: Out-of-sample results using ec = 662, cbank = 150k, nunits = 50, p⌀ = 0.55 and m = 20k
Table 7.5: In-sample results using ec = 550, cbank = 150k, nunits = 50, p⌀ = 0.55, vlmin = 50 and m = 20k
Table 7.6: Out-of-sample results using ec = 550, cbank = 150k, nunits = 50, p⌀ = 0.55, vlmin = 50 and m = 20k
Table 7.7: In-sample results using ec = 901, cbank = 150k, nunits = 50, p⌀ = 0.51, vlmin = 50 and m = 20k
Table 7.8: Out-of-sample results using ec = 901, cbank = 150k, nunits = 50, p⌀ = 0.51, vlmin = 50 and m = 20k
Table 7.9: In-sample results using ec = 417, cbank = 150k, nunits = 200, p⌀ = 0.51, vlmin = 12.5 and m = 20k
Table 7.10: Out-of-sample results using ec = 417, cbank = 150k, nunits = 200, p⌀ = 0.51, vlmin = 12.5 and m = 20k
Table 8.1: In-sample results for nupdates = 250 (upper), = 1k (middle) and = 20k (lower) using ec ≈ 400, cbank = 150k, nunits = 200, p⌀ = 0.51, vlmin = 12.5 and m = 20k
Table 8.2: Out-of-sample results for nupdates = 250 (upper), = 1k (middle) and = 20k (lower) using ec ≈ 400, cbank = 150k, nunits = 200, p⌀ = 0.51, vlmin = 12.5 and m = 20k
Table 8.3: In-sample results for nupdates = 250 (upper), = 1k (middle) and = 20k (lower) using ec ≈ 400, cbank = 150k, nunits = 200, p⌀ = 0.51, pk,⌀ = 0.506, vlmin = 12.5 and m = 20k
Table 8.4: Out-of-sample results for nupdates = 250 (upper), = 1k (middle) and = 20k (lower) using ec ≈ 400, cbank = 150k, nunits = 200, p⌀ = 0.51, pk,⌀ = 0.506, vlmin = 12.5 and m = 20k
Table 8.5: In-sample results for nupdates = 250 (upper), = 1k (middle) and = 20k (lower) using ec ≈ 1000, cbank ≈ 150k, nunits = 200, p⌀ = 0.51, vlmin = 12.8 and m = 20k
Table 8.6: Out-of-sample results for nupdates = 250 (upper), = 1k (middle) and = 20k (lower) using ec ≈ 1000, cbank ≈ 150k, nunits = 200, p⌀ = 0.51, vlmin = 12.8 and m = 20k


ALGORITHMS

Algorithm 5.1: Threshold accepting
Algorithm 5.2: Computation of random and feasible starting solution vl
Algorithm 5.3: Computation of a random starting solution inducing maximum use of the available investment capital cbank and economic capital ec
Algorithm 5.4: Binary search algorithm for adjusting the scaling factor φ
Algorithm 5.5: Implementation of the neighborhood function N(vlc)
Algorithm 5.6: Final determination of the transfer value tri
Algorithm 5.7: Monitoring of the budget constraint and the VAR limit of the bank
Algorithm 5.8: Generation of the threshold sequence τ
Algorithm 5.9: Communication between the master and the servants


1 INTRODUCTION

1.1 Problem and research question

The financial crisis revealed, among other things, one particular shortcoming of the financial system: banks do not hold enough liable equity to cover unexpected losses and to guarantee their going concern with sufficient certainty. As a consequence, the Basel Committee on Banking Supervision introduced a new accord, known as Basel III.1 The accord's main concern is the increase of banks' liable equity and the improvement of its quality. The guidelines demand an increase of the capital adequacy ratio (CAR), especially for systemically important banks. The CAR measures the ratio of liable equity to risk weighted assets (RWA). The increase of liable equity causes considerable cost pressure on the financial institutions, not to mention the quality improvements. In order to keep their profit margins, the institutions will increasingly be forced to manage their institution-wide businesses strictly according to risk-return aspects in a portfolio theoretical sense. This, however, demands a comprehensive ex ante equity management.

The present research describes such a system of corporate management on the basis of a model. The model exclusively takes into account the available equity of a bank and completely dispenses with funding issues. In order to examine the portfolio theoretical aspects of this system of corporate management extensively, the model chooses the economic instead of the regulatory perspective. As a consequence, the model focuses on the economic equivalent of liable equity in the form of economic capital.2 The economic perspective assesses risk by downside risk measures instead of the RWA methodology.3

The present corporate management system allocates the risk-bearing potential of the economic capital to the business units by value at risk (VAR) limits. This enables the units to take risks and operate business according to the extent of their respective limits. The transmission of the business strategy from the central management to the decentralized decision makers4 therefore manifests itself in the strategic setting of the limits. The VAR limits finally represent decision rights determining the range for the autonomous decision making of the business units.

Nevertheless, this kind of corporate management bears conflicting objectives.5 For the consideration of portfolio theoretical aspects, the central management requires precise information concerning the correlations between the returns of the business units' business opportunities. The use of delegation advantages, however, depends on the autonomous decision making of the business units, which are free to choose long or short positions. Unfortunately, this autonomous decision making causes unstable correlations between the units' businesses, which significantly complicates an optimal corporate management in the portfolio theoretical sense. Further difficulties arise from the portfolio theoretical consideration of the decision makers' individual prospects of success and of their interactions,6 which potentially influence their decision making.

The differentiated modeling of such a corporate management system is technically demanding. The consideration of the decision makers' individual prospects of success causes the business units' returns to follow non-elliptical distributions. This fact excludes the use of common analytical optimization to achieve the most advantageous limit allocations. The non-elliptical distributions turn the underlying optimization problem into a global problem requiring heuristic optimization for proper solving. The present model of optimal allocation of economic capital uses the threshold accepting (TA) algorithm for heuristic optimization. Compared to the relevant literature, however, the present implementation of the TA algorithm requires certain modifications.7

The central research question of the present approach is whether the optimal allocation of economic capital according to portfolio theoretical aspects represents the superior corporate management approach. The research addresses this question under the model assumption of strictly rationally behaving players. The central research question subdivides into several partial objectives. The first step consists in developing a consistent model reflecting the situation of corporate management by economic capital allocation in sufficient detail. The modeling of the economic processes also gives rise to different technical challenges.

1 See Basel Committee on Banking Supervision (2011) for the Basel III accord and Deutsche Bundesbank (2011) or Auer and Pfoestl (2011) for overviews concerning the Basel III guidelines.
2 See chapter 2.1 for details on the definition of economic capital.
3 See chapter 2.2 for an introduction of the downside risk measures VAR and expected shortfall (ES).
4 The following uses the expressions "decision maker" and "business unit" synonymously.
5 See Froot and Stein (1998), who identify these conflicting objectives in their final conclusions and emphasize the need for further research on this conflict's proper consideration by integral bank management approaches in the portfolio theoretical sense.
6 See Burghof and Sinha (2005), who model this interaction in the form of herd behavior and identify correlations between decisions as the main risk drivers. Their analyses, however, dispense with the optimization of the VAR limit system in a portfolio theoretical sense.
7 See Gilli et al. (2006), who apply and develop TA in the context of portfolio optimization under downside risk constraints. Besides the high relevance of their approach for the present implementation of TA, there are significant differences compared to the present optimization of VAR limit systems.

Technical challenges arise from the fact that the present approach addresses the optimization of VAR limits instead of common portfolio optimization under a VAR constraint.8 As a consequence, the extents of the decision variables in the form of the limits do not depend on a known budget constraint but on unknown diversification effects. This requires modifications of the implementation of TA compared to the case of common portfolio optimization under a VAR constraint. The finished model then allows addressing the central concern of the present research on the basis of different model analyses. These analyses successively adjust the setting for the optimal allocation of economic capital by imposing more restrictive and realistic conditions. A central adjustment is the replacement of an informed central management by an uninformed one. In contrast to the informed management, the uninformed one has to acquire the relevant information concerning the business units by rational learning. This model case enables analyzing whether an optimal allocation of economic capital is still relevant under the more realistic scenario of increased uncertainty arising from less precise information. A second central adjustment compared to the initial model case concerns the independent and autonomous decision making of the business units. The independent decision making is restricted by the introduction of herd behavior. Herd behavior manifests itself in the present model in decision makers imitating investment decisions observed from their colleagues as soon as following their individual information appears less promising from a rational perspective. This influences the correlations between the investment decisions and induces additional uncertainty. This model case aims at analyzing whether corporate management on the basis of VAR limit systems is fundamentally able to anticipate herd behavior of the decision makers. The present modeling and analysis of optimal corporate management by VAR limit systems generally aims at disclosing the requirements of such a corporate management approach.

1.2 Organization of the research

The research subdivides into nine chapters. After this introduction, two further preparatory chapters describe the basic issues of economic capital allocation and the implications of the literature of related fields of research. Subsequently, three chapters address the modeling and implementation of the relevant economic and technical processes.

8 See Burghof and Müller (2012) for insights on the differences between common portfolio optimization under a VAR constraint and the optimization of VAR limit systems in context with the use of TA.

Two chapters with analyses follow, and a final chapter concludes.

After the introduction, the second chapter first provides the basic knowledge concerning corporate management by economic capital allocation. The chapter addresses the composition of economic capital, the determination of the required economic capital by downside risk measurement and the allocation of economic capital on the basis of downside risk limit systems, here VAR limit systems. Furthermore, the second chapter deals with the relation between economic capital allocation and risk adjusted performance measurement (RAPM). Finally, the chapter describes the allocation of economic capital as a situation of delegation by decision rights.

The third chapter examines related fields of research. At first, the chapter distinguishes different situations of economic capital allocation known from the literature in order to prevent confusion. Thereafter, the chapter describes these different situations in depth on the basis of their relevant literature. These descriptions reveal the respective fields' implications for the current research. The related fields include risk contribution issues, axiomatic approaches to risk assessment and economic capital allocation, dynamic risk measures, economic capital allocation as a means of corporate management and portfolio optimization under a downside risk constraint.

After introducing the related fields of research, chapter 4 determines the basic model. The chapter first provides a qualitative description of the model and then introduces the underlying stochastic program. The subsequent explanations focus on the valuation of the stochastic program's objective function. The most important elements in this context are the simulation of the stock returns, of the business units' profits and losses and of the business units' heterogeneous prospects of success. Finally, the chapter explains the model's consideration of backtesting issues on the basis of out-of-sample computations and also the model's requirements for importance sampling (IS).

While chapter 4 exclusively determines the model's underlying stochastic program, chapter 5 addresses the solving of the program on the basis of TA. First of all, the chapter provides a visual proof that the potentially resulting solution spaces are non-convex, confirming that the problem is global. The subsequent parts explain the basic algorithm of TA, its determination of start solutions and its neighborhood function. The neighborhood function determines how the TA algorithm creates interim solutions. This part of the TA algorithm contains certain modifications compared to the relevant literature which ensure that the algorithm solves properly for the current case of VAR limit optimization. A considerable part of these modifications addresses the generation of the so-called transfer values.9 The rest of the chapter provides the description of the generation of the threshold sequence10 and an outline of the present TA application's use of parallel computing.

The proper implementation of TA alone still does not guarantee high quality solutions for the respective optimization problem. High quality solutions additionally require a proper parameterization of the TA algorithm, which chapter 6 describes in depth. The chapter starts by introducing the concept of successive parameterization. This concept first of all identifies pairs of parameters urgently requiring simultaneous parameterization. Subsequently, the chapter examines the parameterization of each pair of parameters in detail. The first pair consists of the initial values of the threshold sequence and the sequence of transfer values. This part also provides an excursus on the benefits of the present TA implementation's transfer value methodology, which might also be useful beyond the present approach. The second pair of parameters consists of the number of restarts and the number of optimization steps executed by the TA algorithm. The concluding remarks provide information on the current research approach's handling of the re-parameterization of the TA algorithm for variations in the model settings.

Chapter 7 provides analyses concerning the superiority of optimal economic capital allocation on the basis of the basic model. The analyses assume optimal conditions in the form of an informed central planner.11 The very first part introduces fundamental alternative schemes of economic capital allocation compared to the optimal allocation scheme. One of these schemes is strictly oriented towards the business units' expected returns, while the others pursue a uniform and a random allocation. Subsequently, the schemes enter a benchmark study comparing their performances. During the study, the conditions successively undergo certain adjustments in order to test the performances on a wider scope. An important adjustment, for example, forces the two most promising allocation schemes to use exactly the same amounts of the available resources in order to set a perfect level playing field. Further adjustments additionally impose minimum limits preventing impractically and unrealistically small limits for less successful business units, a general reduction of the business units' prospects of success and a higher degree of diversification. The very last part concludes on the relevance of optimal allocation of economic capital in case of an informed central planner.

The further analyses of chapter 8 replace the informed central planner by an uninformed central planner acquiring information by rational learning. After an introduction to the situation of an uninformed central planner, the chapter describes the modeling of the learning processes, which represent a Bayesian updating algorithm. The model extension by the Bayesian learning central planner enables analyses on the basis of less precise information. This less precise information causes a higher degree of uncertainty. The analyses consist of two benchmark studies on the different economic capital allocation methods: one uses perfect and one uses conservative prior probabilities for the Bayesian updating. This gives an impression of the robustness of the studies' results concerning the use of different prior probabilities. A further extension of the model keeps the uninformed central planner and additionally expands the behavior of the decision makers by herd behavior. As a consequence, the decision makers no longer make investment decisions exclusively according to their individual skills and information. Instead, they additionally orientate themselves by the decision making of their colleagues, which causes further uncertainty. After the introduction to herd behavior, the chapter gives a detailed description of the implementation of herding within the model. Subsequently, another benchmark study investigates the performance of the optimal allocation scheme under the consideration of different herding tendencies. Finally, the chapter concludes on whether the optimal allocation scheme sufficiently anticipates the increase of uncertainty arising from the less precise information and the additional consideration of herding.

Chapter 9 provides, besides a summary of the results, closing remarks on the model's central assumptions and design and in this context suggests potential future research.

9 The transfer values determine the extent of modification of the interim solutions during the search process.
10 The thresholds determine the temporary acceptance of worsenings of interim solutions during the search process.
11 The following uses the expressions "central planner" and "central management" synonymously.
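To make the type of optimization problem described above more tangible, the following Python sketch searches over a vector of VAR limits with a minimal threshold accepting loop: limits are transferred between units, and candidate allocations are accepted as long as the bank-wide VAR stays within the economic capital and any deterioration of the objective does not exceed the current threshold. Everything in the sketch is an assumption for illustration only (normally distributed unit returns, a toy objective, arbitrary parameter values); the actual objective function, constraints, neighborhood function and parameterization are developed in chapters 4 to 6.

```python
import numpy as np

rng = np.random.default_rng(0)

n_units, ec, m = 5, 100.0, 5_000           # hypothetical: units, economic capital, scenarios
alpha = 0.01                               # 99 % confidence level
corr = 0.3                                 # assumed uniform correlation between unit returns
cov = np.full((n_units, n_units), corr) + np.diag([1 - corr] * n_units)
mu = np.linspace(0.01, 0.05, n_units)      # assumed expected return per unit of limit
scen = rng.multivariate_normal(mu, cov * 0.04, size=m)   # simulated unit returns

def bank_var(vl):
    """VAR of the bank-wide P&L implied by the limit vector vl (exposure proportional to limit)."""
    pl = scen @ vl
    return -np.quantile(pl, alpha)         # loss quantile, reported as a positive number

def expected_profit(vl):
    """Toy objective: expected bank profit for the limit vector vl."""
    return float(mu @ vl)

def neighbour(vl):
    """Transfer a small random share of limit from one unit to another."""
    new = vl.copy()
    i, j = rng.choice(n_units, size=2, replace=False)
    tr = rng.uniform(0.0, 0.1) * new[i]
    new[i] -= tr
    new[j] += tr
    return new

# start solution: uniform limits scaled so that the bank-wide VAR equals the economic capital
vl = np.full(n_units, 1.0)
vl *= ec / bank_var(vl)

for tau in np.linspace(0.5, 0.0, 20):      # decreasing tolerance for interim worsenings
    for _ in range(200):
        cand = neighbour(vl)
        if bank_var(cand) > ec:            # reject candidates violating the VAR budget
            continue
        if expected_profit(cand) - expected_profit(vl) > -tau:
            vl = cand                      # accept improvements and small worsenings

print("optimized limits:", np.round(vl, 2), " expected profit:", round(expected_profit(vl), 2))
```

The sketch deliberately ignores the features that make the book's problem hard, in particular the decision makers' autonomous long/short choices and their heterogeneous prospects of success, which render the units' return distributions non-elliptical.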


2 CORPORATE MANAGEMENT BY ECONOMIC CAPITAL ALLOCATION

2.1 Properties of economic capital

Liable bank equity and its scarceness have become key issues of modern bank management.12 The introduction of regulatory capital at the end of the eighties stopped the general tendency to reduce equity ratios.13 Today, banks are forced by regulatory standards to hold a minimum of equity depending on their total exposure to risk. Some institutions even position themselves by featuring significantly higher equity ratios. However, all institutions have to face the fact that equity is more expensive and scarcer than other funding. Today, it is mainly the available equity that determines the scope of an institution's business volume.14 As with every scarce resource, also in the case of liable equity the question arises of how to allocate the resource properly, and therefore which allocation among the business fields and units maximizes shareholder value.

Not the funding properties but in particular the equity's properties of absorbing losses and preventing possible bankruptcy are relevant in the present context.15 However, assigning particular equity tranches to particular businesses seems rather arbitrary, since an institution's joint liability arises from the company as a whole. The joint liability and the amount of equity of a bank reflect the risk-bearing potential of the bank. In this context, an interesting question concerns the contribution of the single businesses of a bank to the total exposure to risk. The business units consume the bank's risk-bearing potential and liable equity respectively according to their risk contributions. The resulting allocation problem differs significantly from common budgeting problems. Balance sheet equity is neither the only means of absorbing losses nor necessarily the most efficient one. As a consequence, not the balance sheet equity represents the reference value for the equity allocation, but rather the so-called economic capital or, under regulatory modification, the liable equity. The regulatory modifications take into account that third parties can hardly verify several equity positions, which therefore do not satisfy regulatory demands.16 However, the present research approach completely disregards the regulatory perspective in favour of the economic perspective. As a consequence, the following table 2.1 restricts itself to a categorization of the different potential components of economic capital.17

12 Chapter 2 is based on Burghof and Müller (2007).
13 For issues on regulatory capital before and from 1988 see Croughy, Galai and Mark (1999), p. 1 et seqq, Best (1998), p. 180 et seqq.
14 A comparison of fundamental variants of regulatory capital requirements provides Dimson and Marsh (1995).
15 See Dowd (1998), p. 209 and p. 217, chapter 11.
16 See Burghof and Rudolph (1996), p. 124 et seqq.
17 See Lister (1997), p. 190 et seqq and Schierenbeck and Lister (2003), p. 15.

Table 2.1: Categorization for the different components of economic capital

Categories     Sources of economic capital
primary        excess profit
secondary      undisclosed reserves
tertiary       minimum profit, special reserves for general banking risks
quaternary     disclosed reserves, subscribed capital
quintary       supplementary and subordinated capital (undisclosed reserves excluded)

The consumption of primary and secondary economic capital through loss absorption might be possible without any awareness of external financiers. In contrast, the liquidation of balance sheet positions or even subordinated capital certainly causes tremendous distortions between the bank and its investors. Therefore, it seems natural that the bank management would counter losses with economic capital according to the outlined categorization and prioritization.
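The prioritization of table 2.1 can be read as a simple waterfall: losses are covered by the highest-ranked capital category first and only then consume lower-ranked categories. The following short Python sketch illustrates this reading; the category names follow table 2.1, while the capital amounts and the example loss are purely hypothetical.

```python
# Hypothetical economic capital per category, ordered by the prioritization of table 2.1
capital = [
    ("primary (excess profit)", 40.0),
    ("secondary (undisclosed reserves)", 25.0),
    ("tertiary (minimum profit, special reserves)", 20.0),
    ("quaternary (disclosed reserves, subscribed capital)", 60.0),
    ("quintary (supplementary and subordinated capital)", 30.0),
]

def absorb(loss: float) -> None:
    """Consume the capital categories in order until the loss is covered."""
    remaining = loss
    for name, amount in capital:
        used = min(amount, remaining)
        remaining -= used
        print(f"{name}: used {used:.1f} of {amount:.1f}")
        if remaining == 0:
            break
    if remaining > 0:
        print(f"loss of {remaining:.1f} not covered - bankruptcy risk")

absorb(70.0)   # consumes the primary and secondary categories fully and part of the tertiary one
```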

2.2 Required economic capital by downside risk measurement

The term "downside" indicates the exclusive consideration of losses. In contrast, a risk measure that considers both sides of a profit and loss distribution is the variance or standard deviation, respectively. The following introduces the two most prominent downside risk measures, value at risk (VAR) and expected shortfall (ES).18, 19 The VAR represents a loss ℓ in the form of a currency value which is not exceeded with a certain probability 1−α during a certain holding period H.20 The VAR therefore represents the α-quantile of a profit and loss distribution F:

(2.1)  VARα = F⁻¹(α),
(2.2)  where α = p(ℓ > VARα).21

18 Synonymous expressions for expected shortfall (ES) are for example conditional value at risk (CVAR), average value at risk (AVAR), and expected tail loss (ETL).
19 The following introduction of VAR and ES refers to Jorion (2007), providing detailed information on VAR, ES and downside risk measurement in general.
20 See e.g. Jorion (2007), pp. 105-108 for a description and definition of VAR.
21 See e.g. Gilli et al. (2006), p. 4 describing VAR on the basis of these terms.
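As an illustration of definition (2.1), the following Python sketch estimates a one-day 99 % VAR from a profit and loss sample, once nonparametrically as an empirical quantile (the historical or Monte Carlo case discussed below) and once parametrically under a normality assumption as in the delta-normal approach. The P&L sample is simulated here purely for illustration; in practice it would come from historical or simulated profit and loss data.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
alpha = 0.01                                             # 1 - alpha = 99 % confidence level
pl = rng.normal(loc=0.0, scale=1_000.0, size=100_000)    # illustrative daily P&L sample (EUR)

# Nonparametric VAR: the alpha-quantile of the P&L distribution, reported as a positive loss
var_empirical = -np.quantile(pl, alpha)

# Parametric (delta-normal) VAR: based only on the sample mean and standard deviation
mu, sigma = pl.mean(), pl.std(ddof=1)
var_normal = -(mu + norm.ppf(alpha) * sigma)

print(f"empirical 99% VAR:    {var_empirical:,.0f}")
print(f"delta-normal 99% VAR: {var_normal:,.0f}")
```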

Finally, the VAR depends on the chosen confidence level 1−α, the holding period H and the point in time t at which the risk assessment takes place. Typical confidence levels are 0.95 or 0.99, while typical holding periods are 1 day, 10 days or 1 year. According to these parameters, the VAR therefore at the same time determines the required amount of economic capital, e.g. the amount that compensates the losses of a particular portfolio in 99 % of all cases during one business year.

The methods for VAR calculation subdivide into parametric and nonparametric methods.22 The most common method from the first category is the variance-covariance approach, also called the delta-normal approach.23 The approach assumes normally distributed profits and losses by default. This assumption, in combination with the measured variances and covariances of the financial instruments from the past, builds the basis of the variance-covariance approach. An important method from the second category is the historical method, which does not require any assumption concerning the distribution of profits and losses at all.24 The distribution consists of actual profits and losses experienced in the past. A typical problem in context with the historical method arises from insufficient sample sizes. Here the combination with the method of bootstrapping provides an effective remedy. Another very common method from the category of nonparametric methods is the Monte Carlo approach.25 The Monte Carlo approach can process any kind of profit and loss distribution due to its massive application of simulation. The profits and losses here stem from Monte Carlo simulations, which can generally produce any kind of distribution. If the Monte Carlo method processes realistic distributions, the approach calculates VAR very precisely. However, compared to the variance-covariance approach, the method requires considerable computational resources.

Regardless of the exact computational method, the VAR suffers from the crucial shortcoming of not being coherent.26 The coherency framework was developed in order to formulate universal requirements for consistent risk measures. For example, the VAR does not generally satisfy subadditivity, which represents one out of four coherency axioms. As a result, under the use of VAR diversification does not necessarily reduce risk if profits and losses follow a non-elliptical distribution. Though not coherent itself, VAR provides the fundament for the calculation of the coherent downside risk measure expected shortfall (ES):27

(2.3)  ESα = E(ℓ | ℓ > VARα).28

Consequently, ES represents the expected loss for losses exceeding VARα. The ES therefore provides more information than VAR. However, the quality of ES also depends on the quality of the input parameters used by the respective calculation method. Despite its shortcomings, VAR still represents the most important risk measure in banking practice. The supremacy of VAR results from its central role for issues in banking supervision and regulatory frameworks. As a consequence of this supremacy, the present research approach also mainly uses VAR. Nevertheless, the later analyses always additionally calculate ES. This enables a further check on the plausibility of the later computations' outcomes by comparing both measures with each other. Checks revealing e.g. VAR > ES indicate computational errors.

22 See e.g. Jorion (2007), pp. 108-113 for remarks on parametric and nonparametric VAR computation.
23 See e.g. Jorion (2007), pp. 260-262 explaining the delta-normal approach.
24 See e.g. Jorion (2007), pp. 265-268 explaining the historical method.
25 See e.g. Jorion (2007), pp. 265-268 describing the Monte Carlo approach.
26 See Artzner et al. (1999) for VAR and coherency issues. See also e.g. Jorion (2007), pp. 113-115 for remarks on the shortcomings of VAR and the benefits of expected shortfall (ES).
27 See Artzner (1997) for issues on ES and the superiority of ES compared to VAR.
28 See e.g. Gilli et al. (2006), p. 5 describing ES on the basis of this term.
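Continuing the earlier sketch, the expected shortfall of definition (2.3) can be estimated as the mean loss beyond the VAR, and the relation ES ≥ VAR provides exactly the kind of plausibility check mentioned above. The fat-tailed P&L sample is again purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
alpha = 0.01
pl = rng.standard_t(df=4, size=100_000) * 800.0   # illustrative fat-tailed daily P&L (EUR)

var = -np.quantile(pl, alpha)                 # VAR: loss not exceeded in 99 % of the cases
tail_losses = -pl[-pl > var]                  # losses strictly beyond the VAR
es = tail_losses.mean()                       # ES: expected loss given that the VAR is exceeded

print(f"99% VAR: {var:,.0f}   99% ES: {es:,.0f}")
assert es >= var, "implausible result: ES must not be smaller than VAR"
```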

2.3 Corporate management by bank-wide VAR limit systems The determination of economic capital requirements by VAR finally enables the corporate management by VAR limit systems. Thereto, a system of VAR limits assigns the risk-bearing potential of a bank to its business units. This process finally represents the bank-wide allocation of economic capital. The limit system then allows controlling the business units by comparing their limits with their

26

27 28

See Artzner et al. (1999) for VAR and coherency issues. See also e.g. Jorion (2007), pp. 113-115 for remarks on the shortcomings of VAR and the benefits of expected shortfall (ES). See Artzner (1997) for issues on ES and the superiority of ES compared to VAR. See e.g. Gilli et al. (2006), p. 5 describing ES on the basis of this term.

29 actual capital requirements on the basis of regular VAR calculation. In practice, however, the determination of the institution-wide risk by VAR involves significant difficulties. In some risk categories, for example, the determination of capital requirements follows questionable heuristics.29 The present research approach, however, does not focus on these difficulties but assumes a frictionless corporate management on the basis of VAR limit systems instead. The VAR limits can refer to e.g. the whole bank, business units, portfolios, positions or single decision makers. They can also refer to risk categories as e.g. credit, interest rate or foreign exchange risk. No matter which reference is chosen, in order to allow for an effective corporate management, limits always have to be addressed to decision makers. Therefore, limit addressees can be e.g. traders, leaders of business units or leaders of whole business lines. Further appropriate limit addressees represent key risk managers responsible for particular risk categories and the respective bank-wide risk aggregation. The qualitative standards for market risk management from the Basel Committee of Banking Supervision suggest two dimensions the structures of VAR limit systems could orientate by: 30 On the one side, the limit structure should reflect the hierarchy of the institution’s business units. Such a structure also provides subsidiary decision makers with shares of economic capital ensuring the bankwide profit maximization through the effective use of the scarce resource. On the other side, the structure should reflect the bank-wide risk categories enabling the identification and control of undesired risk concentrations. The reduction of such risk concentrations should then be undertaken by a central risk management not being subject to any profit maximization objectives. In context with the present research approach, however, this second dimension remains disregarded for reasons of simplification. The first dimension finds consideration on the basis of a model bank featuring two hierarchy levels in the form of the central management and the business units. The implementation of a bank-wide corporate management system on the basis of VAR at first requires choosing the confidence level and the holding period for VAR computations.31 In this context, the banking supervision provides a further parameter in the form of a global factor for assessing capital requirements which lies between 3 und 4 depending on the used risk model’s sophistication.32 However, beyond the regulatory implementations the economic implementation of the VAR model should avoid such arbitrary parameterization. A relaxed pa-



See e.g. methods for assessing operational risk from the Basel Committee on Banking Supervision (2003). See Basel Committee on Banking Supervision (1996a), p. 39 et seqq. See Böve, Hubensack and Pfingsten (2006), p. 17 et seqq., for an overview concerning the assessment of total risk in banking. See Basel Committee on Banking Supervision (1996b), p. 8 and Jorion (2001), p. 137.

A relaxed parameterization can considerably increase the risk-bearing potential of an institution. The parameterization ultimately depends on the risk appetite the bank's owners want to implement with the help of the central management and on the institution's risk policy.33 Furthermore, the risk appetite manifests itself in the methodology applied for considering correlations between the risks and risk drivers, respectively. Since the returns of the different limit addressees' investments do not correlate completely with each other, diversification effects arise among the economic capital limits. Simply adding up the subsidiary units' capital requirements implicitly assumes correlation coefficients of one and is therefore excessively cautious and ultimately wrong.34 Hence, the sum of the subsidiary limits should exceed the institution's totally available economic capital. This applies to every hierarchy level of a limit system, as outlined below.

[Figure 2.1 depicts an extract of a hierarchical VAR limit system: a business branch with a VAR limit of € 20 m comprises Divisions A, B and C with limits of € 10 m, € 12 m and € 8 m, which are in turn split into team limits of € 8 m, € 7 m, € 6 m, € 5 m and € 3 m. The divisional limits add up to € 30 m and thereby exceed the branch limit of € 20 m due to diversification.]

Figure 2.1: Extract of an exemplary VAR limit system

However, theory and practice do not agree on how to implement such limit structures in detail. An open question concerns the methodology of allocating the limits among the different business units. The present research approach therefore investigates the optimal allocation of economic capital on the basis of two hierarchy levels. Furthermore, the relevant literature and practice do not yet provide established solutions for determining limits across many hierarchy levels. Superficially, the correlations between the financial instruments' returns seem to be the key factors in this context. For VAR limits in the sense of decision rights, however, the correlations between the limit addressees' business decisions drive risk and diversification.
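The diversification argument can be made concrete with a small numerical sketch; the volatilities, the correlation and the confidence level are assumptions chosen for illustration and are not taken from the book's model. For normally distributed P&L, the combined VAR of two business units stays below the sum of their stand-alone VARs whenever their correlation is below one, so the sub-limits may add up to more than the bank-wide limit.

```python
# Illustrative sketch: diversified VAR of two business units vs. the sum of stand-alone VARs.
import numpy as np
from scipy.stats import norm

z = norm.ppf(0.99)                 # 99% quantile of the standard normal
sigma = np.array([4.0, 6.0])       # stand-alone P&L volatilities (EUR m, assumption)
rho = 0.3                          # correlation between the two units (assumption)

var_standalone = z * sigma                                   # stand-alone VARs
cov = np.array([[sigma[0]**2, rho * sigma[0] * sigma[1]],
                [rho * sigma[0] * sigma[1], sigma[1]**2]])
var_total = z * np.sqrt(np.ones(2) @ cov @ np.ones(2))       # diversified bank-wide VAR

print("sum of stand-alone VAR limits:", var_standalone.sum())
print("bank-wide (diversified) VAR:  ", var_total)
# The sub-limits can add up to more than the bank-wide limit without
# exceeding the bank's total risk-bearing potential.
```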


See Best (1998), p. 142 et seqq. See Burghof (1998), p. 163 et seqq.

31

2.4 Economic capital allocation on the basis of risk adjusted performance measurement

2.4.1 Introduction to risk adjusted performance measures

Every optimization needs a target value. In the context of the optimal allocation of economic capital such a value is obviously given by the relation of return to economic capital. The target value thereby represents a so-called risk adjusted performance measure (RAPM).35 The risk adjustment arises either from explicitly considering the equity costs caused by the risk taking (RAROC, RARORAC) and/or from using the amount of economic capital itself as the reference value for the performance measurement (RORAC, RARORAC).36 All three measures (RAROC, RORAC and RARORAC) represent at least potential target values for the allocation of economic capital. Only RAROC uses the invested capital in the denominator and therefore fundamentally differs from the alternative RAPMs:

(2.4)   $RAROC = \frac{P - r \cdot VAR}{C}$

The RAROC consists of the net profit P, the cost factor or hurdle rate r and the economic capital VAR. The notation here implicitly assumes economic capital limits in the form of VAR limits; alternatively, other downside risk measures expressing risk in monetary terms can be used. The denominator represents the invested capital C. The use of C in the denominator qualifies the RAROC rather for assessing the efficient use of the invested capital than of the economic capital. A much more appropriate target value for the optimal economic capital allocation of the present research approach is the maximization of the RORAC in the form of

(2.5)   $RORAC = \frac{P}{VAR}$

The RORAC provides the most clear-cut relation between the profit and the applied economic capital. The present approach of optimal economic capital allocation therefore exclusively applies the RORAC.



See e.g. Wilson (1992), Zaik et al. (1996), Croughy, Turnbull and Wakeman (1999) and Jorion (2001), p. 387 et seqq. The abbreviations stand for risk-adjusted return on capital (RAROC), return on risk-adjusted capital (RORAC) and risk-adjusted return on risk-adjusted capital (RARORAC). The terminology of the relevant literature can be confusing. The present work follows the terminology of Croughy, Turnbull and Wakeman (1999), p. 6. See also Matten (1996), (2000), p. 147 and Erasmus and Morrison.

The RORAC concept finds an important modification in the RARORAC, where the costs of the applied economic capital again reduce the net profit, similar to the case of the RAROC. In this case, the hurdle rate r becomes the benchmark for the success of a business unit according to

(2.6)   $RARORAC = \frac{P - r \cdot VAR}{VAR} = RORAC - r$
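To make the three measures concrete, a small numerical example evaluates formulas (2.4) to (2.6); P, r, VAR and C are purely hypothetical figures and not values from the present model.

```python
# Worked example of formulas (2.4)-(2.6) with assumed inputs.
P   = 12.0    # expected net profit (EUR m, assumption)
r   = 0.10    # hurdle rate / cost of economic capital (assumption)
VAR = 80.0    # assigned economic capital in the form of a VAR limit (EUR m, assumption)
C   = 400.0   # invested capital (EUR m, assumption)

RAROC   = (P - r * VAR) / C       # (2.4)
RORAC   = P / VAR                 # (2.5)
RARORAC = (P - r * VAR) / VAR     # (2.6), equals RORAC - r

print(f"RAROC = {RAROC:.3f}, RORAC = {RORAC:.3f}, RARORAC = {RARORAC:.3f}")
assert abs(RARORAC - (RORAC - r)) < 1e-12   # consistency of (2.6)
```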

However, the application of hurdle rates and, even more so, the determination of such hurdle rates are controversial.

2.4.2 Controversial benchmarking on the basis of hurdle rates

Compared to the RORAC, the RARORAC additionally requires the proper determination of hurdle rates. On the one hand, a financial institution's joint liability across all its business units demands an equal cost factor for economic capital. Provided a proper risk mapping, the risk of a business unit is already sufficiently considered by the resulting capital requirement and does not need to be additionally emphasized by a customized hurdle rate. On the other hand, parts of the relevant literature disagree with this argumentation.37 These approaches argue that different business areas exhibit different return expectations and therefore require different hurdle rates. However, there is another reason against the use of the RARORAC in the context of allocating VAR limits. It arises from the fact that hierarchical limit structures provide the subsidiary organizational units altogether with more economic capital than is actually available, due to diversification effects.38 As a result, the subsidiary organizational units suffer from systematically excessive charges if they are rated against hurdle rates in the form of the actual cost factors. Under such excessive benchmarking, conservative business strategies in particular will inevitably underperform. This increases the pressure on the responsible management, resulting in high-risk strategies in order to achieve the targets of the performance measurement. In contrast, under the assumption of an equal hurdle rate across the whole bank, an excessive benchmarking of subsidiary units can easily be prevented by simply downscaling the hurdle rate ri of business unit i. In the case of a bank consisting of two hierarchy levels and n business units, the hurdle rate ri follows from

(2.7)   $r_i = r_{bank} \cdot \frac{VAR_{bank}}{\sum_{i=1}^{n} VAR_i}$
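A short numerical sketch of the downscaling in (2.7), using the branch and divisional limits from Figure 2.1 as assumed inputs:

```python
# Sketch of (2.7): the bank-wide hurdle rate is scaled down by the ratio of the
# bank-wide VAR to the sum of the business units' VAR limits, so that
# diversification does not lead to excessive benchmarks for the units.
r_bank   = 0.12                    # bank-wide hurdle rate (assumption)
VAR_bank = 20.0                    # bank-wide VAR limit (EUR m), cf. Figure 2.1
VAR_i    = [10.0, 12.0, 8.0]       # divisional VAR limits (EUR m), cf. Figure 2.1

r_i = r_bank * VAR_bank / sum(VAR_i)   # common scaled hurdle rate for the units
print(f"scaled hurdle rate r_i = {r_i:.4f}")   # 0.12 * 20 / 30 = 0.08
```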


See Wilson (2003), p. 79 et seqq. Figure 2.1 gives an exemplary illustration of this fact.

Finally, if the limit allocation is guided by the RARORAC, the hurdle rate also represents an input parameter. The limit allocation and the determination of the hurdle rate thereby become interdependent problems. As a result, the question arises whether setting a benchmark generates any surplus value for the allocation of economic capital limits, since this allocation does not necessarily aim at identifying underperforming business units that should be closed down. The considerable difficulties associated with the use of hurdle rates suggest using the RORAC as the relevant target value for the allocation of economic capital limits. However, it has to be emphasized that RORACs from different hierarchy levels must not be compared with each other. For that purpose, an ex-post consideration on the basis of a differentiated RARORAC system with scaled hurdle rates appears appropriate.

2.4.3 Implications of limit addressees in the form of decision makers

In the context of economic capital allocation as a means of corporate management, the limit addressees naturally represent decision makers instead of mere financial instruments. This has at least two implications for the performance measurement on the basis of the RAPM framework. First, the expected returns to be considered for performance measurement are no longer those of financial instruments but those of decision makers and their decisions, respectively. As a consequence, the correlations between the financial instruments' returns are no longer relevant and are replaced by the correlations between the decisions' returns. Second, for an efficient use of the scarce economic capital, the decision makers' performance has to be measured with respect to their assigned capital instead of their actually used capital. The allocation of economic capital limits naturally takes place in advance of the actual position taking by the limit addressees. Therefore, the net profit P represents the expected profit of the business units' acting and certainly not only the expected profit of a constant portfolio of financial instruments. As a result, profit expectations no longer depend exclusively on the financial instruments' returns. The profit expectations of business units could nevertheless be estimated on the basis of historical or otherwise generated data. In the case of historical data, the data has to be weighted according to the business unit's currently allowed business volume. So far, however, there are no specific methodologies for realizing such processes. A corresponding methodology ought to consider fixed cost effects and economies of scale and address market restrictions on the earnings side. For a trading department, for example, the important question is up to which business volume a trader operates as a price taker and when potential profits start declining through the reaction of market prices. Given the complexity of these issues and their dependence on particular market situations, it appears reasonable to initially use assumptions as simple as possible.

Therefore, the present research approach assumes the decision makers to be pure price takers. Furthermore, the ex-ante allocation of economic capital among the decision makers implies that the RORAC's denominator refers to the assigned economic capital instead of the actually used economic capital. The difference between assigned and used economic capital vanishes under the assumption of full limit usage by default. Many models from the field of economic capital allocation use such an assumption, including the model of the present research approach.39 In reality, however, parts of the limits often remain unused. This can be due to cautious decision makers or a lack of promising investment opportunities. In the trading branch in particular, however, the limit addressees should be obliged to form an opinion concerning the market trend and to use their limits fully accordingly. As a consequence, assuming full limit usage by default appears appropriate for modeling VAR limit systems. Note that the assumption of full limit usage refers exclusively to the traders' individual limits and not to the bank's total limit.

2.5 Economic capital allocation as a situation of delegation by decision rights

2.5.1 Implications for the risk management process

The VAR literature cannot deny its origin in the neoclassical world of capital market theory. The literature mainly considers portfolios of securities held by a certain business unit.40 The total risk of these portfolios results from the volume of their single positions and the respective return distributions. Through the use of rather restrictive assumptions concerning the return distributions, the expected returns, variances and correlations of the securities become the crucial input parameters for assessing the portfolio VAR. In the case of well-diversified portfolios, the correlations in particular play a key role. In the context of market risk, for example, these input factors are based on historical market data. However, as already mentioned in the previous chapter, the situation in the case of VAR limit setting is different: The assignment of VAR limits in the context of corporate management naturally involves a situation of delegation by decision rights.41 In this case, the VAR limit provides a decision maker with the right to autonomously take risky positions up to a certain extent.



See e.g. Dresel, Härtl and Johanning (2002), Burghof and Sinha (2005) and Burghof and Müller (2009). See e.g. Basak and Shapiro (2001), Campbell, Huisman and Koedijk (2001), Alexander and Baptista (2002), Wang et al. (2003) and Yiu (2004). See Laux and Liermann (2005) for a general outline concerning delegation issues.

Hence, this refers to the situation ex ante to the composition of the portfolio. If the delegating management already knew the resulting risky portfolio at the time of the limit setting, the delegation by decision rights and therefore the VAR limit system would be needless. The delegation by decision rights aims at using the specific skills and individual information of an employee or a group of employees. In the context of trading decisions, for example, the focus certainly lies on the skills to generate and use information advantages over the market. This cannot be performed by the central management for reasons of capacity and insufficient execution speed. The central management therefore has to leave it to the traders to build trading positions and can only intervene afterwards through its own trades or by instructing the treasury department. Hence, as outlined in Figure 2.2, the limit allocation represents the first step in a dynamic process of risk management.

[Figure 2.2 outlines the risk management process in four steps: economic capital allocation (central; limit system, delegation by decision rights), risk taking (decentralized; decentralized genesis of the initial portfolio), risk assessment (central; VAR calculation of the initial portfolio) and risk adjustment (central; hedging or additional risk taking). The process utilizes the information advantages of decentralized decision makers throughout the institution, aims at maximizing the chosen RAPM (e.g. RORAC) and is accompanied by return generation and backtesting.]

Figure 2.2: Economic capital allocation and risk management process

The limit allocation and the subsequent steps in the form of decentralized risk taking, risk assessment and the adjustment of the total risk by a central treasury department result in a feedback. This feedback at first concerns the quality of the applied risk models. Distinct deviations between a model's outcomes and the realizations indicate weaknesses of the model. The internal models approach to capital adequacy of the banking supervision uses this feedback for incentive purposes.

As a consequence, the capital requirements and capital costs, respectively, increase in the case of confidence level violations.42 The present model of optimal economic capital allocation, however, reduces the process of economic capital allocation and risk management. On the one hand, the model's static setting prevents the consideration of risk adjustment issues. On the other hand, the present model of optimal allocation represents a strict ex ante approach. This approach aims at completely anticipating the potential trading scenarios resulting from the decision makers' use of their decision rights. Thereby, adjustment and hedging costs can be minimized. In reality, however, the frequencies of allocations and reallocations of the economic capital are rather low. This naturally restricts the possibilities of comprehensively anticipating the future in order to minimize adjustment and hedging costs.

2.5.2 Costs of delegation by decision rights

The subsequent adjustments of the portfolio risk could remain completely disregarded if banks provided amounts of available economic capital that cover even the most extreme losses. However, this would eliminate any diversification effect as illustrated by Figure 2.1. Instead, the banks would then hold an excessive amount of equity. In contrast, a less conservative limit setting that considers potential diversification effects arising on the level of the business units makes ex post adjustments unavoidable. Accidentally occurring and existence-threatening risk concentrations require immediate adjustments. The extent of the decision makers' limits thereby becomes a function of the costs resulting from these adjustments. The exact quantification of these adjustment costs appears unfeasible and therefore less promising. However, Table 2.2 identifies potential dimensions of these costs.


See Basel Committee on Banking Supervision (1996b), Croughy, Galai and Mark (1999), pp. 15-16 and Jorion (2001), p. 129 et seqq.

Table 2.2: Cost dimensions of ex post adjustments of total risk through a central treasury department

1. Transaction costs in the narrow sense
2. Costs of acting without information advantages
3. Costs of time delay
4. Costs of omitting adjustments in case of prohibitively high adjustment costs
5. Costs of underutilization of allocated economic capital

1. Transaction costs in the narrow sense: These costs directly depend on the limits' extent and the frequency with which the limits undergo adjustments. They correspond to the common commissions and transaction costs known from banks' proprietary trading. In the case of central adjustments, however, the use of, e.g., index derivatives could save costs compared to trading the single stocks. As long as the adjustments take place on a certain meta-level, the resulting transaction costs should stay manageable.
2. Costs of acting without information advantages: Whether traders can obtain information advantages over the market may be questionable in the individual case. However, the global objectives of a treasury department largely preclude the integration of very specific information into the department's trading decisions. The costs arising from such uninformed decision making by the treasury depend on the degree of information efficiency of the underlying market.43 Only in the case of strong information efficiency can the uninformed decision making be expected to be cost neutral, while each reduction in information efficiency increases its costs. As a consequence, in order to prevent extensive adjustment costs, the limit setting should be rather generous for markets featuring strong information efficiency and rather tight for markets featuring weak information efficiency. The costs of uninformed decision making can also be reduced by using globally effective financial instruments in the form of index derivatives, provided they offer sufficient hedging effects. Globally effective instruments minimize interventions concerning particular trading decisions of the decision makers. Direct interventions, in contrast, could induce the traders to try to anticipate the interventions, which potentially causes further cost increases.


See Fama (1970).

3. Costs of time delay: The costs of uninformed trading by the treasury are sometimes argued to be generally low because of the time delay with which the treasury commonly takes action. By the time the hedging effect occurs, the traders' information advantages have already dissolved and are reflected in the respective prices. This argumentation, however, merely hides another important cost dimension: as a consequence of the time delay, substantial losses can accrue. The limits' extent should therefore also depend on the respective market's probability of short-term price movements.
4. Costs of omitting adjustments in case of prohibitively high adjustment costs: In the trading branch in particular, positions can be closed or hedged rather easily. In less liquid markets, however, the neutralization of risky positions can be difficult. In the worst case, a position has to be kept since adjustments would cause prohibitively high costs. The respective economic capital then inevitably remains unavailable for more efficient investments for a longer period. As a consequence, the limits concerning business activities in illiquid markets should be rather tight.
5. Costs of underutilization of allocated economic capital: An overly defensive limit setting leads to an unsatisfying utilization of the potential information advantages. This cost dimension provides the trade-off against all the aspects mentioned so far which suggest a rather tight limit setting.
Figure 2.3 illustrates the argumentation on the cost dimensions by outlining the adjustment cost function of inappropriate limit setting.

[Figure 2.3 plots the adjustment costs C against the risk of the initial portfolio (VAR); the target risk (VAR) is marked on the horizontal axis.]

Figure 2.3: Qualitative outline of the adjustment cost function

The corresponding cost minimization problem can be described by a simplified formula according to

(2.8)   $E(C) = \int_{\underline{VAR}}^{\overline{VAR}} C\!\left(VAR_{target} - VAR\right) dF(VAR) = \min!$

The term F(VAR) represents the distribution of the different portfolios' VARs resulting from a particular limit allocation. As a consequence, the allocation of VAR limits requires, besides the consideration of the adjustment costs, a concept concerning the behavior of the decision makers in order to derive F(VAR). This concept contrasts heavily with the neoclassical literature on quantitative risk management. The neoclassical literature focusses on market data and completely disregards the influences of decision makers. This perspective implicitly assumes randomly composed portfolios, under which the delegation by decision rights would be needless. Obviously, however, the central managements of banks adhere to the idea of corporate management by limit setting and to the idea of delegating business decisions to subsidiary decision makers and portfolio managers, respectively. These facts suggest positive effects of portfolio management by decision makers rather than by random number generators. Economic capital allocation as a means of corporate management therefore inevitably requires the detailed consideration of the decision makers' influences. The present research approach does not focus on the different cost dimensions of delegation by decision rights. Instead, the present model replaces the cost-minimizing perspective by a strictly return-maximizing one. This allows the comprehensive consideration of portfolio theoretical aspects. Limit allocations which induce maximum expected returns are assumed to minimize all of the cost dimensions by default. Besides the strictly expected-return-maximizing perspective, the model aims at the comprehensive consideration of the decision makers' influences. Therefore, the model integrates the decision makers' prospects of success into the determination of their decision rights in order to optimally anticipate the future. Furthermore, the model integrates the decision makers' tendency to follow market trends and the trading decisions of colleagues instead of relying on their individual skills and information. As a consequence, the decision rights have to anticipate a higher level of uncertainty arising from this more differentiated behavior of the decision makers, and they have to anticipate the occurrence of much less diversified portfolios.
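A minimal Monte Carlo sketch indicates how (2.8) could be evaluated under strong simplifying assumptions; the number of traders, their volatilities and correlation, the target VAR and the piecewise-linear cost function are all illustrative and not part of the book's model. Each trader fully uses an equal sub-limit and independently goes long or short, the resulting portfolio VAR provides one draw from F(VAR), and the expected adjustment cost is estimated by averaging over the simulated scenarios.

```python
# Sketch: simulate F(VAR) from traders' long/short decisions and estimate E(C) of (2.8).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(seed=7)
n, sims = 10, 50_000
z = norm.ppf(0.99)
sigma = np.full(n, 2.0)                          # per-trader P&L volatility (EUR m, assumption)
rho = 0.2                                        # correlation between traders' books (assumption)
corr = np.full((n, n), rho) + (1 - rho) * np.eye(n)
var_target = 15.0                                # bank's target VAR (EUR m, assumption)

signs = rng.choice([-1.0, 1.0], size=(sims, n))  # decentralized long/short decisions
w = signs * sigma                                # signed exposures under full limit usage
portfolio_sigma = np.sqrt(np.einsum('si,ij,sj->s', w, corr, w))
portfolio_var = z * portfolio_sigma              # one simulated draw from F(VAR) per scenario

def cost(deviation):
    # assumed cost function: hedging costs if risk exceeds the target,
    # opportunity costs (underutilization) if risk falls short of it
    return np.where(deviation < 0, 1.0 * (-deviation), 0.3 * deviation)

expected_cost = cost(var_target - portfolio_var).mean()
print(f"estimated E(C) = {expected_cost:.2f} EUR m")
```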


3 IMPLICATIONS OF RELATED FIELDS OF RESEARCH

3.1 Different situations of economic capital allocation

The financial literature provides a broad spectrum of different issues which all ultimately involve situations of economic capital allocation. The current chapter gives a non-exhaustive overview of these issues. Besides describing the respective fields of research, the following chapters discuss the different fields' implications for the current approach. Furthermore, the categorization of the wider field of economic capital allocation issues helps to define the focus of the present model. One particular situation of economic capital allocation is the calculation of risk contributions. The starting point for risk contribution considerations is always a given portfolio. Assessing the risk contributions of the portfolio's single components reveals these components' consumption of economic capital, which ultimately represents a form of economic capital allocation. Furthermore, there is the field of axiomatization, which imposes conditions on the allocation of economic capital. Axiomatization does not actually represent a further allocation situation of its own but is closely related to the field of risk contribution. It aims at defining allocation schemes which are free of inconsistencies. The notion of dynamic risk measures recognizes economic capital allocation as a dynamic situation. This field focuses on analyzing the variation of a position's consumption of economic capital over time as new information arrives or intermediate cash flows accrue. The situation of economic capital allocation as a means of corporate management features a strict ex-ante perspective. The field therefore analyzes situations where the management assigns economic capital to business units before the business units build investment positions and portfolios, respectively. And finally, there is the situation of portfolio optimization under a downside risk constraint. This situation completely disregards influences of decision makers and reduces economic capital allocation to the identification of an investment portfolio maximizing the expected portfolio return given the available economic capital.

The most relevant situations for the present approach are the ex-ante perspective, which understands economic capital allocation as a situation of corporate management, and the situation of portfolio optimization under a downside risk constraint. These two fields of research in particular provide useful implications for the current model focusing on the optimal economic capital allocation in banking on the basis of decision rights.

3.2 Risk contribution – a form of economic capital allocation

3.2.1 Risk contribution schemes

The risk contribution literature represents a substantial part of the general field of economic capital allocation. The risk contribution approach assesses the consumption of economic capital by the different parts of a given portfolio. This clearly represents a form of economic capital allocation as well, particularly relevant in the context of credit risk portfolios. In contrast to the present model, this kind of economic capital allocation is fully applied by banks. Against this background, it appears advisable to examine the procedures from the field of risk contribution for potential implications for the current model of economic capital allocation. There are different potential allocation schemes in the context of risk contribution issues. The most common schemes are the stand-alone, the incremental and the marginal44 approach.45 The stand-alone approach simply disregards the desirable diversification effects arising from the total portfolio. Risk assessments concerning sub portfolios take place completely independently of the rest of the portfolio and consequently ignore the corresponding potential hedge effects. In contrast, the incremental approach does consider diversification effects properly. It determines the risk of a sub portfolio as the difference between the total portfolio's risk with and without the respective sub portfolio. Despite its methodical correctness, the incremental approach exhibits a considerable disadvantage: its resulting risk contributions do not exactly add up to the total risk of the respective portfolio. Overcoming this problem requires auxiliary calculations. The marginal approach calculates risk contributions by differentiating the portfolio's risk measure with respect to the sub portfolios' sizes and is mathematically based on the Euler theorem.

The marginal allocation is also known by the terms Euler allocation or gradient allocation. See Mausser and Rosen (2008), p. 689, for an overview concerning allocation methodologies in context with risk contribution issues.

The breakdown of a portfolio's total risk into its sub portfolios' risks thereby considers diversification effects properly. Furthermore, the calculated risk contributions add up exactly to the portfolio's total risk. Further schemes have their origin in game theory and have so far served exclusively research purposes. Denault (2001), for example, investigates the most promising approach in this context, the Aumann-Shapley method, and proves that the game theoretic modeling mathematically results in marginal risk contributions in most cases. At first sight, the risk contribution methods appear to address exclusively the ex post perspective. However, the methods can also assess potential portfolio increases or decreases, which represents an application from an ex ante perspective. Furthermore, Tasche (2004) proves the suitability of the marginal approach to risk contributions in combination with risk adjusted performance measurement by RORAC. According to Tasche (2004), increasing the exposure of a sub portfolio exhibiting an above-average RORAC results in a local increase of the total portfolio's RORAC. As a consequence, the risk contribution framework also enables an optimization by RORAC, at least to a certain extent. Nevertheless, independent of the perspective, the starting point of risk contribution considerations always consists in a certain core portfolio. Optimization within the risk contribution framework therefore always means the optimal adjustment of a core portfolio. The framework is not designed for fundamental optimizations of whole portfolio structures in the sense of portfolio optimization. The current model of economic capital allocation determines economic capital limits instead of risk contributions. An economic capital limit determines the maximum stand-alone risk the respective limit addressee is allowed to take. Particular portfolio constellations only occur after the risk taking by the limit addressees subsequent to the limit allocation. Obviously, this situation describes the state ex ante to potential risk contribution calculations, which require particular portfolio constellations as a starting point. The risk contribution framework according to Tasche (2004) also provides few implications for the present model of economic capital allocation. Tasche (2004) chooses an ex ante perspective aiming at optimizing the total RORAC of a portfolio but also requires a particular portfolio as a starting point. The present model of economic capital allocation does not start from a particular portfolio but aims at anticipating the huge number of potential portfolios arising from the decision taking of the limit addressees. As a consequence, risk contribution schemes seem to offer few implications for the present research. The following chapter verifies this finding by examining particular approaches from the research field of risk contribution.
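For the marginal (Euler) scheme described above, the risk contributions can be written in closed form for a multivariate normal portfolio. The following sketch uses assumed position sizes, volatilities and correlations to illustrate that the contributions add up exactly to the portfolio VAR (full allocation).

```python
# Sketch of the marginal (Euler) allocation for a multivariate normal portfolio.
import numpy as np
from scipy.stats import norm

z = norm.ppf(0.99)
w = np.array([5.0, 3.0, 2.0])                    # position sizes (EUR m, assumption)
vol = np.array([0.10, 0.20, 0.15])               # return volatilities (assumption)
corr = np.array([[1.0, 0.4, 0.2],
                 [0.4, 1.0, 0.3],
                 [0.2, 0.3, 1.0]])               # correlation matrix (assumption)
cov = np.outer(vol, vol) * corr

sigma_p = np.sqrt(w @ cov @ w)                   # portfolio P&L volatility
var_total = z * sigma_p                          # portfolio VAR

# Euler contributions: w_i * dVAR/dw_i = z * w_i * (cov @ w)_i / sigma_p
rc = z * w * (cov @ w) / sigma_p
print("risk contributions:", rc)
print("sum vs. total VAR :", rc.sum(), var_total)   # the contributions add up exactly
```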

3.2.2 Particular approaches from the field of risk contribution

The present chapter describes particular developments from the field of risk contribution and ends by discussing the implications of these developments for the current model of economic capital allocation. Important developments from the field of risk contribution concern the use of the marginal approach in combination with quantile based risk measures instead of variance. Owing to the central role of the risk measures' derivatives within the marginal approach, the calculation of these derivatives for quantile based risk measures represents a key issue of the research. Furthermore, there is the question of whether purely analytic forms of the marginal approach can provide sufficient results. Their advantage lies in their low computational costs. However, simulation based implementations can deal with much more complex and realistic models, and there are several successful attempts to reduce their substantial computational needs. The following gives an overview of important approaches from the literature which co-founded these developments. Tasche (2004) points out the appropriateness of the marginal allocation scheme for risk contribution issues.46 The approach investigates the use of VAR instead of variance in combination with the marginal approach. Gourieroux et al. (2000) and Tasche (2000) and (2002) particularly address the important calculation of VAR derivatives. However, especially in the credit risk sector the profit and loss distributions typically exhibit considerable degrees of skewness. The methodical weaknesses of VAR for non-elliptical profit and loss distributions therefore also suggest examining the marginal approach in combination with the superior downside risk measure expected shortfall (ES) according to Kalkbrener et al. (2004). The possibility of understanding both quantile risk measures, VAR and ES, as conditional expectations plays a central role in the calculation of their derivatives. The possibility of calculating the derivatives of VAR and ES on the basis of their expression as conditional expectations gives rise to the idea of a completely analytical determination of risk contributions. Since credit risk assessment concerns very rare loss events in the tail, Monte Carlo (MC) techniques are time consuming. Against this background, fast analytic approaches appear very favourable. Glasserman (2005) gives a very instructive example of these ambitions by describing the transfer from MC to completely analytic risk contribution methods in a fundamental manner. An important issue within the field of analytical risk contribution calculation is the proper consideration of the portfolio granularity on the basis of the granularity adjustment (GA) and the alternative method of the saddle point approximation.47


Tasche (2008) gives an overview concerning different issues in context with the marginal allocation scheme.

Different credit risk frameworks simply assume perfect portfolio granularity, disregarding the idiosyncratic risk of single addresses, which potentially causes substantial distortions in the case of heterogeneous portfolio structures. Martin and Wilde (2002) as well as Gordy (2003) use the GA in order to dispense with this simplifying assumption of perfect portfolio granularity. For more recent developments of the GA method with a focus on the Basel II framework see Gordy and Lütkebohmert (2007). Gordy and Marrone (2010) apply the GA to more complicated mark-to-market models of portfolio credit risk. An analytic alternative to the GA is the saddle point approximation by Martin et al. (2001). Martin and Wilde (2002) compare this alternative to the GA. The method uses specific saddle points in order to derive an approximation of the complete profit and loss distribution, which at the same time represents the advantage of the method compared to the GA. Giese (2006) addresses the development of a general methodology particularly appropriate for calculations by practitioners. Martin (2009) represents the ES extension of the previous saddle point workouts commonly based on VAR. However, Lütkebohmert (2009) finds limitations of the saddle point approximation in the case of double default effects and confirms that MC methods are more efficient if double default effects are involved. The difficulties originating from analytical approaches, e.g. in the context of double default effects, guarantee a high relevance of semi- or non-analytical MC based methods for risk contribution calculations, not least because of additional techniques such as importance sampling (IS) or kernel estimation which can be applied to reduce MC simulation runtimes. The IS methods successfully attempt to reduce the high number of simulations which are necessary because of the particularly rare tail events in the context of credit risk. IS basically reallocates probability mass to the tails by different techniques. Kalkbrener et al. (2004) and Glasserman (2005), for example, use and explain IS, while Glasserman and Li (2005) directly focus on IS issues. Another approach to reducing the number of simulations is kernel estimation, which approximates the probability density function of discrete random outcomes. The resulting continuous profit and loss distributions then allow for faster risk contribution methods. Gourieroux et al. (2000) investigate kernel estimation in a Gaussian framework in the context of market risk. Tasche (2009) addresses the combination of IS and kernel estimation. In contrast to the risk contribution approaches seeking fast analytical implementations, the current model of economic capital allocation is entirely based on

See also Lütkebohmert (2009), p. 1, for an overview of these developments.

the MC simulation technique in combination with importance sampling (IS). Since the current model requires global solving, the MC approach provides the much more appropriate environment for the application of heuristic optimization. So far, analytical approaches to global solving cannot compete with heuristic approaches unless the underlying optimization problem represents a very specific case. Finally, the implications of risk contribution issues for the present research are restricted to speeding up the MC simulation techniques through the use of importance sampling (IS).
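The effect of importance sampling can be indicated by a minimal, generic sketch that is not the implementation used later in this work: the tail probability of a standard normal loss variable is estimated by sampling from a mean-shifted normal and reweighting each draw with the likelihood ratio, which concentrates the simulation effort on the rare tail events. The threshold and the shift are assumptions chosen for illustration.

```python
# Minimal importance-sampling sketch: estimate P(L > c) for a standard normal loss L.
import numpy as np

rng = np.random.default_rng(seed=3)
c = 4.0                 # loss threshold far in the tail (assumption)
mu = c                  # shift of the sampling distribution towards the tail (assumption)
sims = 100_000

x = rng.normal(loc=mu, scale=1.0, size=sims)     # draws from the shifted density N(mu, 1)
lr = np.exp(-mu * x + 0.5 * mu**2)               # likelihood ratio N(0,1)/N(mu,1)
p_is = np.mean((x > c) * lr)                     # IS estimate of P(L > c)

p_plain = np.mean(rng.normal(size=sims) > c)     # plain MC for comparison
print(f"IS estimate: {p_is:.2e}, plain MC: {p_plain:.2e} (true value ~ 3.17e-05)")
```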

3.3 Axiomatization of economic capital allocation

3.3.1 Axiomatization of risk measures

The literature on the axiomatization of economic capital allocation generally originates from the literature on the axiomatization of risk measures. The present chapter therefore first gives an insight into issues concerning the axiomatization of risk measures. A further characteristic of the literature on the axiomatization of economic capital allocation is its focus on economic capital allocation in the form of the risk contribution calculations known from the previous chapter. In general, the involved axioms aim at determining desirable properties of appropriate risk measures and capital allocation schemes. In the case of risk measures, different axioms determine the coherency property and therefore the category of coherent risk measures. The relaxation of these axioms leads to the further category of convex risk measures. The axiomatization literature on economic capital allocation transfers the coherency and convexity properties from the field of risk measures to that of economic capital allocation. Particularly in the case of the coherency property, the literature provides a similarly consistent framework of axioms for capital allocation as for risk measures. In contrast, in the case of the convexity property, already the use of convex risk measures in the context of economic capital allocation causes substantial difficulties. Hence, the literature does not yet provide a generally accepted axiomatic framework for a convex allocation of economic capital. Despite its closeness to the case of risk contribution calculation, the literature on the axiomatization of economic capital allocation potentially provides implications for the present model of economic capital allocation. In order to introduce this field of research, the present chapter first describes the axiomatization of risk measures as a starting point for the axiomatization of economic capital allocation. The subsequent chapter then provides insights concerning the transfer of the axiomatization framework of risk measures to the case of economic capital allocation.

The rise of the downside risk measure VAR at the same time revealed several weaknesses of this risk measure. As a consequence, Artzner et al. (1999) introduce four axiomatic properties that risk measures should adhere to in order to be coherent: monotonicity, translation invariance, positive homogeneity and subadditivity.48 According to monotonicity, the risk of a portfolio has to be at least as high as the risk of a second portfolio which achieves higher results in every state. The axiom of translation invariance demands that the risk of a portfolio decreases by the amount of cash added to the portfolio. Furthermore, positive homogeneity requires that if each position of a portfolio is multiplied by a certain factor, the risk of the portfolio also changes by this factor.49 And finally, the subadditivity axiom demands that the sum of the stand-alone risks of two portfolios is at least as great as the risk of their consolidated portfolio in any situation. Artzner (1997) and (1999) emphasize that VAR does not feature subadditivity for non-elliptical profit and loss distributions. As a consequence, since the important credit risk sector commonly deals with heavily skewed profit and loss distributions, diversification does not necessarily reduce credit risk measured by VAR. Artzner (1997) introduces a remedy in the form of the subadditive and coherent risk measure expected shortfall (ES).50 ES represents the average loss of, e.g., a credit deal occurring if the loss exceeds the deal's VAR. Acerbi (2002) determines a whole category of coherent risk measures in the form of the spectral risk measures, which also include ES. Spectral risk measures use weighted averages of the loss quantiles under the condition that the weights follow a non-decreasing function of the losses. Föllmer and Schied (2002) as well as Frittelli and Gianin (2002) present a relaxation of the coherency axioms in the framework of convex risk measures. A simple example in the context of liquidity risk reveals the need for this relaxation. Increasing a portfolio's positions in practice commonly results in additional liquidity risk, which in turn tends to increase non-linearly. The positive homogeneity axiom, however, excludes such over-proportional increases in risk, suggesting its replacement by the convexity axiom. The convexity axiom allows portfolio risk to increase by a higher factor than the one the portfolio positions are multiplied with. This also includes cases which result in exponential increases of the portfolio risk.


See e.g. Jorion (2007), p. 113 for a description of the coherency axioms or e.g. Hull (2012), p. 118. Note that the positive homogeneity axiom of Artzner (1999) finally corresponds to the positive homogeneity of degree 1. See also Rockafellar and Uryasev (2000) and (2002), Acerbi and Tasche (2002), and Frey and McNeil (2002) for investigations on expected shortfall. Pflug (2000) gives a comprehensive mathematical proof of the coherency properties of ES.

The new convexity axiom at the same time induces subadditivity. Therefore, the convexity framework only demands that risk measures satisfy convexity, monotonicity and translation invariance.

3.3.2 Transfer of the axiomatization framework to economic capital allocation

Denault (2001) transfers the coherency considerations concerning risk measures to the process of economic capital allocation and likewise establishes axiomatic properties. A coherent economic capital allocation scheme consequently has to satisfy the axioms of full allocation, no undercut, symmetry and riskless allocation. Note that the present chapter addresses economic capital allocation in the form of risk contribution calculations. The axiom of full allocation demands the complete assignment of the economic capital to the involved sub portfolios, expressed as percentages.51 No undercut determines that no capital assignment to a sub portfolio or combination of sub portfolios can exceed the stand-alone capital requirements of these portfolios. The symmetry axiom requires the capital assignments to the sub portfolios to depend exclusively on these portfolios' risk. Hence, portfolios exhibiting the same risk are provided with equal amounts of economic capital. Finally, the axiom of riskless allocation demands that increases in a sub portfolio's cash position result in equivalent decreases of this portfolio's capital assignment. Denault (2001) proves that, for risk measures featuring homogeneity of degree one, the marginal allocation scheme is the only one satisfying all of the established axioms. To this end, he uses a game theoretic model on the basis of the Aumann-Shapley value. Despite the game theoretic origin of the approach, its outcome mathematically corresponds to that of the marginal allocation scheme. Kalkbrener (2005) and (2009) also give an axiomatic definition of economic capital allocation. Kalkbrener (2009) focusses on the closeness of the two coherency frameworks of risk measures and capital allocation. Basically, the approach addresses the mathematical proof that the "risk measure of a coherent allocation scheme is coherent and, conversely, for every coherent risk measure there exists a coherent allocation". Kalkbrener (2009) finds the coherency frameworks of risk measures and capital allocation to be equivalent. Buch and Dorfleitner (2008) also investigate the links and overlaps between the two frameworks. Finally, they too propose the marginal allocation as an appropriate and coherent allocation scheme.


See also Albrecht (2003), pp. 8-9 and West (2010), p. 22 for descriptions of the axioms of coherent capital allocation.

As mentioned at the beginning of this chapter, there is no generally accepted convexity framework for capital allocation yet. Tsanakas (2009) reveals that already the use of convex risk measures for economic capital allocation causes considerable difficulties. The investigation mainly addresses three of them: The marginal allocation scheme based on the Euler theorem cannot be used in combination with convex risk measures because they are not positively homogeneous. Furthermore, entropic risk measures52, for example, a so far mainly academically relevant category of convex risk measures, become superadditive for positively dependent risks, which requires negative correlations for hedging purposes. And finally, convex risk measures in combination with the necessary game theoretic allocation scheme on the basis of the Aumann-Shapley value create incentives to split portfolios in order to reduce risk. Above all, Tsanakas (2009) draws attention to these rather fundamental problems and does not focus on the development of an axiomatic framework. However, the analyses show that the Aumann-Shapley-value based allocation scheme can be used for economic capital allocation with convex risk measures under certain limitations. Furthermore, Tsanakas (2009) characterizes the Aumann-Shapley-value based allocation scheme as the general form of the marginal allocation scheme. In general, the literature addresses convexity issues rather in the context of the new field of dynamic risk measures than in the context of developing a convexity framework for economic capital allocation. The subsequent chapter introduces the essential literature from the category of dynamic risk measures. What are the implications of the field of axiomatization of economic capital allocation for the current model? The reference of the formulated axioms to the situation of risk contribution calculation creates an elementary distance between the introduced approaches of axiomatization and the current model of corporate management by risk limit systems. The present model focusses neither on testing nor on developing axioms for risk measures or economic capital allocation schemes. Furthermore, the model generally uses VAR as the default risk measure in order to develop economic capital allocations. The model confines itself to additionally assessing the risk of the final economic capital allocations by the coherent and convex risk measure expected shortfall (ES). This provides more information and allows for additional plausibility checks.


As a characteristic entropic risk measures feature exponential weighting of the losses exceeding VAR and therefore represent a specialization of the coherent spectral risk measures. See Föllmer and Schied (2002) for a detailed description of entropic risk measures.

The additional computation of ES makes sense since the model's relevant profit and loss distributions are not exactly elliptical. Instead, these distributions exhibit a certain skew resulting from the model's consideration of the decision makers' influences. As a consequence, VAR potentially fails to capture diversification effects entirely correctly. The additional risk assessment by ES therefore helps to recognize and exclude inappropriate calculations or miscalculations.
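The following sketch indicates why the additional ES computation is informative for skewed, non-elliptical distributions: for two independent credit-like exposures, empirical VAR can violate subadditivity while the corresponding ES does not. Default probability, loss amount and confidence level are assumed for illustration only.

```python
# Sketch: VAR can violate subadditivity for skewed losses, ES does not.
import numpy as np

rng = np.random.default_rng(seed=11)
n = 200_000
loss_a = 100.0 * (rng.random(n) < 0.04)     # exposure A: loss of 100 with probability 4 %
loss_b = 100.0 * (rng.random(n) < 0.04)     # exposure B: independent, same profile

def var_es(losses, alpha=0.95):
    losses = np.sort(losses)
    k = int(np.ceil(alpha * len(losses)))
    var = losses[k - 1]                     # empirical alpha-quantile of the loss
    es = losses[k:].mean()                  # mean of the worst (1 - alpha) losses
    return var, es

va, ea = var_es(loss_a)
vb, eb = var_es(loss_b)
vp, ep = var_es(loss_a + loss_b)
print(f"VAR: {va} + {vb} < {vp}  -> subadditivity violated")
print(f"ES : {ea:.1f} + {eb:.1f} >= {ep:.1f} -> no violation")
```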

3.4 Risk assessment over time by dynamic risk measures

Common risk measures normally use a static setting based on a single period framework. These measures implicitly assume that the risk at the beginning of the period refers exclusively to the respective exposure's uncertain cash flow at the end of the period. The uncertainty manifests itself in a particular profit and loss distribution attributed to the cash flow. The fact that cash flows might sometimes accrue successively over time remains disregarded. In contrast, the characteristic of dynamic risk measures consists in their multi-period modeling. Dynamic risk measures consider a sequence of periods in which the exposure can generate further cash flows at the end of each period. Another characteristic of dynamic risk measures consists in their consideration of additional information. Additional information in this case means information which is relevant to the risk assessment and available at the moment of the assessment. A simple example is the additional information, available at the beginning of a period, that only half of the potential losses have to be accounted for. As a consequence, the risk assessment takes place conditional on that information. This example at the same time introduces the notion of conditional risk measures. The relevant literature classifies risk measures as conditional risk measures if they satisfy the regularity condition. According to Detlefsen and Scandolo (2005), a regular conditional risk measure "should depend only on that part of the future which is not ruled out by the additional information". Finally, the framework of dynamic risk measures extensively uses the notion of conditional risk measures. As a consequence, the relevant literature describes dynamic risk measures as sequences of conditional risk measures. There are certain affinities between the current model of economic capital allocation and the concept of dynamic risk measures. The current model's extended version also assesses risk conditional on further information. The information refers to whether the single decision makers have achieved a profit or a loss with their latest trade. This information enters a Bayesian learning algorithm updating approximations concerning the decision makers' prospects of success.

Before providing further information on the affinities between the current model and the concept of dynamic risk measures, the chapter introduces relevant approaches from this field of research. Wang (1999) represents an early approach using the expression "dynamic risk measure" and provides three universal reasons for the application of dynamic risk measures: intermediate cash flows, the need for a consistent assessment of risk over time considering new information and, finally, risk related dynamic optimization problems. In general, the literature on dynamic risk measures addresses the relations between the outcomes of different risk assessments over time. The first two reasons from Wang (1999), however, represent two research issues with different focuses which at the same time constitute the main research issues addressed by the more recent literature.53 One main research issue concerns intermediate cash flows. This branch investigates dynamic risk measures in the sense of risk measures for processes. The approaches focus on the time value of money. Besides the amounts of the cash flows occurring within the processes, the approaches in particular care about their timing. For example, whether the higher cash flows occur at the beginning or at the end of a process significantly influences the outcome of the risk assessment. More recent approaches focusing on risk measures for processes are Acciaio et al. (2010b), Jobert and Rogers (2008) and Artzner et al. (2007). The other main research issue is the investigation of time consistency. Cheridito et al. (2006) use an illustrative description of time consistency by calling "a dynamic risk measure time consistent if it assigns to a process of financial values the same risk irrespective of whether it is calculated directly or in two steps backwards in time". The time consistency approaches are of an axiomatic nature and introduce various axioms aiming at establishing time consistency. The present remarks, however, confine themselves to a quick overview of the field of time consistency issues without going into detail. A certain starting point for the notion of time consistency is the case of strong time consistency according to Cheridito et al. (2006) quoted above. The strong form of time consistency allows for the application of the Bellman principle or dynamic programming principle, respectively.54 Artzner et al. (2007), for example, particularly investigate the use of the Bellman principle. The principle originally addresses the division of optimization problems into sub problems. The notion of dynamic risk measures relates the sub problems to periods, whereby the principle's recursive approach can be used for time consistency issues.


Acciaio and Penner (2010a), pp.1-2, provide a review and categorization of the relevant literature on dynamic risk measures. See Bellman (1954) for a description of the principle.

Besides the strong form of time consistency, many approaches also investigate weaker forms for which the application of the Bellman principle fails. Analyses focusing on time consistency issues are Bion-Nadal (2008), Penner (2007) and Tutsch (2008). A further impression of the variety of the different investigations from the field of dynamic risk measures is given by the following potential characteristics of these investigations:55 There are models assuming only one final cash flow and models considering cash flow processes. If an investigation addresses cash flow processes, these processes can be bounded or unbounded. The different approaches use the framework of coherent and/or convex dynamic risk measures. Furthermore, the approaches apply discrete or continuous and finite or infinite time frames. Finally, there are approaches assuming finite probability spaces and approaches assuming general ones. The implications of the concept of dynamic risk measures for the current model are limited. After all, the current model assumes only one period. The mentioned Bayesian learning process applied by higher versions of the model confines itself to updating the estimated characteristics of the decision makers only once, at the start of the period. Subsequently, the estimators enter the profit and loss simulations of the model. The model then performs the optimal allocation of economic capital according to an expected return maximization until the end of the period. Therefore, the optimization also clearly refers to one single period and does not represent a case of dynamic optimization or dynamic programming, respectively. Nevertheless, the present model undertakes the risk assessment and the optimization conditional on the updated approximations of the decision makers' prospects of success and therefore applies conditional risk measures. As a consequence, the model's potential extension by a sequence of related executions of the single period model would correspond to the framework of dynamic risk measures for a weaker form of time consistency. The use of Bayesian estimators would represent the special case of uncertain and incomplete additional information. Tutsch (2008) investigates the case of incomplete information and reveals the strong time consistency framework to be too restrictive for updating concepts. Mundt (2008) also provides "a Bayesian control approach" which faces the information's incompleteness by improving the information's precision through Bayesian updating over time. A corresponding extension of the model, however, would not share the axiomatic perspective of the approaches from the field of dynamic risk measures.

See Mundt (2008), p. 52, for a list of the different characteristics of approaches addressing dynamic risk measures.

53 stead, it would represent a Monte Carlo implementation of the framework of dynamic risk measures. This implementation’s further characteristics would be a bounded process of uncertain cash flows and uncertain information within a discrete and finite time framework on the basis of a finite probability space. The Bellman’s principle could not be applied in context with this model, at least for its weaker form of time consistency. Instead, for an optimization over time, two approaches would be imaginable. The simple one represents a successive forward directed approach corresponding to a sequence of one period based optimizations where each period’s optimization bases on the results of the previous one. In contrast, an integrative approach of real optimization over time would require the optimization over the whole sequence of considered periods potentially causing considerable computational costs. However, such comprehensive model extensions at first require the proper testing of the basic single period model in the form of the present investigations.

3.5 Economic capital allocation as a means of corporate management

The most characteristic aspect of economic capital allocation as a means of corporate management is the idea of delegating risky business. The delegation consists in the allocation of economic capital in the form of downside risk limits, commonly VAR limits. The limits enable the addressees to take risky investment decisions autonomously within their limits' scopes. The delegation aims at using the individual information of the decentralized decision makers, who exhibit certain information advantages compared to the central management. As a consequence, economic capital allocation in this context obviously takes place from a clear ex ante perspective. Ex ante here means previous to the actual taking of investment decisions by the decentralized decision makers. The present understanding of economic capital allocation therefore fundamentally aims at considering the influence of the involved decision makers. This contrasts fundamentally with the understanding of economic capital allocation in the fields of risk contribution calculation, axiomatization or dynamic risk measures. In the case of economic capital allocation as a means of corporate management, no longer exclusively the profit and loss distributions of the financial instruments but in particular those of the decision-making units have to be considered. These distributions reflect the individual acting of these business units, characterized by their private information level, their preferences for long or short positions and their tendency to observe and imitate the trading decisions of their colleagues.

Given the complexity of this situation of economic capital allocation, there are few approaches really focusing on the consequences of recognizing decision makers as economic capital addressees. The following introduces approaches which generally address the case of economic capital allocation as a means of corporate management. Nevertheless, these approaches often lack a consistent and comprehensive consideration of the decision makers' role and influence. The different mechanisms involve: adjustments of the status quo by negotiations, implicit allocation by leaving intermediate profits to the respective decision makers, market mechanisms, allocation through the central management, portfolio theoretical mechanisms and allocation under consideration of individual acting. Several approaches also apply mixtures of these mechanisms. The final part of the present chapter discusses important aspects a consistent and comprehensive approach of economic capital allocation as a means of corporate management should integrate.

The simplest allocation mechanism consists in maintaining the status quo.56 The limit setting simply follows the capital requirements of a business unit's investment portfolio from the past. Negotiations between the central management and the responsible decision makers determine adjustments of the capital requirements. In this context the literature also uses the terms top-down and bottom-up approaches. In the best case, such a limit allocation reflects the strategic preferences of the central management. The negotiating skills and power of the decentralized decision makers, however, also influence the outcomes of the negotiations. Finally, the adjustment of the limit system does not correspond to an optimization. Diversification effects cannot be considered in a holistic portfolio theoretical sense. Furthermore, the exclusive use of negotiations does not represent a proper consideration of the particular consequences arising from strictly recognizing decision makers as capital addressees.

56 In the following, the remarks on the allocation of economic capital by maintaining the status quo, the implicit allocation and market mechanisms are based on Burghof and Müller (2007).

Beeck et al. (1999) in a sense provide a refinement of the status quo approach by focusing on the consideration of intermediate profits and losses. These intermediate profits and losses constantly increase and decrease the economic capital of a bank and should result in a dynamic adjustment of the limit system. The most obvious form of considering the intermediate profits and losses is to increase or decrease the respective organizational unit's economic capital limit. The additional economic capital provides successful decision makers with a larger range of business opportunities with which they can achieve higher profits and provisions. This corresponds to an implicit process of learning by the central management: the central management passes economic capital in particular to those business units which proved to be qualified in the past. The success and the failure of decision makers, however, are always subject to randomness to a considerable extent. As a consequence, the present allocation of intermediate profits and losses is subject to the same randomness, potentially causing misallocations. The points of criticism concerning the status quo approach also hold true for the implicit allocation of intermediate profits and losses. Despite the implicit learning effect, the approach cannot be considered to comprehensively integrate the influences of the individual acting of the decision makers. Therefore, the approach also does not truly recognize the decision makers as economic capital addressees with all its consequences.

From the perspective of neoclassical economic theory the best solution for the allocation of a scarce resource is always a market, in the present case an internal market for economic capital within a bank. For example, Williamson (1975) and Stein (1997) investigate capital budgeting through internal markets. Saita (1999) particularly addresses the specific case of economic capital allocation.57 Such a market offers the bank-wide available economic capital to the single organizational units demanding the economic capital to cover their future investments' risks. Thereby, the market price for economic capital at the same time represents the generally applied hurdle rate of return. Unfortunately, the model of an internal market cannot simply be transferred to the case of economic capital allocation. Indeed, market generated capital allocations can be assumed to correctly reflect the expected profits of the single organizational units. However, they generally neglect a comprehensive consideration of diversification effects and can therefore cause undesired risk concentrations in promising but also risky business operations. There is no actual optimization of the return to risk ratio of the bank. Furthermore, the question arises whether single decision makers from particularly risky business sectors might be animated to acquire overpriced economic capital, since the decision makers are not personally liable but can push their careers significantly with high profits.

57 See Saita (1999), p. 100.

Froot and Stein (1998) point out the interdependencies between the capital structure, the hedging activities and the capital budgeting of a bank by an integrative model of risk management. Their fundamental findings are: Banks should hedge any risk which can be hedged at fair market terms. For the remaining illiquid risks they should hold some absorbing economic capital. Finally, in consequence of the scarceness of economic capital, the bank should value illiquid risks similarly to an individual investor by the risks' impact on the bank's total risk and return with portfolio theoretical rationality. Froot and Stein (1998) emphasize that the total portfolio perspective demands the joint and endogenous determination of the capital structure, the hedging activities and the capital budgeting. They conclude that this requires a "well-informed central planner operating out of bank headquarters". At the same time they recognize a completely centralized decision taking to be unfeasible and disadvantageous. Froot and Stein (1998) identify issues of centralization and decentralization as very important in the context of banks. In their opinion, such investigations demand a less general approach on the basis of practical "capital or position limits for individuals or small units within a bank". With their model, Froot and Stein (1998) do not present a detailed implementation of a particular allocation mechanism. However, from their general approach's findings they derive an outline of how such an implementation should fundamentally look. Finally, they postulate an economic capital allocation according to portfolio theoretical aspects. The allocation should address individuals or smaller organizational units as capital addressees and has to apply methods of centralization and decentralization at the same time for a proper allocation.

Perold (2005) introduces a model of economic capital allocation in financial firms which is driven by a potential deadweight loss of holding economic capital. In the case of economic capital which is provided in the form of guarantees this deadweight loss arises from adverse selection and moral hazard. The guarantors face this loss because of monitoring difficulties. In the case of economic capital in the form of equity the deadweight loss arises from taxes and "agency costs of free cash flow". The agency costs stem from the inefficient use of economic capital if the provided amount of capital is too high. The model finds that economic capital allocation should be guided by the net present value of a project's cash flow discounted by market determined rates and reduced by the deadweight loss of economic capital. The latter depends on the project's marginal contribution to the bank-wide risk. Furthermore, the model demonstrates that a bank will diversify into business operations which exhibit similar deadweight losses. Perold (2005) concludes that the model finally suggests a central authority taking investment decisions but does not provide a description or investigation of these processes. Furthermore, the model does not clearly recognize decision makers as the appropriate addressees for economic capital. Instead, the model allocates capital to "projects" while a detailed description of the influences of the individual acting of potentially involved decision makers remains disregarded.

Stoughton and Zechner (2007) investigate economic capital allocation by a very comprehensive model. The model addresses, among other things, also the optimal overall amount of economic capital and hurdle rate issues. In consequence of the model's comprehensiveness, the following review addresses only the aspects which are relevant in the context of the current approach. Stoughton and Zechner (2007) declare themselves to extend the research of Froot and Stein (1998). Therefore, they choose a framework in which a central authority delegates business decisions to decentralized, privately informed operational units. In consequence of the underlying case of asymmetric information, the central management applies methods inducing truthful reporting. On the basis of the operational units' information, the central authority provides each unit with an individualized capital allocation function. The decentralized units choose their exposure to risk depending on this function. Thereby, the Bayesian Nash incentive compatibility condition finally induces the allocation of economic capital which is optimal in a portfolio theoretical sense. However, the agency theoretical modeling ignores important aspects of the situation of allocating economic capital to decision makers. The allocation only considers private information while the treatment of further characteristics of the operational units, e.g. informational processes between them, remains disregarded. Finally, the chosen modeling does not further consider the use of a limit system. From the perspective of a practical implementation of economic capital allocation as a means of corporate management, however, limit systems represent an unavoidable methodology.

Straßberger (2006) clearly recognizes the business units themselves as capital addressees. This implies that the allocation process of economic capital refers not only to financial instruments but also to decision makers. The approach assumes a non-linear relation between the assigned economic capital and the profit of each business unit. This relation in the form of a continuous convex function guarantees an exact solution to the allocation problem of economic capital. The model assumes allocations which are optimal in a portfolio theoretical sense. Straßberger (2006) dispenses with integrating agency theoretical aspects and generally represents a much less comprehensive approach compared to Stoughton and Zechner (2007). A positive effect of this simplicity consists in the approach's use of a limit system generating a closer relation to common banking practice. However, similar to Stoughton and Zechner (2007), the approach assumes mathematical functions to express the return of a business unit in dependence on the unit's economic capital limits. As a consequence, the approach only superficially considers the influences of capital addressees being decision makers. The approach does not provide a closer insight into the difficulties of considering behavioral aspects in the context of a consistent corporate management system by economic capital allocation.

Laeven and Goovaerts (2004) investigate the optimal amount of economic capital of a financial conglomerate, the optimal allocation of the economic capital among the conglomerate's subsidiaries and "a consistent distribution of the cost of risk-bearing borne by the conglomerate". The model assumes a higher authority to implement capital allocations within the financial institution. An important part of the investigations concerns the influences of different levels of information of this authority concerning the dependence structure among the subsidiaries. The approach also provides a model extension in the form of a dynamic setting where Bayesian inference provides information updates concerning these dependencies. The comprehensive framework of Laeven and Goovaerts (2004) indeed suggests the analysis of influences originating from the capital addressees' individual acting. The model assumes the capital addressees in the form of subsidiaries of the financial institution to be interrelated by a certain dependence structure. However, the approach provides no further details on these dependencies. There is no definition of whether the dependencies represent correlations between financial instruments' returns or correlations between the business decisions taken by the subsidiaries themselves. Hence, the approach finally also does not provide a deeper insight into issues of allocating economic capital optimally among autonomous decision makers.

Burghof and Sinha (2005) investigate the allocation of economic capital by a central authority on the basis of a risk limit system. The central authority delegates the taking of risky business decisions to the decentralized and privately informed decision makers through assignments of economic capital in the form of VAR limits. The approach emphasizes the difference between assessing the risk of a portfolio of particular positions and the risk assessment in the context of VAR limit allocations. Since in the latter case the positions are not yet built, the approach identifies the situation of VAR limit allocation as an allocation of decision rights. Furthermore, Burghof and Sinha (2005) emphasize that one of the major challenges of constructing consistent VAR limit systems consists in these systems' ability to successfully anticipate the individual acting of the limit addressees. Thereto, the approach draws the elucidating scenario where privately informed economic capital addressees observe the decision making of their colleagues. As soon as the imitation of these decisions appears more promising from a rational perspective than following individual information, the decision makers start to implicitly compose very one-sided bank-wide investment portfolios. This behavior of the decision makers causes weakly diversified portfolios to occur much more often than expected according to neoclassical theory. In order to prevent considerable risk underestimations the VAR limit system has to anticipate these behavioral influences properly. Burghof and Sinha (2005) in particular identify weak market trends to bear considerably more risk than expected from the neoclassical point of view. Uncertain market trends do not necessarily induce rather diversified portfolios consisting of long and short positions. Instead, uncertain market trends also potentially induce a higher threat of decision makers entering informational cascades by imitating the trading decisions of their colleagues. Furthermore, in times of uncertain market trends the probability of entering a disadvantageous informational cascade lies much higher than in times of distinctive trends. The approach clearly identifies the need to focus on the correlations between decisions instead of those between financial instruments' returns in order to assess the inherent risk of VAR limit allocations. Finally, Burghof and Sinha (2005) emphasize that the neoclassical understanding of assessing risk known from common portfolio theory cannot simply be transferred to the case of economic capital allocation as a means of corporate management.

The current model is closely linked to the approach of Burghof and Sinha (2005). Among the introduced approaches Burghof and Sinha (2005) is the only one strictly focusing on the direct consequences of allocating economic capital among autonomous decision makers. The current model therefore adopts the interpretation of the risk limits as decision rights. It addresses the postulation of Burghof and Sinha (2005) to anticipate the identified potential informational processes in the VAR limit setting in order to widely prevent costly risk adjustments which potentially become necessary ex post to the position taking. In addition to Burghof and Sinha (2005), the current model also addresses the optimization of the limit system according to risk and return aspects in a portfolio theoretical sense. In this context, the limit system should not only consider the correlations between the business decisions of the decentralized business units but also the units' individual information and prospects of success respectively. The consideration of all these issues requires the simultaneous centralization and decentralization postulated by Froot and Stein (1998). The proper VAR limit setting demands a central data base of comprehensive information about the decentralized decision makers' properties. Beyond that, the optimization of VAR limits is technically challenging. If the capital requirements are realistically assessed by VAR, the optimization inevitably involves global optimization and solution spaces with many local extreme points. In contrast, Burghof and Sinha (2005) and many other approaches from the literature do not address aspects of optimization. Another category of approaches simply uses mean variance frameworks and/or agency theoretical mechanisms, confining themselves to highly abstract considerations of the relevant optimization issues. The proper investigation of the influences of the informational processes and the behavioral characteristics on the construction of efficient VAR limit systems, however, demands an optimization algorithm capable of solving the kind of optimization problems actually arising during these investigations. One major aspect of the current model, therefore, is the introduction and use of a corresponding optimization algorithm.


3.6 Portfolio optimization under a downside risk constraint

3.6.1 Approaches on the basis of traditional methods of optimization

In fact, portfolio optimization under a downside risk constraint does not describe the situation of economic capital allocation by VAR limits in the sense of a corporate management system. Instead, it rather corresponds to the situation of a limit addressee aiming to use his assigned risk limit optimally according to common portfolio theoretical aspects. In contrast, within the current model the portfolio theoretical optimization of the assigned risk limits themselves is in focus. Nevertheless, in consequence of the fundamental similarities of both notions, the complexity of their underlying optimization problems can also be expected to be of a similar level. For non-elliptical loss distributions, the portfolio optimization under a VAR constraint, for example, represents a global optimization problem involving solution spaces with multiple local extreme points. It therefore appears promising to examine the approaches from the literature addressing portfolio optimization under a downside risk constraint in order to learn about potentially suitable optimization methods for the current model. The optimization methods of these approaches might also be applicable to solving the current limit setting problem. The following introduces approaches using traditional optimization methods in the form of linear optimization or gradient based methods for the portfolio optimization under a downside risk constraint and discusses these optimization methods' suitability for the current model's optimization.

Rockafellar and Uryasev (2000) present a linear approach to VAR minimization on the basis of the calculation of ES. To be precise, they first minimize the coherent and convex risk measure ES and then transform it into VAR. Since ES technically always exceeds VAR, this method at the same time represents a kind of VAR minimization. Krokhmal et al. (2001) show that the method can also be applied to the maximization of e.g. expected return under an ES constraint instead of using it for the minimization of ES. Rockafellar and Uryasev (2000) find that the actually non-linear problem of minimizing ES can be reduced to a linear one in cases where uncertainty originates from a finite set of scenarios as an approximation of reality. This reduction to linearity is especially useful for portfolios with more than 1,000 single positions. For smaller portfolios heuristic optimization methods with stochastic elements require similar efforts. Therefore, the approach provides a fast method for the optimization of portfolio ES and VAR in the context of large portfolios.
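The core of this linearization is an auxiliary function. For a discrete set of m equally weighted loss scenarios $L_i(w)$ of a portfolio with weights $w$ and for the confidence level $\beta$, a compact restatement of the idea of Rockafellar and Uryasev (2000) reads (the notation here is illustrative and not that of the present model):

$F_\beta(w, a) = a + \dfrac{1}{(1-\beta)\,m} \sum_{i=1}^{m} \max\{L_i(w) - a,\, 0\}$

Minimizing $F_\beta$ jointly over $w$ and the threshold $a$ yields the minimal ES, and a minimizing $a$ corresponds to the associated VAR. Replacing the positive parts by auxiliary variables $z_i \geq L_i(w) - a$, $z_i \geq 0$ turns the problem into a linear program whenever the losses are linear in $w$.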

While the first version of the approach exclusively uses continuous loss distributions, Rockafellar and Uryasev (2002) also successfully apply the linear shortcut via ES to discrete loss distributions. Many of the following approaches from the literature address the new method of optimizing ES and VAR by the introduced linear shortcut. However, the investigations also deal with the boundaries of the ES-based linear shortcut. Gaivoronski and Pflug (2005) emphasize the necessity for investors who depend on VAR measurement to directly optimize VAR instead of using e.g. ES or variance as a substitute because of substantial differences among the resulting efficient frontiers. Also e.g. Gilli et al. (2006) illustrate such differences. Admittedly, Larsen et al. (2001) show that, starting from a portfolio with minimum ES, the further minimization of VAR regularly leads to substantial risk increases if the respective portfolio's risk is assessed by ES. Since ES represents the average loss for losses exceeding the respective VAR, this fact is very important. Also Maringer (2005a), (2005b) and Maringer and Winker (2007) investigate similar risk increases stemming from VAR minimization. However, bank leaders are likely to consider these facts irrelevant if the resulting VAR-minimizing portfolios promise higher short term profits and, furthermore, if VAR represents the crucial risk measure from the regulatory perspective.

Gourieroux et al. (2000) directly address portfolio optimization under a VAR constraint without using the ES shortcut. The approach focuses on investigating the VAR differentiation on the basis of a Gaussian framework. The resulting analytical forms then undergo a transformation to a more general approach. Finally, Gourieroux et al. (2000) successfully apply their method within a simple two-stock example of convex portfolio optimization by Lagrange multipliers. The optimization uses a parametric Gaussian loss distribution and also a nonparametric loss distribution estimated on the basis of the Gaussian kernel. At least for guaranteed convex frameworks the approach's analytical form represents a reasonable and fast method for portfolio optimization under a VAR constraint.

However, Gilli and Schumann (2011), Gilli and Schumann (2010) and Gilli et al. (2006) note in the context of their investigations on heuristic optimization methods that exact traditional approaches of optimization under VAR and ES constraints in the form of the introduced ones significantly lack flexibility. If the underlying model requires slight modifications for particular investigations, the traditional approaches are very likely to become inapplicable. This is for example the case for the extension of the optimization problem by realistic integer constraints, for example cardinality constraints restricting the overall number of positions in a portfolio, constraints on the number of shares per portfolio position and also limits on transaction costs. Since VAR is not convex at all for the general case of non-parametric loss distributions, the use of the exact traditional methods according to the approach of Gourieroux et al. (2000) will inevitably fail for less simple example portfolios. For larger numbers of portfolio components, already the mere use of the non-coherent and non-convex risk measure VAR, without any further integer constraints, involves the typical solution spaces of non-convex problems with multiple local extreme points requiring global optimization methods. In this case, a traditional gradient based optimization method has at least to be extended by certain restart strategies in order to achieve a satisfying coverage of the solution space.

The current model uses the risk measure VAR by default. This is due to the fact that VAR still occupies a central position in risk management and for regulatory issues of banks, despite its known weaknesses. Furthermore, within the current model it cannot be guaranteed that all solution spaces potentially arising during the investigations are of a convex nature. Which kinds of solution spaces have to be faced by the model's optimization method is unknown to a certain extent. Indeed, the model's use of normally distributed stock returns at first even suggests a coherent framework. The influences of informational processes and behavioral characteristics concerning the decision makers, however, transform these returns into non-elliptical, skewed distributed returns of the business units. In consequence of the current model's focus on these business units' returns, the underlying optimization problem must be expected to be non-convex. Furthermore, the current model does not exactly correspond to the situation of common portfolio optimization. The necessary modifications in the context of the model's demand for optimizing VAR limits instead of investment positions also potentially bear impediments to applying an exact traditional optimization approach. For example, the simulation of the informational processes and behavioral characteristics concerning the decision makers demands a discrete and scenario based modeling. These simulation processes include Bayesian inference techniques, wherefore also the use of the linear ES shortcut method appears too inflexible and complicated if not impossible. The use of the ES shortcut appears less appropriate not least because of the mentioned fact that its VAR solutions significantly differ from those achieved without using the ES shortcut. Finally, transforming the discrete scenario-based modeling of the present approach into a closed form enabling traditional optimization methods appears unrewarding.

3.6.2 Heuristic methods of optimization

3.6.2.1 Categorization of the field of heuristic optimization

Before focusing on particular approaches using the more flexible heuristic optimization methods for portfolio optimization under a downside risk constraint, the following gives a short introduction to heuristic optimization in general. The literature provides several slightly different possibilities of defining and classifying heuristic optimization. The following orientates itself by Gilli and Winker (2009).58 In contrast to traditional optimization methods, heuristics do not aim at exact but at approximate solutions to the particular problem. Nevertheless, heuristic methods strictly exclude subjective procedures. Their solution accuracy should at least increase with the extent of the applied computational resources, which can, in its simplest form, mean further randomized reruns of the algorithm. In the best case, the difference between the approximation and the actual solution is infinitesimal. In order to achieve high quality approximations of the solutions, heuristics commonly apply stochastic procedures allowing them to efficiently tackle global optimization without the complete enumeration of all feasible solutions. Furthermore, the use of heuristics is rather flexible and barely requires the additional assumptions or adjustments often necessary in the context of traditional optimization methods in order to guarantee their applicability to particular problems. The two main categories of heuristic optimization algorithms are, on the one side, the greedy algorithms, also called constructive methods, and, on the other, the local search methods. The greedy methods investigate their current solution's neighborhood for modifications of the decision variables which result in the strongest change of the objective value. This change can represent a maximum increase or decrease depending on whether a maximization or a minimization is considered. The respective neighborhood solution then serves as the starting point for the follow-on search step in any case. The algorithm commonly ends after a particular number of iterations but can also use different stopping criteria. In contrast, local search methods accept neighborhood solutions for any improvements, not only maximum ones. Furthermore, they include rules enabling even the acceptance of worse neighborhood solutions to a certain extent. Their stopping criteria finally correspond to those applicable in the context of greedy methods. The local search methods' acceptance of minor improvements and even backward movements enables the algorithms to avoid getting stuck in the very next local optimum and to approach the global optimum instead. Similar to portfolio optimization under a VAR constraint, the optimization of VAR limit systems is also expected to require global optimization means. Therefore, the focus in the context of the current model is exclusively on heuristics from the category of local search methods.

58 See also Maringer (2005a), pp. 38-76, for an introduction to heuristic optimization.

Heuristics in the form of local search methods can be classified into trajectory methods, population based methods and hybrid meta-heuristics. The trajectory methods consist of one or more single search processes which start randomly throughout the search space. Each search process works its way through the solution space following a kind of trajectory independent of the other search processes. The best performing search process provides the final approximation to the actual solution. In contrast, population based methods consider a whole population of solutions at once. The methods develop the population across the single search steps by comparing the single solutions with each other and reusing or combining the best of them. The resulting solutions represent the starting population for the subsequent step of the population's development and search process respectively. In doing so, the modification of one single solution does not necessarily enable backward movements. The methods' avoidance of getting stuck in local extreme points can also originate completely from the joint development processes of the solution population. These development processes follow, among others, genetic evolutionary rules or rules derived from the observation of ant colonies. A third class of local search heuristics, the hybrid meta-heuristics, particularly addresses the combination of different local search methods, e.g. trajectory with population based methods. A more relaxed interpretation of this third class also includes hybrid methods combining heuristics with traditional methods and therefore e.g. gradient based and population based methods. The hybrid meta-heuristics are expected to fit wide ranges of different optimization problems without requiring significant methodical modifications.

3.6.2.2 Approaches on the basis of heuristic optimization methods

Maringer (2005b) uses a hybrid meta-heuristic in the form of the memetic algorithm (MA) for portfolio optimization under a VAR constraint.59 The underlying investigation addresses the impact of applying empirical instead of parametric loss distributions for portfolio optimization. Moscato (1989) and (1999) introduce MA as an approach where the interaction between the single solutions is driven by the competition between these solutions. This leads to a simplified selection process among the solutions compared to e.g. genetic evolutionary selection rules. The algorithm combines the idea of developing a population of solutions on the one side with trajectory methods on the other. Maringer (2005b) therefore uses the trajectory method of threshold accepting (TA). The characteristic of TA consists in the acceptance of worsened intermediate solutions according to a deterministic threshold value which gradually decreases to zero during the optimization process.60 Another example of an application of MA on the basis of TA is Maringer and Winker (2007), showing that optimizing bond portfolios under a VAR constraint can result in significant risk increases.

59 See for example Gilli and Winker (2009), p. 96, for an introduction to MA.
60 An alternative trajectory method is simulated annealing (SA), which replaces the deterministic threshold by a stochastic one. For an introduction to SA see e.g. Maringer (2005a), pp. 55-56, or Gilli and Winker (2009), pp. 88-89.

Hybrid approaches in the form of MA provide certain advantages compared to, for example, mere population based approaches. Completely population based methods lack an efficient search behavior during the very last steps of an optimization process. Perea et al. (2007) therefore compare a genetic with a hybrid memetic algorithm focusing on such issues. Also Maringer (2005a) discusses advantages and disadvantages of using MA.61 An advantage of MA compared to a mere trajectory based method is its joint population development enabling the keeping of already gained experiences. In contrast, within completely trajectory based methods the experiences of the independent search processes remain unused, which potentially causes redundant work. Compared to completely trajectory based methods, MA seems to be particularly appropriate if there are countless local extreme points, if there is no information concerning appropriate step sizes in order to efficiently move through the solution space and if the distance and the direction to the optimum are widely unknown. Finally, using MA at first seems to represent a very efficient way of optimization. However, its population based elements also show certain disadvantages. The interaction between the single solutions generates higher computational costs and increases the complexity compared to completely trajectory based methods. Furthermore, the solutions' interdependence can also guide the whole search process into a disadvantageous direction. The MA's interdependence of the intermediate solutions also complicates the application of parallel computing. For example, if the parallelization designates one servant62 per solution, each search step requires the communication between all the servants in order to compare and select the population's solutions. Besides the complexity of implementation this also increases the communication overhead, counteracting the speed benefits of the parallelization. In case of completely trajectory based methods, the single search processes do not interact at all. This maximizes the granularity of the computations and, at the same time, the parallel computing benefits. Admittedly, there is also the possibility to let each servant develop its own smaller population without applying inter-servant communication. However, this modification very much resembles the completely trajectory based methods. The benefit of the population development processes might become too small to justify their high complexity of implementation.

61 See Maringer (2005a), pp. 61-63, in context with a further introduction to MA.
62 Parallel computing requires the division of a program at least into a master and several servants. Basically, the master allocates jobs to and receives results from the servants while the servants perform the main part of the computational work in parallel.

Gilli et al. (2006) use the completely trajectory based method of threshold accepting (TA) in order to minimize the portfolio VAR. The technical and methodical demands of the risk minimizing approach finally correspond to those of maximizing the expected return of a portfolio under a downside risk constraint. The approach's applied optimization method therefore also appears promising for use in the context of optimal economic capital allocation. Gilli et al. (2006) note that Althöfer and Koschnick (1991) prove the convergence of the TA algorithm to the global optimum given an "appropriate threshold sequence" but do not provide a clear description of how to determine such a sequence. Gilli et al. (2006) close this gap by introducing a data driven determination of the thresholds. The resulting data dependent thresholds ensure an appropriate movement of the algorithm through the solution space. A more detailed description of the data driven determination of the thresholds is provided by Gilli and Winker (2009). The determination of the threshold sequence introduced by Gilli et al. (2006) enables a safe and convenient use of TA, whose proper parameterization can be challenging otherwise. A further investigation of the quality of this enhanced TA approach is presented by Gilli and Schumann (2011), proving that the differences between the algorithm's approximations and the actual solutions can be reduced to insignificantly small ones, at least insignificant for practical purposes.

The current model follows Gilli et al. (2006) by using the completely trajectory based method in the form of TA. The straight structure of TA enables a convenient preparation of the algorithm for the use of parallel computing. Due to the absence of population development processes there is no need to consider issues of inter-servant communication. Potential advantages in efficiency from using MA instead of TA are not relevant in the context of the current approach. The focus of the approach is not on the improvement of heuristic optimization itself. Therefore, the used algorithm only has to provide high quality solutions on the basis of acceptable runtimes, which Gilli et al. (2006) and Gilli and Schumann (2011) clearly confirm by their investigations. At least, they do so for the case of downside risk minimization. However, this can be expected to hold true also for the case of optimal economic capital allocation, despite several necessary modifications concerning the TA algorithm's implementation. Furthermore, the MA does not necessarily represent the superior optimization method. The implementation of MA possibly does not provide any advantages in the solution quality compared to a straight TA approach. This finally depends on the structure of the underlying program. Recent discussions of heuristic optimization, for example Gilli and Schumann (2010), do not clearly characterize MA as superior to TA or vice versa. For the moment, this rather suggests very similar quality levels of their solutions.
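To make the procedure concrete, the following sketch shows the generic structure of a TA search for a maximization problem. The objective function, the neighbourhood function and the threshold sequence are placeholders; the data driven thresholds would be obtained along the lines of Gilli et al. (2006), e.g. as decreasing empirical quantiles of observed objective changes between random neighbouring solutions. The sketch does not reproduce the exact implementation used later in the model.

```python
def threshold_accepting(objective, neighbour, x0, thresholds, steps_per_threshold=1000):
    """Generic threshold accepting loop for a maximization problem (sketch).

    objective  -- function to be maximized, e.g. the expected bank profit of a limit vector
    neighbour  -- returns a random solution close to the current one
    x0         -- feasible starting solution
    thresholds -- decreasing sequence of acceptable deteriorations, ending near zero
    """
    x_curr, f_curr = x0, objective(x0)
    x_best, f_best = x_curr, f_curr
    for tau in thresholds:                      # thresholds shrink towards zero over the run
        for _ in range(steps_per_threshold):
            x_new = neighbour(x_curr)
            f_new = objective(x_new)
            if f_new >= f_curr - tau:           # accept improvements and limited deteriorations
                x_curr, f_curr = x_new, f_new
                if f_curr > f_best:             # keep track of the best solution found so far
                    x_best, f_best = x_curr, f_curr
    return x_best, f_best
```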

4 BASIC MODEL OF OPTIMAL ECONOMIC CAPITAL ALLOCATION

4.1 Qualitative description of the model

In order to be able to investigate the optimal allocation of economic capital, the current model uses a model bank which is very similar to a trading department.63 The model bank consists of a central management in the form of a central planner and decentralized business units in the form of decision makers which finally represent stock traders. The stock traders merely stand for any other possible kind of organizational unit with its business operations. However, the use of stock traders allows for an illustrative and technically manageable design of the model. The central planner delegates risky business decisions to the traders in order to benefit from their private information. Their private information represents special knowledge and information advantages originating from the traders' closeness to the market. As a consequence, the central management, operating far away from the actual market activity, cannot develop this special knowledge by itself. The approach provides the model bank with a given amount of economic capital and a given amount of investment capital. The central planner has to allocate the given amount of economic capital among the traders, ensuring an allocation which is efficient in a portfolio theoretical sense. At the same time, the planner also has to consider the restricted amount of available investment capital. The allocation of economic capital manifests itself in the assignment of VAR limits to the traders. VAR represents the model's standard risk measure. Later investigations additionally compute ES in order to provide a further possibility to check the plausibility of the respective investigation's results. The traders can then use the investment capital to build investment positions autonomously according to their VAR limits. As a consequence, the VAR limits finally represent decision rights. The traders are free to build a long or a short position in order to be able to follow their private information. They only have to fully use their VAR limits without leaving parts of the limits unexploited.

63 See e.g. Burghof and Sinha (2005) and Beeck et al. (1999) for similar designs of the model bank in the context of investigations on limit systems and economic capital allocation in banking.

Each trader trades in exactly one particular stock of the S&P 500 by opening and closing a position at the beginning and the end of each trading period in any case. The position building process thereby depends on behavioral characteristics of the traders and the informational processes between them. These characteristics and informational processes comprise the traders' probabilities of success and their property of observing and potentially imitating the trading decisions of their colleagues while disregarding their private information. The model's basic version of the current chapter, however, exclusively addresses the characteristic of the decision makers of being differently successful and therefore the traders' heterogeneous levels of private information. The central planner has to anticipate the traders' behavior by his limit setting in order to induce the traders to compose bank portfolios which are as beneficial as possible according to risk and return aspects. As a consequence, the central planner requires detailed information concerning the correlations between the returns of the traders' investment positions. Nevertheless, the traders are free to choose between long and short positions, causing unstable correlations. As a consequence, the simulation of the trading activity is necessary to be able to appropriately face and anticipate the correlations' instability resulting from the autonomous decision making. While the model's basic version assumes the central planner to be able to perfectly simulate the model bank's trading activity by default, during the later investigations the central planner has to gather the input parameters of the trading simulation by Bayesian learning. The Bayesian learning uses the information whether a decision maker achieves a profit or a loss in order to draw inferences concerning the respective decision maker's probability of success. As a consequence, besides the need for the decentralization of decision making, there is also the need for the comprehensive centralization of information concerning the decision makers' characteristics. This centralization of information concerning the traders' characteristics finally enables the central planner to set the risk limits optimally in a portfolio theoretical sense. Within the model, the precision of this information can be varied by assuming different intensities of Bayesian learning. Finally, the present model represents a static approach in the form of a one period game. At the beginning of the period, the central planner optimizes the VAR limits on the basis of trading simulations. Subsequently, the planner assigns the limits to the traders, which then autonomously take their investment positions according to their behavioral characteristics. The period ends with the closing of the complete portfolio of the bank.
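The Bayesian learning step can be illustrated by a conjugate Beta-Bernoulli update of a trader's estimated probability of success. The sketch below is only an illustration of this kind of inference; the prior parameters and the reduction of an observation to a profit/loss indicator are assumptions and do not necessarily match the exact updating scheme used in the later investigations.

```python
def update_success_probability(prior_a, prior_b, outcomes):
    """Beta-Bernoulli sketch of the planner's learning about a trader's success probability.

    prior_a, prior_b -- parameters of the Beta prior on the probability of success p
    outcomes         -- observed results of past trading periods, 1 for profit, 0 for loss
    Returns the posterior parameters and the posterior mean as point estimator of p.
    """
    successes = sum(outcomes)
    a = prior_a + successes
    b = prior_b + len(outcomes) - successes
    return a, b, a / (a + b)

# Example: a flat prior and eight observed daily outcomes of one trader
a, b, p_hat = update_success_probability(1.0, 1.0, [1, 1, 0, 1, 1, 0, 1, 1])
```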


4.2 Determination of the underlying stochastic program

The stochastic program of the current model belongs to the class of static programs and describes a typical decision situation under uncertainty in the form of the maximization of an expected value.64 The uncertainty originates from the future stock returns and the traders' behavior and investment decisions respectively. The central planner of the model bank therefore has to find a set of VAR limits vl which anticipates the uncertainty represented by ξ optimally in order to maximize the bank's expected profit µbank. In doing so, the central planner has a certain idea of all the probability distributions the uncertainty finally originates from. This enables the central planner to simulate the future trading activity of the bank and to evaluate the advantageousness of particular VAR limit allocations vl. The following term denotes the objective function of the respective stochastic program:65

(4.1)  $\max_{\mathbf{vl} \in \mathbb{R}^n} \ \mu_{bank}(\mathbf{vl}) = \mathrm{E}[f(\mathbf{vl}, \boldsymbol{\xi})]$

Note that variables written in italics denote single values while variables written in upright bold denote vectors. Therefore, vl represents a vector of n VAR limits and decision variables respectively. Furthermore, ξ represents a random n-vector referring to the trading simulation. The program's constraints remain disregarded for the present. Despite the numerous possible future states the bank has to face concerning price movements and trading decisions, the modeling assumes a finite number of realizations of the random vector ξ. As a consequence, the mapping of the future is a finite set of trading scenarios ξ1, …, ξk, …, ξK, where each scenario ξk denotes one realization of ξ and therefore a vector of multivariate random data. This finite and discrete mapping of the future has to be applied for reasons of simplification and manageability of the stochastic program. The single trading scenarios occur with the probabilities pk. The discretization transforms the original problem into its deterministic equivalent, given by the summation of the scenarios' outcomes in the form of

(4.2)  $\mathrm{E}[f(\mathbf{vl}, \boldsymbol{\xi})] = \sum_{k=1}^{K} p_k \, f(\mathbf{vl}, \boldsymbol{\xi}_k)$

64 For an introduction to stochastic programming see e.g. Birge and Louveaux (1997), Marti (2005) and Shapiro et al. (2009).
65 The chosen notation for stochastic programming basically follows that of Shapiro (2009).

Though now provided in a discrete form, the stochastic program is still hardly manageable for very high values of K, the number of trading scenarios. In order to approach reality as closely as possible, K might be chosen almost infinitely high. A more efficient alternative method to approach reality is the use of a randomized set of m representative scenarios generated by Monte Carlo simulation and sampling techniques. This method is known as sample average approximation (SAA) and maps the future of the model bank by a reduced set of trading scenarios ξ1, …, ξi, …, ξm with m < K.66 The scenarios are independent and their random data originates from identical probability distributions (i.i.d.). On the basis of these i.i.d. vectors of random data, µbank can now be approximated by

(4.3)  $\mu_{bank}(\mathbf{vl}) = \frac{1}{m} \sum_{i=1}^{m} f(\mathbf{vl}, \boldsymbol{\xi}_i)$

66 See Shapiro (2009) for detailed information about the MC based SAA.

Finally, the SAA involves turning away from solving the real optimization problem in favor of an assessable approximation. For comprehensive stochastic programs this method represents the only possibility for identifying reasonable strategies in order to anticipate upcoming uncertain events. The following describes the complete stochastic program including the applied constraints.

(4.4)  $\max_{\mathbf{vl} \in \mathbb{R}^n} \ \mu_{bank}(\mathbf{vl}) = \frac{1}{m} \sum_{i=1}^{m} f(\mathbf{vl}, \boldsymbol{\xi}_i) = \frac{1}{m} \sum_{i=1}^{m} \sum_{j=1}^{n} vl_j \, \xi_{i,j}$

(4.5)  where $b_{bank,i} := \begin{cases} 1 & \text{if } pl_{bank,i} < -vl_{bank} \\ 0 & \text{else} \end{cases}$

(4.6)  and $b_{i,j} := \begin{cases} 1 & \text{if } pl_{i,j} < -vl_j \\ 0 & \text{else} \end{cases}$

subject to

(4.7)  $\frac{1}{m} \sum_{i=1}^{m} b_{bank,i} \leq \alpha$,

(4.8)  $\frac{1}{m} \sum_{i=1}^{m} b_{i,j} \leq \alpha$,

(4.9)  $\sum_{j=1}^{n} |w_{i,j}| \leq 1$

(4.10)  and $vl_{min} \leq vl_j \leq vl_{max}$.

As already mentioned, the objective function (4.4) represents the bank's expected profit µbank. The vector vl provides the decision variables in the form of VAR limits vlj. Within these limits the business units and traders respectively can take long and short positions autonomously. The bank's expected profit µbank results from the average of m (i = 1, …, m) Monte Carlo simulations of the bank's trading activity and the respective profits and losses

(4.11)  $pl_{bank,i} = \sum_{j=1}^{n} pl_{i,j}$

where pli,j denotes the profit and loss of business unit j and results from

(4.12)  $pl_{i,j} = vl_j \, \xi_{i,j}$.

Factor ξi,j represents the outcome of the trading simulation. The present chapter provides a detailed examination of the generation of this factor below. The binary variable (4.5) takes the value one if the loss plbank,i exceeds the bank's given and constant total VAR limit vlbank, i.e. if plbank,i < −vlbank. In all other cases the variable's value is 0. A further binary variable (4.6) refers to each business unit's VAR limit vlj. Both binary variables' outcomes define whether the side conditions (4.7) and (4.8) are kept. These conditions result from the confidence level β = 0.99 applied in the context of the VAR computation, where α = 1 − β = 0.01. Furthermore, the budget constraint (4.9) restricts the total investments of the bank to the given amount of investment capital cbank on the basis of the investment positions' weights wi,j. Since there are also short positions, the budget constraint uses the absolute values of the position weights. For reasons of simplification the model assumes short positions to require the same amount of investment capital as long positions. This corresponds to a very conservative safety rule ensuring a high coverage of the short positions. Finally, condition (4.10) restricts each business unit's VAR limit vlj to a particular interval to exclude the assignment of unrealistically low or high limits. For a closer look at the trading simulation factor ξi,j the following introduces further variables, terms and formulas.

(4.13)  $pl_{i,j} = vl_j \, \xi_{i,j} = w_{i,j} \, c_{bank} \, r_{i,j} \, \zeta_{bank}$

(4.14)  where $c_{i,j} = w_{i,j} \, c_{bank}$

(4.15)  and $c_{i,j} = \dfrac{vl_j}{r_{\alpha,i,j}}$.

(4.16)  (4.14) in (4.15): $w_{i,j} = \dfrac{vl_j}{r_{\alpha,i,j} \, c_{bank}}$.

(4.17)  (4.16) in (4.13): $pl_{i,j} = vl_j \, \dfrac{r_{i,j} \, \zeta_{bank}}{r_{\alpha,i,j}}$.

(4.18)  $\Rightarrow \ \xi_{i,j} = \dfrac{r_{i,j} \, \zeta_{bank}}{r_{\alpha,i,j}}$.

The first formula (4.13) represents a more illustrative alternative for the determination of each business unit's profit and loss. In this context, the already introduced variable wi,j represents the position weight of a business unit. Its multiplication by the bank's available investment capital cbank results in the respective unit j's investment position. The position, in combination with the simulated multivariate return ri,j of the unit's traded stock, leads to the respective profit and loss. The final multiplication by the importance sampling (IS) scaling factor ζbank is due to technical reasons. Further information on the use of IS in the form of scaling in the context of the current model is given in the later chapter on the out-of-sample backtesting and the references therein. Furthermore, (4.14) and (4.15) represent two alternative formulas for the investment positions ci,j of the business units. Formula (4.15) introduces the new variable rα,i,j, the conversion factor needed to change VAR limits into investment positions and vice versa. A detailed description of the factor is given in chapter 4.3.2. Plugging (4.14) into (4.15) provides the position weight wi,j, while plugging the weight's term (4.16) into (4.13) finally results in the more informative formula of the trading simulation factor (4.18). Therefore, the trading simulation factor ξi,j depends on the simulated multivariate return ri,j of the unit's traded stock, the IS scaling factor ζbank and the conversion factor rα,i,j. Besides the multivariate return, the conversion factor also stems from a simulation. While the return, however, stems from a geometric Brownian motion (GBM), the conversion factor in turn depends on the GBM return, involving further simulation processes. A complete disclosure of this dependence and the corresponding simulation processes is given in the subsequent chapters.
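The evaluation of the objective function and the side conditions for a given limit vector can be summarized in a few lines. The following sketch assumes that the m × n matrices of simulated trading factors ξi,j and position weights wi,j are already available from the trading simulation; variable names and structure are illustrative and do not reproduce the model's actual implementation.

```python
import numpy as np

def evaluate_limit_allocation(vl, xi, w, vl_bank, alpha=0.01):
    """Sample average evaluation of a VAR limit vector, cf. (4.4)-(4.12).

    vl      -- vector of n VAR limits (decision variables)
    xi      -- m x n matrix of simulated trading factors xi_{i,j}
    w       -- m x n matrix of simulated position weights w_{i,j}
    vl_bank -- constant total VAR limit of the bank
    Returns the SAA estimate of the expected bank profit and a feasibility flag.
    """
    vl = np.asarray(vl, dtype=float)
    pl = xi * vl                                   # profit and loss per scenario and unit, (4.12)
    pl_bank = pl.sum(axis=1)                       # bank-wide profit and loss per scenario, (4.11)
    mu_bank = pl_bank.mean()                       # objective value, (4.4)

    bank_ok = (pl_bank < -vl_bank).mean() <= alpha            # (4.5) with (4.7)
    units_ok = ((pl < -vl).mean(axis=0) <= alpha).all()       # (4.6) with (4.8)
    budget_ok = (np.abs(w).sum(axis=1) <= 1.0).all()          # budget constraint (4.9)

    return mu_bank, bank_ok and units_ok and budget_ok
```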

4.3 Valuation of the objective function on the basis of a trading simulation

4.3.1 Simulation of the stocks' returns

The central planner uses a trading simulation in order to evaluate the advantageousness of VAR limit allocations with regard to the expected profit of the bank. For this purpose, the planner first simulates the returns ri,j of the decision makers' traded stocks by a geometric Brownian motion (GBM). For reasons of simplification the model assumes the central planner to simulate the stock returns on the basis of the normal distribution. The implementation of the current model, however, also allows for the use of any other (asymmetric) distribution or even the direct use of historical returns not requiring any distribution assumption at all. The central planner estimates the parameters for the return distributions of the simulation on the basis of historical data.67 The parameters' estimation uses the returns of one year from the past, given by K = 252 daily returns. As a consequence, the following investigations constantly address the optimization of the economic capital allocation for one particular day, the next upcoming trading day in the model. This static character of the model makes the use of a time indicating variable t unnecessary, which is why the use of a corresponding variable remains disregarded in the following. In detail, the central planner uses the historical returns of the n stocks to estimate their means, standard deviations and correlations by the following formulas. Note that the variable r in this context denotes historical returns instead of simulated ones.

(4.19)  \mu_j = \frac{1}{K}\sum_{k=1}^{K} r_{j,k}  where K = 252 and j = 1, ..., n,

(4.20)  \sigma_j = \sqrt{\frac{1}{K-1}\sum_{k=1}^{K}\left(r_{j,k}-\mu_j\right)^2}

(4.21)  and  \rho_{i,j} = \frac{\sum_{k=1}^{K}\left(r_{i,k}-\mu_i\right)\left(r_{j,k}-\mu_j\right)}{\sigma_i\,\sigma_j}  where i = 1, ..., n.
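For illustration, a minimal numpy sketch of this estimation step could look as follows; the variable name hist_returns for the K × n matrix of historical daily returns is hypothetical and the placeholder data merely keeps the snippet self-contained.

```python
import numpy as np

# hist_returns: hypothetical K x n matrix of one year of daily returns (K = 252, n stocks)
rng = np.random.default_rng(0)
hist_returns = rng.normal(0.0, 0.01, size=(252, 5))   # placeholder data for the sketch

mu = hist_returns.mean(axis=0)                 # formula (4.19): one mean per stock
sigma = hist_returns.std(axis=0, ddof=1)       # formula (4.20): sample standard deviation (K - 1)
rho = np.corrcoef(hist_returns, rowvar=False)  # formula (4.21): n x n correlation matrix
```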

On the basis of the correlation matrix the central planner can now turn a vector of independently distributed normal random numbers ε of length n into one of multivariate distributed normal random numbers εm of the same length by the matrix-vector multiplication

(4.22)  εm = Lε.

Matrix L represents the lower triangular matrix of the correlation matrix Ρ. Therefore, Ρ has to be decomposed by the Cholesky decomposition according to68

(4.23)  Ρ = LL^{\mathsf{T}} = \begin{pmatrix} l_{11} & 0 & 0 \\ l_{21} & l_{22} & 0 \\ l_{31} & l_{32} & l_{33} \end{pmatrix} \begin{pmatrix} l_{11} & l_{21} & l_{31} \\ 0 & l_{22} & l_{32} \\ 0 & 0 & l_{33} \end{pmatrix}.

67 The model uses stocks from the S&P 500. See Appendix 1 for the stocks' list. The investigations' part of the present work provides more information on the exactly used historical returns.
68 See Bock and Krischer (2013) for a description of the Cholesky decomposition. See e.g. Jorion (2007), pp. 322-323 for an alternative description.

The following formulas describe the decomposition, which can exclusively handle positive definite matrices such as the correlation matrix of the current model.

(4.24)  For j = 1, ..., n and i = j + 1, ..., n do

(4.25)  l_{j,j} = \sqrt{\rho_{j,j} - \sum_{k=1}^{j-1} l_{j,k}^{2}}

(4.26)  and  l_{i,j} = \frac{1}{l_{j,j}}\left(\rho_{i,j} - \sum_{k=1}^{j-1} l_{i,k}\, l_{j,k}\right).

Formula (4.25) refers to the values on the diagonal of matrix L and formula (4.26) to the values of its lower triangle. In order to calculate each value of L, the calculations have to start at the upper left corner and then proceed row by row. After generating the vector εm of multivariate normal random numbers, the formula of the GBM can now be applied in order to generate the simulated returns ri,j of the stocks.

(4.27)  r_{i,j} = \mu_j \Delta t + \sigma_j\, \varepsilon_{m,i,j} \sqrt{\Delta t}

(4.28)  or  r_{i,j} = \sigma_j\, \varepsilon_{m,i,j} \sqrt{\Delta t}

(4.29)  where i = 1, ..., m and j = 1, ..., n.

Formula (4.27) additionally considers the n stocks' expected returns µj. Since the model applies daily returns, these expectations are very small and therefore insignificant. Consequently, the current model simply uses formula (4.28). Furthermore, the time increment ∆t here corresponds to 1 since the means, standard deviations and correlations of the stocks' returns already refer to the trading period of one day. This finally reduces the present model's formula of the GBM to

(4.30)  r_{i,j} = \sigma_j\, \varepsilon_{m,i,j},

where i indexes the simulation iteration. Therefore, the central planner also has to draw m different vectors of independently distributed normal random numbers εi in order to be able to generate the corresponding number of multivariate vectors εm,i.
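A compact numpy sketch of this return simulation, under the stated assumptions, might read as follows. The inputs sigma and rho are small placeholder values so that the snippet is self-contained; np.linalg.cholesky returns the lower triangular factor of (4.23), and the reduced GBM formula (4.30) is applied because ∆t = 1 and the drift is neglected.

```python
import numpy as np

def simulate_returns(sigma, rho, m, rng):
    """Simulate m multivariate daily stock returns in the spirit of (4.22), (4.23) and (4.30)."""
    n = len(sigma)
    L = np.linalg.cholesky(rho)        # lower triangular matrix of the correlation matrix (4.23)
    eps = rng.standard_normal((m, n))  # independent standard normal draws
    eps_m = eps @ L.T                  # multivariate normals, eps_m = L * eps per iteration (4.22)
    return sigma * eps_m               # reduced GBM formula (4.30): r_ij = sigma_j * eps_m_ij

rng = np.random.default_rng(1)
sigma = np.array([0.010, 0.015, 0.020])
rho = np.array([[1.0, 0.3, 0.1],
                [0.3, 1.0, 0.2],
                [0.1, 0.2, 1.0]])
sim_r = simulate_returns(sigma, rho, m=20_000, rng=rng)  # m x n matrix of simulated returns
```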

4.3.2 Simulation of the business units' profits and losses

The previous simulation of the stocks' returns represents the starting point for the simulation of the position building by the traders. The position building by the traders finally results in the determination of the conversion factors rα,i,j. The current chapter addresses these conversion factors' exact composition.

The traders represent privately informed decision makers and therefore correctly anticipate the simulated stock returns with certain probabilities of success of p ≥ 0.5. These probabilities and private information respectively can also be interpreted as different levels of skills of the traders. These skills decide whether a trader builds a long or a short position. Figure 4.1 illustrates the structure of the trading simulation leading to one single trader's profit or loss.


Figure 4.1: Structure of the simulation of the business units' profits and losses

With probability pu = 0.5 the simulated stock return ri,j at the top of the tree structure represents an upward and with probability 1 − pu = 0.5 a downward stock price movement. Trader j anticipates the price movement according to his skill level pj by building a long or short position. The extent of the position depends on the trader's assigned VAR limit. In order to determine the position in detail, the trader calculates the conversion factors for long and short positions

(4.31)  r_{\alpha,i,j} = -F^{-1}_{\mathrm{sim}\,r,j}(\alpha + 1/m)\,\zeta_{units}

(4.32)  and  r_{\alpha,i,j} = F^{-1}_{\mathrm{sim}\,r,j}(\beta)\,\zeta_{units}

converting the respective VAR limit into the investment position. The model assumes that the central planner provides the traders with the empirical cumulative distribution functions (CDF) of the stocks' simulated returns. The traders finally determine the conversion factors rα,i,j on the basis of the inverse functions F^{-1}_{sim r,j} of these empirical CDFs. The factors correspond to the lowest or highest return just within the applied confidence level β where β = 1 − α. Depending on whether the position is long or short, the confidence level shifts and the required return stems from the left or right tail of the empirical CDF. The term α + 1/m from formula (4.31) indicates the corresponding quantile at the left tail in case of a long position and the term β from formula (4.32) that at the right tail in case of a short position.69 The consideration of both tails of the CDF particularly enables the unproblematic use of asymmetric and therefore skewed distributions. In a final step, the determination of the conversion factors rα,i,j requires the multiplication by the importance sampling factor ζunits. The later chapter 4.4 addressing the out-of-sample backtesting provides a detailed description of importance sampling issues. After converting their individual VAR limits into investment positions by

(4.33)  c_{i,j} = \frac{vl_j}{r_{\alpha,i,j}},

the traders' profits and losses simply result from

(4.34)  pl_{i,j} = c_{i,j}\, r_{i,j}\, \zeta_{bank}.

Therefore, the computation of the profits and losses requires another importance sampling factor ζbank, which chapter 4.4 describes in detail. Besides the simulated empirical CDFs the central planner also provides the traders with the importance sampling factors.
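The following sketch illustrates the conversion of one unit's VAR limit into positions and simulated profits and losses in the spirit of formulas (4.31) to (4.34). All names are illustrative, the quantile index arithmetic is simplified, and the sign convention for the short branch is an assumption of this sketch rather than the model's exact specification.

```python
import numpy as np

def unit_pnl(sim_r_j, vl_j, p_j, alpha, zeta_units, zeta_bank, rng):
    """Sketch: VAR limit -> positions -> P&L for one business unit (cf. (4.31)-(4.34))."""
    m = len(sim_r_j)
    inv_cdf = np.sort(sim_r_j)                               # empirical inverse CDF as a sorted array
    q_left = inv_cdf[int(round((alpha + 1.0 / m) * m)) - 1]  # return just inside the left tail
    q_right = inv_cdf[int(round((1.0 - alpha) * m)) - 1]     # return just inside the right tail
    r_alpha_long = -q_left * zeta_units                      # conversion factor, long position (4.31)
    r_alpha_short = q_right * zeta_units                     # conversion factor, short position (4.32)
    up = sim_r_j > 0                                         # direction of the simulated price move
    correct = rng.random(m) < p_j                            # anticipation with success probability p_j
    go_long = np.where(correct, up, ~up)                     # skill decides long vs. short
    c = np.where(go_long, vl_j / r_alpha_long,               # investment position from the VAR limit (4.33)
                 -vl_j / r_alpha_short)                      # short modelled as a negative position (assumption)
    return c * sim_r_j * zeta_bank                           # simulated profit and loss (4.34)
```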

4.3.3 Simulation of the heterogeneous prospects of success of the business units

For the distribution of the private information p within the model world, the model uses a particular probability density function (PDF) based on the beta distribution.70 This PDF refers to the interval [0.5, 1] and uses the shape parameters α = 1 and β = 9. Insertion into the general beta PDF yields

69 Variable m determines the sample size of the simulated returns. The additional term 1/m makes sure that the respective return lies just within the confidence level, since the mere use of the α-quantile would consult the next smaller return outside of it. To be precise, actually the middle of these two returns would have to be chosen. This last consideration also applies to the case of a short position. However, the current model disregards this fact for reasons of simplification.
70 See e.g. Johnson, Kotz and Balakrishnan (1995), pp. 210-211 for a description of the generalized beta density function.

(4.35)  f(p) = 4608\,(1 - p)^{8}.71


Figure 4.2: Distribution of the probabilities of success p on the basis of a beta PDF with α = 1, β = 9 and on the interval [0.5, 1]

The characteristics of the distribution of skill levels are

(4.36)  p ~ B(1, 9) on [0.5, 1] with µ = 0.55, σ = 0.045 and x_{mod} = 0.5.

Probabilities of success below 0.5 remain disregarded since this would imply traders taking wrong positions deliberately. Within the basic model the worst imaginable trader represents an uninformed one provided with p = 0.5 just taking positions randomly. The exact shape of the distribution is less important as long as the distribution meets the demand of being fundamentally plausible and therefore exhibits an average skill level close to p = 0.5. Each decision maker j of the model bank can now be determined by a draw pj ~ B(1, 9) from the interval [0.5, 1] on the basis of the cumulative beta distribution function

(4.37)  F(p) = 1 - (2 - 2p_j)^{9}.72

To be precise, the generation of random beta distributed probabilities of success requires the inverse function according to

(4.38)  p_j = 1 - 0.5\,(1 - U_j)^{1/9}  where j = 1, ..., n and U_j ~ U(0, 1).73

Appendix 5 lists the exemplary outcomes of 200 draws representing the actual skill levels pj of n = 200 traders.
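A short sketch of this inverse-transform sampling, written for illustration only (the seed and the number of traders are arbitrary), could look as follows.

```python
import numpy as np

def draw_skill_levels(n, rng):
    """Draw n success probabilities p_j on [0.5, 1] via the inverse CDF (4.38)."""
    u = rng.random(n)                                # U_j ~ U(0, 1)
    return 1.0 - 0.5 * (1.0 - u) ** (1.0 / 9.0)      # formula (4.38)

rng = np.random.default_rng(42)
p = draw_skill_levels(200, rng)                      # e.g. 200 traders, mean close to 0.55
```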

71 See Appendix 2 for the transformation.
72 See Johnson, Kotz and Balakrishnan (1995), p. 211 for a description of the cumulative beta distribution function. Appendix 3 describes the customization of the function for the model's purposes.
73 See Appendix 4 for the transformation.


4.4 Out-of-sample backtesting and the role of importance sampling

The out-of-sample backtesting verifies whether the generated VAR limit allocations indeed satisfy the constraints and meet the demand of being universal. Thereto, the model replaces the so far used trading scenarios ξ1, ..., ξi, ..., ξm by drawing new ones for the out-of-sample backtesting, corresponding to \hat{ξ}_1, ..., \hat{ξ}_i, ..., \hat{ξ}_m. Hence, the expression "out-of-sample" here denotes the situation of using a different sample compared to the VAR limits' optimization. The expression "out-of-sample" in the current model does not mean testing optimal solutions on the basis of real data. The model's backtesting uses the same assumptions concerning the stock returns' distributions and correlations and also concerning the traders' skill levels as already applied during the optimization. The backtesting just uses a different random number sequence yielding different scenarios. In the following, the use of the term "in-sample" consequently indicates any processes previous to the out-of-sample backtesting. A proper out-of-sample performance of the current model's VAR limit allocations, however, requires the application of importance sampling (IS).74 IS represents techniques to keep the number of the required MC simulation iterations on a manageable level. Therefore, the use of IS is exclusively necessary for technical reasons and has no further meaning for the model itself. The model's most important IS factor

(4.39)  ζbank = 1.11

refers to the profit and loss simulation of the whole model bank during the in-sample computations. Characteristic of the model bank's profit and loss distribution are very rare high losses and profits lying far in the tails of the distribution. In the present context of risk assessment, however, the focus is on the high loss events. The particular rareness of these high losses is due to the profit and loss distribution resulting from the interaction of two simulations: the simulation of the stock returns and the simulation of the trading behavior of the autonomous decision makers. Hence, the high loss events only occur in situations where the returns and the position taking by the traders are both very disadvantageous. Experiencing both circumstances at once is much rarer than experiencing only one of them.

74 See Srinivasan (2002), pp. 10-16 for a description of IS in the form of scaling, representing the IS method used in context with the current model.

As a consequence, m = 20,000 MC simulation iterations during the in-sample computations yield an insufficiently low frequency of the high loss events. For m = 20,000, the satisfaction of the risk constraints during the out-of-sample backtesting cannot be guaranteed. Therefore, the model uses IS in the form of scaling, representing the multiplication of the model bank's in-sample profit and loss distribution by a scaling factor ζbank > 1. The following IS example gives an impression of the scaling's impact.


Figure 4.3: Exemplary illustration of importance sampling (IS) in the form of scaling on the basis of a histogram (solid line with, dashed line without IS)

The scaling causes the reallocation of probability mass from the center of the histogram in Figure 4.3 to its tails, making the rare events more frequent. The solid line indicates the distribution under use of IS, while the dashed line represents the original distribution without IS. For the identification of the exactly required scaling factor the current model applies a trial and error search process. During this search process the model assumes a standard skill level among the decision makers according to the lowest possible probability of success of the model of p = 0.5. This procedure aims at identifying a factor guaranteeing that the model bank satisfies its total VAR limit and applied confidence level respectively for a large range of different potential model bank configurations. In order to calculate the business units' maximum investment positions from the units' VAR limits the model uses another IS scaling factor

(4.40)  ζunits = 1.025.

Basically, the model converts the VAR limits properly into investment positions by the conversion factors rα,i,j. However, with m = 20,000 MC simulation iterations, the positions regularly cause slightly more violations of the business units' individual VAR limits during the backtesting than actually allowed. The determination of backtesting-proof factors would require higher MC simulation rates concerning the stock returns. The use of IS in the form of scaling again represents an alternative to increasing the simulation rate. Remember that the conversion factors rα,i,j mainly consist of a particular return picked from the inverse functions F^{-1}_{sim r,j} of the empirical CDFs of the simulated stock returns. For the implementation of IS the picked returns are then multiplied by a scaling factor > 1, also stemming from a trial and error search process. The two IS factors are linked by a certain interdependence. Factor ζunits influences the volume of the traders' investment positions in the form of a slight reduction of these positions. In consequence, this factor already causes a slight reduction of the model bank's losses and therefore a slightly improved satisfaction of the bank's VAR constraint during the backtesting. Factor ζbank can therefore be chosen correspondingly lower. Hence, determining both factors efficiently requires determining factor ζunits prior to factor ζbank. In contrast, the reverse order of determination yields an inefficiently high factor ζbank. Factor ζunits, however, remains unaffected by the exact order anyway. Besides their different purposes and values, a further important difference between the two IS factors concerns their use during the in- and out-of-sample computations. While the model uses factor ζbank exclusively during the in-sample computations, it uses factor ζunits during the in- and out-of-sample computations. This is due to the need for ζunits whenever the model performs VAR limit conversions, independently of whether the conversions take place during the in- or out-of-sample computations.
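As a small illustration of what the backtesting has to verify, the following sketch counts VAR violations for the bank and for the individual units; the input names (pl_bank, pl_units, vl_units) are hypothetical placeholders for the model's simulated quantities, not the book's implementation.

```python
import numpy as np

def var_violation_rates(pl_bank, vl_bank, pl_units, vl_units):
    """Sketch: fraction of scenarios violating the bank's and the units' VAR limits.

    pl_bank: length-m vector of bank-wide P&L; pl_units: m x n matrix of unit P&L;
    vl_units: length-n vector of the units' VAR limits. All names are illustrative."""
    bank_rate = np.mean(pl_bank < -vl_bank)            # share of scenarios breaching the bank limit
    unit_rates = np.mean(pl_units < -vl_units, axis=0) # per-unit breach frequencies
    return bank_rate, unit_rates

# In a trial-and-error search for the scaling factors, a candidate would be kept once
# bank_rate stays at or below the applied confidence level alpha out-of-sample.
```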


5 HEURISTIC OPTIMIZATION OF RISK LIMIT SYSTEMS BY THRESHOLD ACCEPTING

5.1 Visual proof of non-convexity by an exemplary model case

The previous chapter exclusively focuses on the valuation of the objective function during the in- and out-of-sample case. In contrast, the current chapter addresses the actual process of optimization. Thereto, in a first step, the following examines the model's optimization problem in order to categorize it among the different classes of optimization problems. Subsequently, the current chapter provides a visual proof confirming that the underlying optimization problem of the model is non-convex. The close relation of the present model to the common case of portfolio optimization at first suggests the current optimization problem to correspond to quadratic programming (QP) or at least to a convex problem.75 However, the model's integer constraints stemming from the use of a downside risk measure in the form of VAR cause the problem to belong to the category of mixed integer non-linear programs (MINLP). Nevertheless, this categorization still does not finally answer the question concerning the actual hardness of the problem. The relevant literature additionally distinguishes between convex and non-convex MINLPs.76 While the literature characterizes convex MINLPs as difficult, it considers non-convex ones as the hardest problems of all. Meanwhile, the general case of convex programming represents a well explored field providing effective solving algorithms in the form of interior-point methods.77 78 These methods solve smooth convex problems similarly reliably as linear problems can be solved. A problem can be considered smooth as long as the objective function is continuously differentiable. Such problems can even include hundreds of variables and thousands of constraints while the solving can be done on a desktop computer. Also the case of smooth convex MINLPs benefits from these recent achievements concerning interior-point methods.

75 See e.g. Fylstra (2005), pp. 18-20 for a quick introduction to QP, convex and non-convex problems in context with finance.
76 See e.g. Liberti et al. (2009), pp. 231-233 for an introduction to MINLP.
77 See e.g. Fylstra (2005), p. 20 for a quick introduction to interior-point methods or Boyd and Vandenberghe (2009), pp. 561-620 for a detailed examination.
78 See e.g. Boyd and Vandenberghe (2009), pp. 7-8 introducing convex optimization before comprehensively examining the field of convex optimization in depth.

In contrast, non-convex problems fall into the category of global optimization and their complexity can grow exponentially with the number of the used decision variables and constraints. This finally holds true for smooth as well as for non-smooth problems of this category. As a result, the corresponding optimization runtimes commonly exceed those of convex problems by far.79 By additionally including integer constraints these problems also belong to the class of MINLPs, which usually provides further increases in hardness and complexity. Hence, in order to provide an appropriate solving method, it is most important to clearly determine whether the present problem features convex or non-convex properties. Nevertheless, also the property of smoothness considerably influences the choice of the method. Due to the present model's discreteness and the scenario based stochastic programming, the objective function cannot be differentiated, which clearly identifies the present objective function as a non-smooth function. Consequently, the efficient and reliable interior-point methods or any other gradient based methods represent inappropriate approaches, no matter whether the optimal allocation of economic capital features convexity or non-convexity. Nevertheless, the fact that the objective function is non-smooth still does not answer the question whether it is a convex or non-convex one. The solving of non-smooth convex problems requires much simpler optimization heuristics in the form of greedy algorithms, while non-smooth non-convex problems demand more complex heuristics. In case of a convex problem, the objective function and the constraints are all convex or linear functions.80 Linear functions just represent a specific kind of convex function. The current problem includes the integer constraints (4.7) and (4.8), the budget constraint (4.9) and the constraints on the limit sizes (4.10). None of these constraints necessarily turns the problem into a non-convex one. Consequently, within the present model particularly the objective function (4.4) potentially induces non-convexity. The current model's use of the downside risk measure VAR in combination with the business units' non-elliptical return distributions suggests the current optimization problem to be non-convex.81 Nonetheless, the model's optimization problem should undergo a qualified test in the form of a visual proof.82

79 See e.g. Boyd and Vandenberghe (2009), p. 10 for an introduction to global optimization.
80 See e.g. Fylstra (2005), pp. 19-20 for remarks on the identification of convex problems.
81 While the model's stock returns are elliptical, the influences of informational processes and behavioral characteristics concerning the decision makers transform these returns into non-elliptical, skewed distributed returns of the business units.
82 See e.g. Gilli et al. (2006), p. 7 for another example of visually proving the non-convexity of optimization problems.

The visual proof heavily reduces the number of decision variables and VAR limits respectively to three limits. Two of these limits, vl1 and vl2, represent the x- and y-axis and build a 142x120 grid, increasing their limit quantities stepwise by 1.57. Furthermore, the approach adjusts the quantity of the third limit vl3 until all three limits together induce the bank's maximum expected profit µbank(vl), displayed by the z-axis. In order to induce the maximum µbank(vl), the third limit vl3 has to establish the respective limit allocation's maximum use of cbank and ec.83 During the visual proof the business units exhibit the arbitrary skill levels of

(5.1)  p1 ≈ 0.544, p2 ≈ 0.535 and p3 ≈ 0.555.

Note that the present example assumes the central planner to know the traders' skill levels, wherefore the Bayesian learning remains disregarded. The trading simulation uses stock returns from the GBM, which in turn uses historical data from the S&P index for parameterization, where the exact parameterization data is irrelevant. The model bank uses a daily amount of economic capital and VAR limit respectively of

(5.2)  ec = vlbank = 1k,

representing one of three constraints. Furthermore, the bank assigns ec to the traders according to the VAR limit allocation

(5.3)  vl = (vl1, vl2, vl3).

Each single limit vlj represents the VAR constraint of one business unit j, where these limits at the same time represent the decision variables. The third constraint refers to the model bank's available investment capital

(5.4)  cbank = 150k.

Constraints concerning the allowed VAR limit sizes according to (4.10) remain disregarded. The general formulas (4.4) to (4.9) provide more detailed information on the implementation of the current model case.

83 Thereto, the visual proof uses Algorithm 5.3 from chapter 5.3. In context with the visual proof, however, the first line of the algorithm just computes vl3 via vl3 = ec − vl1 − vl2 instead of vl via formula (5.12), while the proof's grid structure predetermines the limits vl1 and vl2. Consequently, also the term vl = φ · vlinit in lines 8, 9, 20 and 21 changes into vl3 = φ · vl3init.


[Surface plot of µbank(vl) over vl1 and vl2 with maximum µ*bank(vl) = 55.4 at vl1 = 546.1, vl2 = 393.4 and vl3 = 615.9]

Figure 5.1: Extract from an exemplary solution space surface

Under these conditions, assigning

(5.5)  vl1 ≈ 546.1, vl2 ≈ 393.4 and vl3 ≈ 615.9

to the units induces the bank's maximum expected profit of

(5.6)  µ*bank(vl) ≈ 55.4.

The surface of the resulting solution space exhibits many local extreme points. For finer grid structures, the shape of the surface will change slightly but basically keep its many peaks and valleys. In this small example, the differences between the peaks concerning their values for µbank appear rather weak. With increased numbers of decision variables, however, these differences can be expected to become more distinct. The diagram in Figure 5.1 clearly proves the non-convexity of the current optimization problem and consequently confirms the demand for global optimization means for an appropriate solving of the problem. This information is decisive for the choice of the solving algorithm. So far, there was only the knowledge that traditional gradient based methods drop out because of the objective function's non-smoothness. Consequently, a heuristic optimization method has to be applied.84

The visual proof reveals, however, that the simplest heuristic optimization method in the form of a greedy search oriented towards the steepest ascent of the objective value will be insufficient in order to approach the very best solution. For global optimization a single greedy search process will inevitably get stuck in one of the countless local extreme points of the solution space surface, while the actual global maximum will very likely remain unidentified. In order to overcome this shortcoming the search algorithm has to be modified to what the literature calls a local search algorithm.85 On the one hand, the algorithm should therefore provide the capability of escaping from local extreme points in order to move on approaching the actual global maximum. On the other hand, it should feature certain stochastic restart strategies enabling an appropriate coverage of the solution space and increasing the probability of finding high quality approximations of the global maximum. The subsequent chapters introduce a corresponding algorithm used for the solving of the current model's underlying global optimization problem.

5.2 Basic algorithm of threshold accepting

In order to solve the underlying global optimization problem, the current model uses the method of threshold accepting (TA).86 The TA method belongs to the class of local search methods and within this class it belongs to the trajectory methods. Chapter 3.6.2 provides an overview concerning the different categories of heuristic optimization methods. Furthermore, the chapter discusses advantages of trajectory methods compared to population based methods and vice versa on the basis of the corresponding recent literature. The current and following chapters confine themselves to the detailed description of the TA-algorithm and its implementation. Algorithm 5.1 describes the TA method on the basis of pseudo code.87

84 See e.g. Gilli and Schumann (2010), Gilli and Winker (2009) and Maringer (2005) for issues on heuristic optimization in finance in general.
85 See Gilli and Winker (2009), pp. 87-88 for the applied categorization of heuristic optimization methods during the current work.
86 The current implementation of TA orients itself by the work of Gilli et al. (2006).
87 The TA pseudo code follows the design and notation used by Gilli et al. (2006) and Gilli and Winker (2009).


1: parameterize nrestarts, nrounds, nsteps, pτ and trinit
2: for i = 1 to nrestarts do
3:     determine start solution vlc (algorithm 5.2 or 5.3)
4:     for j = 1 to nrounds do
5:         for k = 1 to nsteps do
6:             generate vln ∈ N(vlc) (algorithm 5.5)
7:             compute ∆ = µbank(vln) − µbank(vlc)
8:             if ∆ > τj then vlc = vln
9:         end for
10:    end for
11:    µbank,i = µbank(vlc) and vli = vlc
12: end for
13: vl* = vli with i | µbank,i = max{µbank,1, …, µbank,nrestarts}

Algorithm 5.1: Threshold accepting

First of all, the proper use of the algorithm requires the setting of several parameters whose exact values are more or less problem dependent. Line 1 of Algorithm 5.1 lists all the parameters to be addressed. In this context, nrestarts denotes the number of times the algorithm restarts with different start solutions. Chapter 5.3 addresses issues on the determination of such start solutions in depth. Parameter nrounds decides how many different thresholds the algorithm applies during one single restart. Furthermore, nsteps determines the number of optimization steps during the use of one particular threshold and round respectively. As a result, one restart of the algorithm uses nrounds · nsteps = niters iterations altogether. The parameterization also includes the initialization values pτ and trinit concerning the threshold sequence τ and the sequence of the transfer values tr. In the current case of maximization, τ consists of a sequence of negative values representing the maximum allowed decreases of the objective function across the different rounds and iterations of the optimization process. The sequence of transfer values tr represents an extension of the TA pseudo codes referred to from the literature. Vector tr contains the values which the algorithm transfers in each optimization iteration from one randomly chosen decision variable to another. By this procedure the algorithm finally "walks" through the solution space, identifying different potential solutions to the problem. Consequently, the number of transfer values is given by niters. In context with the present model of optimal economic capital allocation, the parameterization of these transfer values turned out to be important. Chapter 5.4.2 and chapter 6.2.3 address these issues in detail. The applications of TA referred to from the literature do not provide particular implementations and parameterization procedures concerning such transfer values. The transfer values tr play an important role in context with the neighborhood function N(vlc) from line 6 of Algorithm 5.1. This function finally determines

the mentioned procedure of how the algorithm identifies a new feasible solution vln starting from the current solution vlc. If the difference between a new and a current solution

(5.7)  ∆ = µbank(vln) − µbank(vlc)

exceeds the actual threshold according to

(5.8)  ∆ > τj,

the new solution replaces the current one according to

(5.9)  vlc = vln,

representing the starting point for the subsequent optimization iteration, displayed by line 8 of Algorithm 5.1. TA produces as many alternative solutions to a problem as the algorithm executes restarts. At the end of each restart the TA stores the respective solution value µbank,i and the respective values of the decision variables vli, denoted by line 11, in the form of

(5.10)  µbank,i = µbank(vlc) and vli = vlc.

After the execution of the last restart, line 13 describes the algorithm identifying the overall solution vl* by the solution value µbank,i representing the maximum value among all the stored values according to

(5.11)  vl* = vli with i | µbank,i = max{µbank,1, …, µbank,nrestarts}.

After this fundamental description of the TA-algorithm, the subsequent chapters address particular parts of the algorithm in detail. In a first step, the next chapter addresses the current model’s determination of start solutions for each of the algorithm’s restarts.
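For orientation, a compact Python transliteration of the TA skeleton of Algorithm 5.1 might look as follows. The callables start_solution, neighbour and objective stand in for Algorithms 5.2/5.3, 5.5 and the trading simulation; thresholds is the (negative) threshold sequence. A real implementation would use local updating of the objective instead of full re-evaluation.

```python
import numpy as np

def threshold_accepting(start_solution, neighbour, objective, thresholds,
                        n_restarts, n_steps):
    """Sketch of the TA loop in Algorithm 5.1; all callables are placeholders."""
    best_vl, best_val = None, -np.inf
    for _ in range(n_restarts):
        vl_c = start_solution()
        val_c = objective(vl_c)
        for tau in thresholds:                  # one round per threshold value
            for _ in range(n_steps):
                vl_n = neighbour(vl_c)
                val_n = objective(vl_n)         # in practice: local updating, not full re-evaluation
                if val_n - val_c > tau:         # accept improvements and mild deteriorations
                    vl_c, val_c = vl_n, val_n
        if val_c > best_val:                    # keep the best restart (line 13)
            best_vl, best_val = vl_c, val_c
    return best_vl, best_val
```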

5.3 Determination of start solutions

The current model ensures the proper implementation of the optimization procedures by at first calibrating and parameterizing the TA-algorithm on the basis of rather basic random start solutions. If the algorithm optimizes successfully under these conditions, later improvements on the start solutions in context with particular model cases potentially yield further increases in the solution quality of the TA-algorithm. The following describes the generation of the basic random start solutions in two steps. The first step confines itself to describing the generation of feasible random start solutions on the basis of Algorithm 5.2, while the second step considers the extension of this procedure by additionally establishing maximum resource usage on the basis of Algorithm 5.3.

The basic determination method does not follow a particular strategy. Instead, this method generates random start solutions. This ensures achieving an appropriate coverage of the solution space by searching on the basis of a certain number of restarts. The basic method therefore determines the start solution vl for the case of n business units according to

(5.12)  vl = (vl_1, ..., vl_j, ..., vl_n)  where  vl_j = ec \cdot \frac{u_j}{\sum_{j=1}^{n} u_j}  and  u_j ~ U[0, 1].

The formula simply uses the available economic capital ec of the model bank in combination with uniform random numbers uj in order to define the decision variables vlj, which at the same time represent the business units' VAR limits. Despite the conservative approach of exclusively using ec without considering any beneficial diversification effects, the resulting start solutions can nevertheless violate the constraints of the model. Therefore, the starting solution vl undergoes a check for potential constraint violations with the help of Algorithm 5.2 in order to exclude unfeasible solutions.

1: compute vl (formula 5.12)
2: while 1 = 1 do
3:     for i = 1 to m do
4:         compute cbank,i
5:     end for
6:     if max cbank,i ≤ cbank then leave loop
7:     else if max cbank,i > cbank then 0.5 · vl
8: end while
9: while 1 = 1 do
10:    count = 0
11:    for i = 1 to m do
12:        compute plbank,i
13:        if plbank,i < -vlbank then count + 1
14:    end for
15:    if count ≤ m · α then leave loop
16:    else if count > m · α then 0.5 · vl
17: end while

Algorithm 5.2: Computation of a random and feasible starting solution vl

Lines 2 and 9 initialize infinite while-loops by the equation 1 = 1. The first while-loop checks whether the solution vl satisfies the budget constraint (4.9). Thereto, the first for-loop (lines 3 to 5) computes the m simulation iterations' actually invested capital cbank,i under the current start solution vl. If the maximum occurring value for cbank,i does not exceed the bank's actually available investment capital cbank,

the algorithm leaves the first while-loop by line 6 and moves on to the second while-loop. Otherwise, the algorithm halves the decision variables by multiplying each VAR limit in vector vl by the factor 0.5 according to line 7. This rather rough adjustment aims at establishing the feasibility of the start solution at minimum effort. The while-loop keeps reducing vl until the start solution becomes feasible. Subsequently, the second while-loop checks and establishes the satisfaction of the VAR constraint (4.7). Thereto, line 12 computes the m simulation iterations' profits and losses plbank,i of the model bank. Line 13 counts the number of losses falling below the bank's total VAR limit vlbank by the variable count. If count does not exceed the feasible number of violations, resulting from the multiplication of the number of simulation iterations with the current confidence level α, the algorithm leaves the second while-loop via line 15 and ends. Otherwise, similar to the budget constraint, the loop reduces vl according to line 16 until the solution also satisfies the VAR constraint. Algorithm 5.2 does not provide an explicit monitoring concerning constraint (4.8), the economic capital limits of the single business units. This is due to the use of the conversion factors according to the terms (4.31) and (4.32), which implicitly establish the respective constraint's satisfaction by default. Since Algorithm 5.2 exclusively uses adjustments in the form of reductions, its mechanism sooner or later inevitably induces the respective start solution's satisfaction of the model's constraints. The exact order of the two while-loops is actually irrelevant. So far, the random start solutions exclusively provide the property of being feasible. However, a considerable improvement of the start solutions would be their maximum usage of the available resources. Therefore, the following extension of Algorithm 5.2 additionally scales the random start solution in order to establish its maximum use of the model bank's available investment capital and economic capital.

1: compute vl (formula 5.12)
2: initialize vlinit = vl, φ = 1, φmin = 0 and φmax = 2, iteration = 0
3: while 1 = 1 do
4:     iteration + 1
5:     for i = 1 to m do
6:         compute cbank,i
7:     end for
8:     if max cbank,i < cbank then increase φ (algo. 5.4), vl = φ · vlinit
9:     else if max cbank,i > cbank then decrease φ (algo. 5.4), vl = φ · vlinit
10:    else if max cbank,i corresponds to cbank then leave loop
11: end while
12: iteration = 0, token = 1
13: while 1 = 1 do
14:    iteration + 1, count = 0
15:    for i = 1 to m do
16:        compute plbank,i
17:        if plbank,i < -vlbank then count + 1
18:    end for
19:    if token corresponds to 1 and count ≤ m · α then leave loop else token = 0
20:    if count < m · α then increase φ (algo. 5.4), vl = φ · vlinit
21:    else if count > m · α then decrease φ (algo. 5.4), vl = φ · vlinit
22:    else if count corresponds to m · α then leave loop
23: end while

Algorithm 5.3: Computation of a random starting solution inducing maximum use of the available investment capital cbank and economic capital ec

The key variable in context with Algorithm 5.3 is the factor φ, responsible for the scaling of vl, while φmin and φmax determine the factor's lower and upper bound. Depending on the interim results, the algorithm adjusts φ according to the binary search Algorithm 5.4 below.88 Thereby, after a certain number of iterations, both algorithms together establish a start solution featuring the maximum use of the available investment capital cbank and economic capital ec. The description of Algorithm 5.2 already introduces the fundamental structure of the determination of start solutions. Therefore, the following description of Algorithm 5.3 exclusively concentrates on crucial points which are less self-explanatory. An important point in line 2 of Algorithm 5.3 is the initialization of vlinit, the initial limit allocation stemming from formula (5.12). During the further computations vlinit constantly serves as a starting point in order to create a new version of vl according to

(5.13)  vl = φ · vlinit.

88 See e.g. Knuth (1997), pp. 409-426 for further information on binary search algorithms.

The first while-loop of Algorithm 5.3 addresses the maximum usage of the available investment capital according to max cbank,i = cbank, which at the same time represents the loop's exit in line 10. Before reaching this state, lines 8 and 9 adjust the scaling factor φ on the basis of the binary search Algorithm 5.4.

1: if increase φ then φmin = φ else φmax = φ
2: if iteration > 100 and increase φ then φmax + 1, iteration = 0
3: if iteration > 100 and decrease φ then φmin = 0, iteration = 0
4: φ = φmin + 0.5(φmax − φmin)

Algorithm 5.4: Binary search algorithm for adjusting the scaling factor φ

Depending on whether the decision variables vl require an increase or a decrease in order to further approach satisfying max cbank,i = cbank, the algorithm shifts the lower or upper bound of factor φ in the form of φmin = φ or φmax = φ according to line 1. The shift causes line 4 to determine a different factor φ for the next iteration of the first while-loop in Algorithm 5.3. More illustratively, the initial search interval for the right factor φ of [0, 2] shrinks to either [1, 2] or [0, 1], while factor φ itself becomes the middle of the respective new interval and therefore φ = 1.5 or φ = 0.5. If 100 iterations cannot establish max cbank,i corresponding to cbank, either the upper bound φmax is too low, in which case line 2 increases the current φmax by one, or the lower bound φmin is too high, in which case line 3 sets φmin back to zero. Such fundamental adjustments become necessary if, for example, φmax = 2 represents an insufficiently high upper bound from the start. Another example is the case of φmin taking too high values during the first while-loop, preventing sufficiently strong reductions in context with the VAR constraint during the second while-loop. If necessary, Algorithm 5.4 repeats such fundamental adjustments of the search interval until the respective desired state is reached. A very important variable during the second while-loop in Algorithm 5.3 is the variable token. As long as token corresponds to 1, the current VAR limit allocation vl features maximum use of the available investment capital. Further increases of vl would violate the budget constraint, while decreases would inevitably cause the invested capital to fall below its feasible maximum. Consequently, according to line 19, Algorithm 5.3 ends in this state, provided the VAR constraint is satisfied. If Algorithm 5.3 does not end during token = 1, the current vl obviously causes VAR constraint violations. In this case, the algorithm continues scaling vl according to lines 20 and 21 and finally ends via line 22. However, instead of newly initializing φ, φmin and φmax, the algorithm urgently needs to continue on the basis of the current values. Otherwise, the scaling in context with the VAR constraint might create a vl provoking an exceeding of the available amount of investment capital.
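A minimal sketch of the bisection idea behind Algorithms 5.3 and 5.4 is given below. The callable invested_capital stands in for the simulation-based computation of max cbank,i; the tolerance and the bracket-widening rule are simplifications of this sketch, not the book's exact procedure.

```python
def scale_to_budget(vl_init, invested_capital, c_bank, tol=1e-6, max_iter=200):
    """Find phi such that the maximum invested capital under phi * vl_init just reaches c_bank."""
    phi_min, phi_max = 0.0, 2.0
    phi = 1.0
    for _ in range(max_iter):
        phi = phi_min + 0.5 * (phi_max - phi_min)    # line 4 of Algorithm 5.4
        used = invested_capital([phi * vl for vl in vl_init])
        if abs(used - c_bank) <= tol * c_bank:
            break
        if used < c_bank:
            phi_min = phi                            # the allocation can still be scaled up
        else:
            phi_max = phi                            # the allocation overshoots the budget
        if phi_max - phi_min < 1e-12:                # widen the bracket if it collapsed too early
            phi_max += 1.0
    return [phi * vl for vl in vl_init]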

5.4 Neighborhood function

5.4.1 Basic design of the neighborhood function

The neighborhood function determines how the TA-algorithm "walks" through the solution space. A key factor in this context is the step size of the algorithm. Disadvantageous step sizes can heavily reduce the chance of finding very good approximations of the actually best solution from the start. Far too small as well as far too large step sizes require distinctly increased numbers of restarts in order to be able to hold a certain quality level of the generated solutions. In case of too small step sizes, the single restarts search very precisely. However, the probability of completely missing promising parts of the solution space rises significantly. In contrast, too large step sizes often make the algorithm just pass promising areas and miss investigating these areas in depth. The identification of the optimal step size can be challenging. Therefore, certain fundamental rules can help as guidelines for the configuration of the neighborhood function: The determination of a neighborhood function should basically follow the understanding of the term "neighborhood" in the sense that a neighbor solution should be closer to the current solution than randomly chosen solutions.89 Furthermore, particular technical computational requirements implicitly define the term "neighborhood". Implementations of local search methods like the TA-algorithm urgently demand a strictly local updating of the objective function for efficiency reasons. Therefore, only the parts of a new solution which exhibit modifications should cause updating computations instead of computing the objective function from scratch.90 However, the local updating of the objective function only causes efficiency gains for exclusively local modifications of the current solution. In the present case of VAR limit optimization there is only one natural way of modifying a set of limits as locally as possible:91 Draw one limit randomly and transfer a part of it to another one, also randomly drawn from the remaining limits. This includes only two partial adjustments of the objective function, saving much computational cost compared to the complete computation of the function. The demand for a strictly local modification restricts the possibilities of influencing the TA-algorithm's movement through the solution space to the size of the value transferred between the two randomly selected limits.

89 See Gilli and Winker (2009), p. 104 for a theoretical neighborhood definition.
90 See Gilli and Winker (2009), p. 103.
91 See Gilli et al. (2006), pp. 8-9 or Gilli and Schumann (2010), p. 21 for examples in context with portfolio optimization.

Figure 5.2 outlines the local modification of the current solution by the neighborhood function.

Figure 5.2: Outline of the neighborhood function N(vlc)

Each of the black bars represents one decision variable and VAR limit respectively, while the whitened parts illustrate the transfer value transferred between two randomly selected limits. Furthermore, the dotted bar stands for the infinite limit representing an infinite source and storage for the resource "limit". The infinite limit does not influence the objective function and therefore does not represent a decision variable. Nevertheless, whenever the modification of the current solution takes place in order to find a neighbor solution, the limits to choose from also include the infinite limit. The two arrows indicate the two possible kinds of modifications, with and without the infinite limit involved. As a consequence, the neighborhood function has the opportunity to increase or decrease the total amount of the used resource "limit". The used amount of the resource "limit" successively converges towards its feasible maximum across all optimization iterations. The infinite limit represents a specific feature of the case of optimal economic capital allocation. It corresponds to the cash position used in context with the case of portfolio optimization.92 However, the finite cash position represents a common budget constraint. In contrast, with economic capital allocation, the amount of the resource "limit" allowed to be allocated is unknown until the allocation ends. This is due to diversification effects among the business units allowing the total sum of the limits (excluding the infinite limit) to exceed the actually available amount of economic capital provided by the bank. The exact extent of the exceeding in turn depends on the finally applied limit allocation.

92 See e.g. Gilli and Schumann (2010), p. 21 mentioning this cash position in context with TA for portfolio optimization.
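A minimal sketch of one such local move, with the infinite limit modelled as an extra index, might look as follows; the representation of vlinf as index len(vl) and the simple clipping rule are assumptions of this sketch.

```python
import numpy as np

def neighbour_move(vl, tr, rng):
    """Transfer the value tr between two randomly chosen limits; index len(vl)
    plays the role of the infinite limit vl_inf (an unbounded source and sink)."""
    n = len(vl)
    dec, inc = rng.choice(n + 1, size=2, replace=False)  # n + 1 choices include the infinite limit
    vl_new = vl.copy()
    if dec < n:
        tr = min(tr, vl_new[dec])   # cannot take more than the decreasing limit holds
        vl_new[dec] -= tr
    if inc < n:
        vl_new[inc] += tr           # moves involving vl_inf change the total sum of limits
    return vl_new
```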

Before a closer examination of the exact determination of the transfer value and the constraints' satisfaction, the following gives an overview concerning all parts and processes of the neighborhood function.

1: randomly choose vldec and vlinc where vldec ≠ vlinc
2: generate tri where i = 1, …, niters (algorithm 5.6)
3: execute transfer
4: monitor constraints' satisfaction (algorithm 5.7)
5: if constraint violation then µbank(vl) − cbank

Algorithm 5.5: Implementation of the neighborhood function N(vlc)

Within Algorithm 5.5, vldec denotes the decreasing limit and vlinc the increasing limit. After drawing the limits vldec and vlinc the algorithm transfers the value tri between them. The resulting limit allocation undergoes a monitoring concerning the constraints' satisfaction. If the monitoring reveals any constraint violations, the algorithm prevents the respective solution's acceptance in line 5 of Algorithm 5.5 through an extreme reduction of the solution's objective value µbank(vl) by the bank's available investment capital cbank. This intentionally overcautious reduction disqualifies the respective solution in any case for further use. As a consequence, the selected implementation of the neighborhood function exclusively accepts feasible solutions. Alternatively, neighborhood functions can also to a certain extent accept constraint violating solutions by the use of penalty terms. These penalty terms reduce the objective function only to a certain extent in order to penalize constraint violations.93 The penalties increase towards the end of the search process, establishing feasible final solutions. The use of penalty terms in particular makes sense in cases of discontinuous solution spaces where feasible solutions are much harder to find, causing extensive computational costs. The use of penalty terms can help anticipating such difficulties, leading to much more efficient implementations. The present model, however, faces continuous solution spaces. Therefore, potential penalty strategies are irrelevant. Furthermore, using penalty terms involves further parameterization efforts, wherefore using penalty terms could even be disadvantageous.

5.4.2 Generation of the transfer value

The transfer value represents a key adjusting screw in order to influence the movement of the TA-algorithm through the solution space. The literature on heuristic portfolio optimization provides no particular method concerning the determination of the transfer value. A recent investigation on "Heuristic optimization in financial modeling" by Gilli and Schumann (2010) confirms this.

93 See e.g. Gilli and Schumann (2010), p. 22, Gilli and Winker (2009), pp. 103-104 and Gilli et al. (2006), p. 8 for remarks on the use of penalty terms.

They describe the transfer value as a small quantity, e.g. 0.2 %, of a portfolio asset to be sold and reinvested into another portfolio asset. Furthermore, according to their experience, at least for practical purposes, this quantity can either be fixed or random. However, they do not provide any statistical proof for their findings. Surely, the efficient transfer value depends on the underlying model. Gilli and Schumann (2010) address portfolio optimization specifically focusing on the use of realistic integer constraints and therefore e.g. abolish the simplifying assumption of infinitely divisible stock portions. This feature and the model's general focus on realistic integer constraints are likely to cause their model's robustness concerning the finally chosen implementation of the transfer value. In contrast, the present model of optimal economic capital allocation assumes an infinite divisibility of VAR limits and stock portions as well. Tests on the basis of the present model clearly suggest the relevance of the applied transfer value. Basically, the design of the model concerning the infinite limit vlinf, serving as a constant storage and source for the resource "limit", rather suggests an absolute instead of a percentage transfer value. For the specific cases where the decreasing limit vldec represents at the same time the infinite limit vlinf, the transfer value cannot be properly determined by a percentage value because of the infinity of vldec. Therefore, the model determines an absolute transfer value from the start. During the optimization process, the current TA implementation computes a sequence of transfer values tr, preparing one transfer value tri for each optimization iteration i = 1, …, niters by the formula

(5.14)  tr_i = 0.01 + u\,(tr_{init} - 0.01)\left(1 - \frac{i}{n_{iters}}\right)^{exp}  where u ~ U[0, 1].

The following figure illustrates a corresponding sequence of transfer values tr = (tr_1, …, tr_{n_{iters}}).

Figure 5.3: Exemplary sequence of transfer values tr for exp = 2
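A direct transliteration of formula (5.14) might look as follows; the concrete values of tr_init, n_iters and the seed are arbitrary choices for illustration.

```python
import numpy as np

def transfer_sequence(tr_init, n_iters, exp, rng):
    """Sequence of transfer values according to formula (5.14)."""
    i = np.arange(1, n_iters + 1)
    u = rng.random(n_iters)                                    # u ~ U[0, 1] per iteration
    return 0.01 + u * (tr_init - 0.01) * (1.0 - i / n_iters) ** exp

rng = np.random.default_rng(7)
tr = transfer_sequence(tr_init=1_000.0, n_iters=50_000, exp=2, rng=rng)
```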

The sequence decreases exponentially while additionally featuring many smaller transfer values deviating from the basic exponential decrease. The formula's term trinit initializes the sequence, the exponential term (1 − i/niters)^exp induces the sequence's convex shape and the uniform random number u causes the visible deviations across the sequence. The value 0.01 represents the minimum transfer value of one cent, keeping the formula from producing inappropriately infinitesimal transfer values during the final iterations. Infinitesimal transfer values cause only infinitesimal and irrelevant modifications of the solution. The current model constantly uses exp = 2, arising from trial and error. Decreasing the transfer values across the optimization iterations appears obvious: At the beginning of the optimization process, the algorithm benefits from larger step sizes supporting the algorithm's quick arrival at the surface of the solution space. During the first search period, larger step sizes also support an easy overcoming of misleading local extreme points. Under use of larger step sizes, the algorithm opts for a particular area of the solution space likely to contain the global optimum. With increasing closeness to the solution space surface, however, smaller step sizes improve the algorithm's search behavior with respect to the finely branched structure of the surface. The decrease of the step sizes enables a more and more precise search, properly investigating the respective area and "crawling" into its peaks. The present model uses the value of the available amount of economic capital ec in order to determine the maximum interval for potential values for trinit in the form of

(5.15)  0 ≤ trinit ≤ ec.

Obviously, formula (5.14) produces huge transfer values during the first optimization iterations for initialization values trinit close to ec and uniform random numbers u close to one. In such cases, the transfer value more or less corresponds to the complete ec of the respective bank. Nevertheless, diversification effects might enable the feasibility of such huge transfers. The consideration of even larger transfer values and therefore trinit > ec, however, appears exaggerated. As a consequence, ec represents an appropriate upper bound of the interval for potential values for trinit. The feasibility of the limit transfers also depends on the sizes of the involved limits. Even huge transfer values of the size tri = ec can in principle be transferred without problems in the case where vldec at the same time represents vlinf. In all other cases vldec is often too small to allow for huge transfer values. Furthermore, potential constraints on the limit sizes according to formula (4.10) hinder particular transfers. In order to prevent frequent interruptions from limits not fitting the current transfer value, Algorithm 5.6 adjusts the respective transfer values with regard to the actual sizes of the involved limits vldec and vlinc.

1: compute tri (formula 5.14)
2: if vlinf represents vldec then
3:     if vlinc + tri > vlmax then re-compute tri = vlmax − vlinc
4: else if vlinf represents vlinc then
5:     if vldec − vlmin < tri then re-compute tri = vldec − vlmin
6: else
7:     if vldec − vlmin < tri then re-compute tri = vldec − vlmin
8:     if vlinc + tri > vlmax then re-compute tri = vlmax − vlinc

Algorithm 5.6: Final determination of the transfer value tri

Lines 2, 4 and 6 check whether the infinite limit vlinf represents vldec, vlinc or neither of both. Depending on the respective case, the algorithm monitors tri for violations of the limit sizes and re-computes tri if necessary. The ability of Algorithm 5.6 to adjust the transfer values enables a convenient identification of the most appropriate initialization value trinit without having to fear the respective computations' interruption in case of an inappropriate trinit value. Chapter 6.2 on parameterization issues addresses the exact determination of trinit, which also depends on the used threshold sequence. Furthermore, chapter 6.2.3 compares the current generation of the transfer value with alternative methods: One alternative method dispenses with the exponential decrease, while the other even uses just fixed transfer values, additionally dispensing with the randomization of the transfer value.

5.4.3 Monitoring of the constraints' satisfaction

The current chapter addresses the neighborhood function's monitoring mechanisms concerning the satisfaction of the constraints. The constraints to be monitored are the VAR limit of the model bank vlbank, the VAR limits of the business units vlj, the model bank's total investment budget cbank and the restrictions vlmin, vlmax on the business units' VAR limit sizes. The determination of the stochastic program in chapter 4.2 gives a detailed description of the constraints. There are constraints which the neighborhood function monitors in an implicit manner and there are also constraints of which the function takes care explicitly. The implicit monitoring concerns the VAR limits of the business units vlj and also the restrictions on the sizes of these limits by vlmin and vlmax. In case of the business units' VAR limits vlj the use of the conversion factor rα from the formulas (4.31) and (4.32) converts the limits into feasible investment positions, guaranteeing their satisfaction of the limits.

Whenever the neighborhood function identifies a neighborhood solution, it uses these formulas in order to determine the resulting investment positions of the modified limits vldec and vlinc. For further information on the conversion factor rα see chapter 4.3.2. The implicit monitoring of the constraints on the VAR limit sizes is performed during the determination of the transfer values tr according to Algorithm 5.6. At first, the algorithm prepares a transfer value tri which then runs through particular adjustments. The adjustments take care of the transfer value tri fitting the sizes of the two randomly selected VAR limits vldec and vlinc as well as the restrictions by vlmin and vlmax. In contrast to the implicit monitoring, the neighborhood function addresses the constraints concerning the bank's total VAR limit and its investment budget explicitly by a separate algorithm described below.

1: for i = 1 to m do
2:     compute cbank,i by local updating
3:     if cbank,i > cbank then leave algorithm
4: end for
5: count = 0
6: for i = 1 to m do
7:     compute plbank,i by local updating
8:     if plbank,i < -vlbank then count + 1
9:     if count > m · α then leave algorithm
10: end for

Algorithm 5.7: Monitoring of the budget constraint and the VAR limit of the bank

The first for-loop checks, for each of the m simulated compositions of the bank-wide investment positions, whether the respective actually invested capital cbank,i exceeds the bank's investment budget cbank. Note that the computation of cbank,i also uses local updating, similar to the local updating of the objective function, in order to save runtime. In case of any exceeding, the current for-loop and the whole algorithm break off and Algorithm 5.5 takes over, rejecting the current VAR limit allocation. In contrast, if the neighborhood solution satisfies the budget constraint, the second for-loop starts in order to additionally monitor the satisfaction of the bank's total VAR limit. Thereto, the algorithm computes the profit and loss of the bank plbank,i for each of the m simulation iterations by local updating again. The applied confidence level α allows for m · α losses plbank,i falling below the bank's total VAR limit -vlbank. Note that the algorithm's variable count here represents the pseudocode counterpart to the binary variable bbank,i from formula (4.5). As soon as the number of violations of the bank's VAR limit exceeds the feasible amount, the current for-loop and the algorithm break off.

Then Algorithm 5.5 takes over again, identically to the case of the violation of the budget constraint from above. If the number of violations of the bank's VAR limit does not exceed the feasible amount, the algorithm ends regularly and the subsequent Algorithm 5.5 accepts the current VAR limit allocation as a starting point for the next optimization iteration. One key aspect concerning an effective implementation of the current constraint monitoring, and finally concerning a successful implementation of the TA-algorithm at all, consists in the comprehensive use of local updating. Otherwise, the algorithm will not be capable of executing sufficiently high numbers of optimization iterations while still causing moderate runtimes.
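To make the two checks concrete, the following is a minimal C++ sketch of the monitoring from Algorithm 5.7, assuming the locally updated per-simulation values cbank,i and plbank,i are already available in simple vectors; all names are illustrative and the early exits mirror the pseudocode's "leave algorithm" lines.

#include <cstddef>
#include <vector>

// Illustrative container for the locally updated per-simulation values of a
// candidate limit allocation.
struct SimulatedBank {
    std::vector<double> c_bank;   // c_bank,i: invested capital per simulation i
    std::vector<double> pl_bank;  // pl_bank,i: profit and loss per simulation i
};

// Returns true if the neighborhood solution satisfies both the investment
// budget and the bank-wide VAR limit; returns false as soon as a violation
// is certain, mirroring the early exits of Algorithm 5.7. alpha denotes the
// tail probability, so that at most m * alpha losses may fall below -vl_bank.
bool satisfiesBankConstraints(const SimulatedBank& sim,
                              double cBankBudget,   // c_bank
                              double vlBank,        // vl_bank
                              double alpha)
{
    const std::size_t m = sim.c_bank.size();

    // First for-loop: budget constraint, leave immediately on any violation.
    for (std::size_t i = 0; i < m; ++i)
        if (sim.c_bank[i] > cBankBudget) return false;

    // Second for-loop: count violations of the bank's total VAR limit.
    const double allowed = static_cast<double>(m) * alpha;
    std::size_t count = 0;
    for (std::size_t i = 0; i < m; ++i) {
        if (sim.pl_bank[i] < -vlBank && static_cast<double>(++count) > allowed)
            return false;
    }
    return true;
}

Whether the checks pass or fail then determines, exactly as described above, whether Algorithm 5.5 accepts or rejects the neighborhood solution.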

5.5 Generation of the threshold sequence

The threshold values represent, besides the transfer values, another key element of the TA-algorithm. Their exact configuration is highly responsible for the convergence of TA towards the actual global optimum of a solution space. Recent relevant literature on TA provides a safe method for creating a threshold sequence featuring good convergence properties.94 To this end, the determination of the threshold sequence uses the data underlying the respective optimization situation in order to create threshold values which exactly fit the situation's individual needs. The data driven determination of the threshold sequence starts with a random walk through the solution space on the basis of the neighborhood function N(vlc) as described below.95

1: randomly choose vlc (algorithm 5.2)
2: for i = 1 to ndeltas do
3: compute vln ∈ N(vlc) (algorithm 5.5)
4: compute ∆i = | f (vlc) - f (vln)|
5: vlc = vln
6: end for
7: compute F of (∆1, …, ∆ndeltas)
8: compute τ (formula 5.17)

Algorithm 5.8: Generation of the threshold sequence τ

94 See Gilli and Schumann (2010), pp. 21-22, Gilli and Winker (2009), pp. 104-105 and Gilli et al. (2006), p. 9 for a description of the data driven determination of the threshold sequence used by the current TA approach.
95 See Gilli and Winker (2009), p. 105.

For the random walk through the solution space, the algorithm randomly chooses a limit allocation vlc according to the quick and simple Algorithm 5.2, where the subscript c stands for "current". The subsequent for-loop randomly identifies neighborhood solutions vln ∈ N(vlc). Here the subscript n indicates the "new" solution. During this random walk, Algorithm 5.8 computes the absolute deltas ∆i between the sequentially occurring values of the objective function according to

(5.16)

∆i = | f (vlc) - f (vln)|.

After computing and collecting the required number of absolute deltas ndeltas, line 7 of the algorithm generates these deltas' empirical distribution F. The final threshold sequence τ represents equidistant draws from that empirical distribution F according to formula

(5.17)

\tau = (\tau_1, \ldots, \tau_r, \ldots, \tau_{n_{rounds}}) \quad \text{where} \quad \tau_r = F^{-1}\!\left( p_\tau \, \frac{n_{rounds} - r}{n_{rounds}} \right).

The term pτ represents the initialization value of the threshold sequence in the form of a quantile value. Furthermore, the number of rounds nrounds finally determines the number of thresholds used by the respective TA implementation. Formula (5.17) successively adjusts the thresholds across the rounds of optimization while the threshold during the last round is zero. The following diagram displays an exemplary empirical distribution including the resulting threshold sequence.

Figure 5.4: Exemplary empirical distribution of deltas incl. threshold sequence τ (thresholds: 2.36, 1.19, 0.69, 0.43, 0.26, 0.15, 0.08, 0.03, 0.01, 0.00)

The threshold sequence of Figure 5.4 uses nrounds = 10, where each dot represents one threshold. Since the present TA addresses the case of maximization, the thresholds are finally used as negative values, each of them representing the maximum worsening accepted during the respective round of optimization. Furthermore, the exemplary sequence uses an initialization value of pτ = 1, wherefore formula (5.17) draws the thresholds from the complete empirical distribution. The relevant literature, however, recommends the use of lower quantile values pτ for the actual optimization. Lower quantile values result in tightened thresholds enforcing a more directed and effective search process, particularly during the first rounds of the optimization. Which quantile value is appropriate in detail also depends on trinit, the initialization value used in context with the transfer values tr. Consequently, pτ and trinit require a joint parameterization, described in detail in chapter 6.1. Finally, the determination of the threshold sequence also depends on the number of absolute deltas ndeltas. The current implementation of TA uses

(5.18)

ndeltas = niters = nrounds · nsteps.

If the generation of the threshold sequence uses particularly low values for ndeltas, distinctly below niters, in order to speed up the procedure, slightly different approaches to Algorithm 5.8 might be advantageous.96 For example, instead of a random walk, every used solution could be chosen randomly according to the generation of the initial solution from line 1 of Algorithm 5.8. Another possibility is to use the results of many shorter walks through the solution space. The relevant literature does not yet provide clear recommendations for the case of low values for ndeltas. However, for large values of ndeltas, or at least for ndeltas → ∞, the different approaches' resulting empirical distributions all converge.
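As an illustration of formula (5.17), the following hedged C++ sketch turns a vector of collected absolute deltas into a threshold sequence by sorting them (the empirical distribution F) and drawing equidistant quantiles; the simple index-based quantile and all names are assumptions for illustration, not the model's actual code.

#include <algorithm>
#include <cstddef>
#include <vector>

// Computes tau_1, ..., tau_nrounds as equidistant quantiles of the empirical
// distribution of the absolute deltas from the random walk of Algorithm 5.8.
std::vector<double> thresholdSequence(std::vector<double> deltas,
                                      int nRounds, double pTau)
{
    std::sort(deltas.begin(), deltas.end());   // empirical distribution F
    const std::size_t n = deltas.size();

    std::vector<double> tau(static_cast<std::size_t>(nRounds));
    for (int r = 1; r <= nRounds; ++r) {
        // quantile level p_tau * (nrounds - r) / nrounds, zero in the last round
        const double q = pTau * static_cast<double>(nRounds - r) / nRounds;
        // simple empirical quantile via an index into the sorted deltas;
        // for r = nrounds this picks the smallest delta, i.e. (close to) zero
        const std::size_t idx =
            static_cast<std::size_t>(q * static_cast<double>(n - 1));
        tau[static_cast<std::size_t>(r - 1)] = deltas[idx];
    }
    return tau;
}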

96 See Gilli and Winker (2009), p. 105 for a brief discussion concerning appropriate numbers of absolute deltas ndeltas and different methods for the generation of τ for the case of low values for ndeltas.


5.6 Parallelization of threshold accepting

The structure of the TA-algorithm is highly appropriate for the use of parallel computing. The TA-algorithm naturally decomposes the required computation into many smaller single computations, the restarts. The computations of the restarts do not depend on each other or interact and are furthermore relatively equal in size. This results in a high level of granularity of the complete computational process, enabling its proper allocation across several software servants running on a parallel computing cluster of many cpus. The following outlines the parallel computing structure used by the model.

Figure 5.5: Applied structure of parallel computing (one master distributing jobs to servants 1 to 4)

The applied structure of parallel computing represents a classic master-servant structure. One job corresponds to one restart. The master can handle different optimizations at once. Therefore, it has to assign an ID to each single restart providing information about the original optimization the restart belongs to. The ID enables a completely unrestricted allocation of all restarts originating from any optimization across the servants. After a servant has finished a job, it sends the result back to the master. The master stores the received data according to the IDs and starts with the data's selection, rearrangement and analysis as soon as the last data has arrived. For the communication of the master with the servants the model uses an implementation on the basis of the Message Passing Interface (MPI). The master communicates with the servants as follows.97

97 See e.g. a LAM/MPI-tutorial (2012) for an exemplary master and servant communication. The current model's implementation of the master and servant communication follows this example.

1: prepare njobs
2: initialize nservants where nservants ≤ njobs
3: for j = 1 to nservants do
4: send next job in row to servant j
5: end for
6: count = njobs - nservants
7: while count > 0 do
8: receive results from any servant j
9: send next job in row to idle servant j
10: count - 1
11: end while
12: while there are active servants do
13: receive results from any servant j
14: end while

Algorithm 5.9: Communication between the master and the servants

First of all, the master prepares the jobs to be allocated among the servants, indicated at the beginning of Algorithm 5.9. Furthermore, the algorithm determines the number of servants nservants, where the maximum number of servants corresponds to the number of jobs according to njobs ≥ nservants. The first for-loop then provides each servant with a job. Subsequently, in cases where njobs > nservants, the algorithm allocates the remaining jobs by a while-loop among the finished servants until there are no jobs left and therefore count = 0. A final while-loop takes care of the remaining results and delivers them to the master. Therefore, Algorithm 5.9 features a certain form of dynamic load balancing since the master supplies the servants with further jobs from the remaining work as soon as they fall idle. The parallelization of TA does not require any inter-servant communication since the single jobs can be executed completely independently from each other. As a parallel computing platform the present model uses the Nehalem cluster of the High Performance Computing Center Stuttgart (HLRS). For implementation, the parallel computing parts of the model use, as already mentioned, MPI while the remaining parts use C++.
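The following is a minimal, self-contained C++/MPI sketch of such a master-servant loop, assuming nservants ≤ njobs as in line 2 of Algorithm 5.9; the message tags, the reduction of a job to a single integer ID, and the runRestart stub are illustrative assumptions rather than the model's actual implementation.

#include <mpi.h>

// Illustrative message tags distinguishing work from the shutdown signal.
const int TAG_WORK = 1;
const int TAG_STOP = 2;

// Placeholder for one TA restart; in the model this would run the full
// optimization and return the restart's best objective value.
double runRestart(int jobId) { return static_cast<double>(jobId); }

void master(int nJobs, int nServants) {
    int nextJob = 0;
    // First loop of Algorithm 5.9: provide each servant with one job.
    for (int s = 1; s <= nServants; ++s) {
        MPI_Send(&nextJob, 1, MPI_INT, s, TAG_WORK, MPI_COMM_WORLD);
        ++nextJob;
    }
    // Dynamic load balancing: hand out remaining jobs to whoever falls idle,
    // then collect the last results and shut the servants down.
    for (int received = 0; received < nJobs; ++received) {
        double result;
        MPI_Status status;
        MPI_Recv(&result, 1, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
                 MPI_COMM_WORLD, &status);
        // ... store result according to its job ID / optimization ID here ...
        if (nextJob < nJobs) {
            MPI_Send(&nextJob, 1, MPI_INT, status.MPI_SOURCE, TAG_WORK,
                     MPI_COMM_WORLD);
            ++nextJob;
        } else {
            int stop = 0;
            MPI_Send(&stop, 1, MPI_INT, status.MPI_SOURCE, TAG_STOP,
                     MPI_COMM_WORLD);
        }
    }
}

void servant() {
    for (;;) {
        int jobId;
        MPI_Status status;
        MPI_Recv(&jobId, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
        if (status.MPI_TAG == TAG_STOP) break;
        double result = runRestart(jobId);
        MPI_Send(&result, 1, MPI_DOUBLE, 0, TAG_WORK, MPI_COMM_WORLD);
    }
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (rank == 0) master(/*nJobs=*/60, /*nServants=*/size - 1);
    else           servant();
    MPI_Finalize();
    return 0;
}

In the model, the result message would additionally carry the restart's ID so that the master can assign it to the correct optimization, as described in the text.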

6 PARAMETERIZATION OF THRESHOLD ACCEPTING

6.1 Concept of successive parameterization in context with the present model

Finally, the parameterization of TA in context with the present model involves 6 important parameters. Each parameter influences the quality level of the solutions generated by TA. The quality level depends on precision and efficiency issues. The 6 parameters are: the number of restarts nrestarts, rounds nrounds and steps nsteps, the two initialization values trinit and pτ of the sequences of the transfer values tr and the thresholds τ, and finally the exponent exp determining the rate of decrease of the transfer value across the optimization iterations. Aiming at the highest possible quality level actually demands determining the 6 parameters at once since they all depend on each other. Therefore, the parameterization finally represents an optimization problem by itself. The TA-algorithm, however, often just has to yield very good solutions but does not need to be able to approach the actual global solution with maximum precision up to the very last decimal place. This also applies for the present application of the TA-algorithm. As a consequence, the present model uses a simplified successive parameterization approach. Finally, most of the applications of TA from research and practice use similar parameterization procedures. However, even minor differences in the use and implementation of TA can already have tremendous effects on the required parameterization procedure. Therefore, similar to the description of the use and the implementation of TA, the detailed description of the parameterization procedure is also important for reasons of reconstruction and verification, not least because the relevant literature does not provide one particular clear-cut and widely established parameterization procedure which can be generally used by all kinds of TA applications. A first step of simplifying the parameterization process in context with the current use of TA represents the parameterization's subdivision into two parts: One part addresses the parameterization of exp, nrounds, pτ and trinit, particularly determining the search behavior of a single restart and its movement through the solution space. The other part considers the parameterization of nrestarts and nsteps, particularly responsible for an appropriate coverage of the solution space. However, the present model reduces the parameterization efforts concerning exp and nrounds to a minimum. As already mentioned, the present model constantly uses exp = 2 resulting from trial and error.

The number of rounds nrounds finally determines the number of used thresholds. Similarly comprehensive TA approaches from the literature addressing portfolio optimization, for example, use nrounds = 10.98 Furthermore, the relevant literature identifies nrounds = 10 as an appropriate minimum value.99 Only for large numbers of niters, approximately from > 50k or even > 100k on, does the adjustment of the number of rounds become increasingly necessary.100 The current approach follows the cited literature and also uses nrounds = 10. The use of the default number of rounds enables focusing on the more relevant parameters like pτ, trinit, nrestarts and nsteps.

6.2 Effective combinations of thresholds and transfer values

6.2.1 Simple parameterization by visual analysis

At first, the parameterization addresses trinit and pτ determining the search behavior of a single restart.101 A basic parameterization of trinit and pτ can already be performed by a minimum of four restarts. The parameterization uses each restart in order to test one of four distinctly different trinit-pτ combinations. Each combination induces a certain search behavior of the TA-algorithm. The subsequent visualization of the resulting search behaviors allows for the rating of the test combinations. In this phase of parameterization, the rating exclusively refers to the course of the TA search process while the restarts' final results are irrelevant for the moment. The following outlines the identification of potential test combinations on the basis of a simple 2x2-grid.

6.2 Effective combinations of thresholds and transfer values 6.2.1 Simple parameterization by visual analysis At first, the parameterization addresses trinit and pτ determining the search behavior of a single restart.101 A basic parameterization of trinit and pτ can already be performed by a minimum of four restarts. The parameterization uses each restart in order to test one of four distinctly different trinit-pτ combinations. Each combination induces a certain search behavior of the TA-algorithm. The subsequent visualization of the resulting search behaviors allows for the rating of the test combinations. In this phase of parameterization, the rating exclusively refers to the course of the TA search process while the restarts’ final results are irrelevant for the moment. The following outlines the identification of potential test combinations on the basis of a simple 2x2-grid.

98 See Gilli et al. (2006) describing different applications of TA using 10 rounds.
99 See Gilli and Winker (2009), p. 106.
100 See e.g. Gilli et al. (2006), pp. 10 and 14 using nrounds = 10 in combination with niters = 50k and = 20k.
101 See Burghof and Müller (2013a) for a preliminary draft concerning the parameterization of the TA-algorithm in context with the present model also using a decreasing and randomized transfer value.

Figure 6.1: Choice of potential trinit-pτ combinations by a 2x2-grid

In a first step, the 2x2-grid divides the potential trinit-pτ combinations up into four fields. The intersections of the broken lines mark the fields' midpoints, denoting fundamentally different value combinations. The search can be refined by any other more detailed subdivision of the potential trinit-pτ combinations involving more restarts. The following describes a parameterization of trinit and pτ on the basis of a similar optimization problem as used for the illustration of the solution space surface in chapter 5.1. For the current parameterization, only the number of decision variables increases from 3 to 50, providing a more realistically comprehensive problem. The exact GBM stock returns and probabilities of success pj of each business unit j = 1, …, 50 are again irrelevant. Furthermore, the parameterization chapters constantly use the simpler Algorithm 5.2 for the determination of the start solutions. This procedure ensures a robust parameterization, also inducing the TA-algorithm to achieve high quality results for less intelligent start solutions. Besides the preliminary parameterization of nrestarts = 4, the parameterization of trinit and pτ requires a preliminary parameterization concerning nsteps. In order to exclude the use of completely inappropriate values for nsteps, the preliminary value should follow the recommendations of the relevant literature. The spectrum of serious values for nsteps ranges from 100 to 10k and above, depending on the comprehensiveness of the respective problem and the demanded quality level of the solutions.102 The current model uses nsteps = 2k.103

102 See e.g. Gilli and Schumann (2011) investigating the general performance of TA.
103 See e.g. Gilli et al. (2006), p. 14 using nsteps = 2k for a similarly comprehensive optimization problem from the field of portfolio management.

Figure 6.2 visualizes the search behavior of the four marked trinit-pτ combinations from Figure 6.1.

Figure 6.2: Visualization of the search behavior of trinit-pτ combinations on the basis of single restarts (panels: 1: trinit = 250 / pτ = 0.75, 2: trinit = 750 / pτ = 0.75, 3: trinit = 250 / pτ = 0.25, 4: trinit = 750 / pτ = 0.25)

The black lines denote the accepted intermediate solutions during niters = 20k optimization iterations whereas the x-axes display the total number of these solutions. Furthermore, the broken lines represent the respective threshold sequences referring to the secondary y-axes on the right of the diagrams. Remember that pτ represents a quantile value and trinit a currency value. The latter stems from the interval [0, ec] where in the current case ec = 1k. The trinit-pτ combination 1 establishes an appropriate search behavior, at least relative to the combinations 3 and 4. In case of combination 1 the algorithm improves the intermediate solutions across the different rounds of the optimization and thresholds respectively. Furthermore, the thresholds induce several downturns of the curve signaling a certain ability of the algorithm to avoid getting stuck in local extreme points. During the last iterations, however, the downturns increasingly vanish since the thresholds converge to zero, more and more preventing the acceptance of any worsening. The last two or three thresholds commonly lie too close to each other for a proper illustration, wherefore the diagrams give the impression of only using eight or nine thresholds instead of ten.

Parameterization 2 represents another appropriate trinit-pτ combination. The algorithm moves through the solution space exhibiting stronger downturns compared to the previous trinit-pτ combination 1. Despite this fact, the thresholds still enforce a directed search process preventing the algorithm from walking randomly through the solution space. In such a case the algorithm would waste too much of the computational resources, leading to unsatisfying approximations of the global optimum. The remaining parameter combinations 3 and 4 mainly improve the interim solutions during the first two thresholds. Therefore, the algorithm uses many optimization iterations quite ineffectively. Another weakness of these trinit-pτ combinations is their lack of downturns, indicating their low capability of escaping from local extreme points. The simple visual analysis cannot finally identify whether the parameter combination 1 outperforms 2 or vice versa. Nevertheless, the analysis clearly suggests preferring any of them to the parameter combinations 3 and 4. Therefore, the simple analysis successfully prevents using completely inappropriate parameters. The search could now continue and investigate the promising parameter combinations more closely by, for example, another eight restarts according to the outline from Figure 6.3.

Figure 6.3: Potential refinement of the search for appropriate trinit-pτ combinations

In the present case, however, the benefits from refining the search can be expected to be low. The refinement is likely to provide again more than one appropriate parameterization while the subsequent visual analysis will again be unsuitable for finally identifying the very best parameterization among them.

6.2.2 Comprehensive analysis on the basis of detailed grid structures

Another possibility of identifying appropriate trinit-pτ combinations consists in spanning up a much finer grid, for example a 20x20-grid, determining 400 potential trinit-pτ combinations by its knots. In order to evaluate the combinations, each of them undergoes 60 restarts, as sketched below.104 The diagram from Figure 6.4 displays the best solution of each of the 400 restart groups, giving an overview concerning the most promising trinit-pτ combinations.105
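As a rough sketch of how such a grid of test combinations might be enumerated before being dispatched as restart jobs, the following C++ fragment builds the knots of an nGrid x nGrid grid over trinit ∈ (0, ec] and pτ ∈ (0, 1]; the exact knot spacing is an assumption for illustration, not necessarily the spacing used by the model.

#include <cstddef>
#include <vector>

// One test combination of the two initialization values.
struct TestCombo {
    double trInit;
    double pTau;
};

// Enumerates the knots of an nGrid x nGrid grid; with nGrid = 20 and ec = 1k
// this yields 400 combinations such as trinit = 300 and p_tau = 0.75.
std::vector<TestCombo> gridCombinations(int nGrid, double ec) {
    std::vector<TestCombo> combos;
    combos.reserve(static_cast<std::size_t>(nGrid) * nGrid);
    for (int i = 1; i <= nGrid; ++i) {
        for (int j = 1; j <= nGrid; ++j) {
            TestCombo c;
            c.trInit = ec * i / nGrid;                  // e.g. 50, 100, ..., 1000
            c.pTau   = static_cast<double>(j) / nGrid;  // e.g. 0.05, 0.10, ..., 1.00
            combos.push_back(c);
        }
    }
    return combos;
}

Each resulting combination then undergoes its 60 restarts, for example distributed via the master-servant scheme of chapter 5.6.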

Figure 6.4: Investigation of 400 different trinit-pτ combinations each undergoing 60 restarts (colour scale: max µbank from 149 to 158)

The investigation reveals a considerable amount of different parameter combinations in the upper half of the grid inducing solutions of similarly high quality levels according to the orange and turquoise area. The white marking indicates the parameter combination trinit = 300 and pτ = 0.75 performing best under the current random number sequence. However, the best performing combination changes slightly for different random number sequences and/or grid structures. Figure 6.5 illustrates the search behavior of the currently most promising parameter combination.

104 Gilli et al. (2006) also uses 60 and 64 restarts in context with an optimization problem of similar comprehensiveness.
105 Without the parallel computing facilities of the High Performance Computing Center Stuttgart (HLRS) in the form of a Nehalem cluster, such parameterization approaches in the sense of a brute force approach would be impossible to conduct. These kinds of parameterizations take around 5 h in parallel computing including around 1k processors.

Figure 6.5: The best out of 60 restarts under trinit = 300 and pτ = 0.75 representing the current example's most promising parameterization (achieving µbank = 157.67)

The search behavior of the restart is very similar to that of the appropriate parameterizations 1 and 2. The location of the most promising parameterization as well as that of the orange area in Figure 6.4 clearly suggest that the upper left field of the different trinit-pτ combinations contains the best performing combinations. Reruns of the investigation using different random numbers confirm this presumption. However, this parameterization procedure causes considerable computational costs. For lower available computational resources, the simple procedure of visually analyzing the search behavior of several single restarts already performs well. The recent literature proclaims a rule of thumb for identifying appropriate search behaviors of the TA-algorithm.106 This rule demands that the deltas according to line 4 of Algorithm 5.8, in context with the generation of the threshold sequence, feature a standard deviation of the same order of magnitude as the standard deviation of the accepted intermediate solutions of one restart. The rule stems from an approach using TA for portfolio optimization under a downside risk constraint applying further realistic integer constraints. There, appropriate values for the respective standard deviations, for example, are 2.3 and 1.5. Applying the rule to the example case from Figure 6.5 reveals the standard deviations 1.4 and 17.5. Obviously, these deviations differ in their orders of magnitude.
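A hedged sketch of how this rule of thumb could be checked mechanically is given below; reading "same order of magnitude" as a ratio below ten is an interpretation for illustration, not a prescription from the cited literature.

#include <cmath>
#include <cstddef>
#include <vector>

// Sample standard deviation of a series of values.
double stdDev(const std::vector<double>& x) {
    double mean = 0.0;
    for (double v : x) mean += v;
    mean /= static_cast<double>(x.size());
    double ss = 0.0;
    for (double v : x) ss += (v - mean) * (v - mean);
    return std::sqrt(ss / static_cast<double>(x.size() - 1));
}

// Rule of thumb: the deltas from the threshold-sequence walk and the accepted
// intermediate solutions of one restart should exhibit standard deviations of
// the same order of magnitude.
bool sameOrderOfMagnitude(const std::vector<double>& deltas,
                          const std::vector<double>& acceptedSolutions) {
    const double sdDeltas   = stdDev(deltas);
    const double sdAccepted = stdDev(acceptedSolutions);
    const double ratio = sdDeltas > sdAccepted ? sdDeltas / sdAccepted
                                               : sdAccepted / sdDeltas;
    return ratio < 10.0;   // e.g. 2.3 vs. 1.5 passes, 1.4 vs. 17.5 does not
}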

106 See Gilli and Winker (2009), pp. 105-106.

The high deviation of 17.5 of the accepted intermediate solutions results from a specific characteristic of optimizing economic capital limits. During the first optimization iterations, the TA-algorithm uses the possibility of increasing the decision variables and economic capital limits respectively until the limits' sum distinctly exceeds the amount of the available economic capital. Since the available economic capital just has to cover the resulting risk of the limit allocations, the algorithm can use this exceedance as long as the resulting solutions' risk levels do not exceed the actually available capital. Finally, this possibility to expand the decision variables is due to diversification effects providing immense solution improvements during the first optimization iterations and thereby increasing the solutions' standard deviation.107 Figure 6.2 and Figure 6.5 disregard these first iterations by displaying exclusively intermediate solutions above 140. Using only these visible solutions for the computation of the standard deviation reduces the deviation from 17.5 to the suitable value of 3.5. A possibility to level the deviations is the use of start solutions already inducing maximum usage of the available investment and economic capital according to Algorithm 5.3. Finally, the rule of thumb should be used carefully. Under no circumstances does the rule replace the proper visual analysis of the TA-algorithm's search behavior. The relevant literature does not provide a universal method for the precise parameterization of TA. Instead, TA's parameterization strongly demands consideration of the characteristics of the algorithm's implementation and use. Figure 6.6 enables a closer examination of the performances of the trinit-pτ combinations considered so far.


Figure 6.6: Empirical distributions of µbank for 60 restarts each using the different trinit-pτ combinations 1 - 5 from Figure 6.2 and Figure 6.5

107 Diversification effects of course also occur in the case of common portfolio optimization. In contrast to economic capital allocation, however, there the sum of the decision variables remains unaffected at 100 %, keeping the solutions' standard deviation on a moderate level. In case of the present model, however, the sum of the decision variables often expands by 200 % to 400 %.

The empirical distributions of the trinit-pτ combinations 3 and 4 emphasize their inferiority. Parameterization 3 generates no solution with µbank > 150 while at least 20 % of the solutions of parameterization 4 achieve this quality level. In contrast, the parameterizations 1, 2 and 5 almost exclusively generate solutions achieving µbank > 150. Their empirical distributions lie very close together, confirming the similarity of their solutions. The analysis cannot identify which of these three parameterizations finally performs best. For the current random number sequence and number of restarts, parameterization 1 provides the best approximation of the actual global optimum. More detailed information requires a distinctly higher number of restarts. However, the important finding arising from the trinit-pτ combinations' empirical distributions is the irrelevance of identifying exactly one particular parameterization for achieving reasonable solutions. Instead, Figure 6.4 illustrates a wider range of promising trinit-pτ combinations by the orange and turquoise area. This enables using also simple and rough parameterization procedures, similar to that introduced at the beginning of the present chapter, while still being able to achieve high quality solutions. Finally, the possibility of applying rather simple parameterization procedures arises from using the exponentially decreasing and randomized transfer value described in chapter 5.4.2. The following excursus emphasizes the advantages and disadvantages of this transfer value method.

6.2.3 Excursus on the impact of the transfer value generation on the parameterization

In order to examine the impact of the transfer value generation on the parameterization of TA, the following introduces two further methods for the generation of this value. The first method dispenses with decreasing the transfer value exponentially across the optimization iterations. As a result, the method exclusively includes the randomization according to the formula

(6.1)

tri = 0.01 + (trinit – 0.01) · u where u~U[0, 1].

The second method even drops the randomization reducing the formula to (6.2)

tri = trinit where trinit ≥ 0.01.
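A minimal C++ sketch of these two alternative generation methods, directly following (6.1) and (6.2), could look as follows; the exponentially decreasing and randomized variant of chapter 5.4.2 is not restated here since its formula is not repeated in this excursus.

#include <random>

// Purely randomized transfer value according to formula (6.1): drawn
// uniformly between the minimum of 0.01 and trinit.
double transferValueRandomized(double trInit, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    return 0.01 + (trInit - 0.01) * u(rng);
}

// Fixed transfer value according to formula (6.2); trinit >= 0.01 is assumed.
double transferValueFixed(double trInit) {
    return trInit;
}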

Each method undergoes a detailed analysis on the basis of 20x20 grid searches testing 400 different trinit-pτ combinations on the basis of 60 restarts according to the analysis from Figure 6.4.


156.5, in the current example by 55.5 %. Tests on the basis of different model cases confirm these benefits of the decreasing transfer value. The current example also suggests the superiority of the decreasing and randomized transfer value concerning the closest approximation of the actual global optimum. More universal statements on this issue, however, would require further investigations. Finally, also the precise parameterization of the exponent exp inducing the transfer value's decrease bears further potential for improving this transfer value method.

6.3 Effective combinations of restarts and steps

6.3.1 Appropriate coverage of the solution space

The parameterization of the number of restarts nrestarts and the number of steps nsteps particularly addresses the TA-algorithm's coverage of the complete solution space. In order to find the optimal coverage, the parameterization has to induce the optimal trade-off between nrestarts and nsteps, both restricted by the available computational resources ncomp in the form of

(6.3)

6.3 Effective combinations of restarts and steps 6.3.1 Appropriate coverage of the solution space The parameterization of the number of restarts nrestarts and the number of steps nsteps particularly addresses the TA-algorithm’s coverage of the complete solution space. In order to find the optimal coverage the parameterization has to induce the optimal trade-off between nrestarts and nsteps both restricted by the available computational resources ncomp in the form of (6.3)

ncomp ≥ nrestarts · niters = nrestarts · nrounds · nsteps.

Identic to the first best parameterization of trinit and pτ also identifying the first best parameterization concerning nrestarts and nsteps actually demands for a simultaneous determination of all TA parameters. As a consequence, the current successive parameterization reduces the probability of approximating the actual global optimum as close as possible for the available computational resources from the start. The following investigates on the best performing nrestarts-nsteps combination under the use of the so far best performing parameters trinit = 300 and pτ = 0.75. Different test combinations for nrestarts and nsteps arise from allocating the available computational resources ncomp among these parameters where (6.4)

 ∆t  ncomp =  wall  .  ∆t iter 

The floor of the division of the available (wall clock) time ∆twall by the runtime of one single iteration ∆titer determines ncomp finally representing the total number of available optimization iterations. The following exemplary parameterization of nrestarts and nsteps assumes (6.5)

 5h  ncomp =   = 1,200k .  0.015 s 

120 Table 6.1 displays exemplary nrestarts-nsteps combinations satisfying the constraint (6.3). Table 6.1: Different potential parameterizations for nrestarts and nsteps using ncomp = 1,200k, nrounds = 10, trinit = 300 and pτ = 0.75 A B C D E

n restarts 15 30 60 120 240

n steps 8k 4k 2k 1k 500

n iters 80k 40k 20k 10k 5k

For further analyses, Figure 6.10 lists the empirical distributions of the solutions of the parameterizations’ restarts. A

B

C

D

E

1 0.8 0.6 0.4 0.2 0 142

146

150

A

B

C

154

D

156.5

158

E

1 0.96 0.92 0.88 0.84 0.8 154

155

156

156.5

157

158

Figure 6.10: Empirical distributions of the restarts' solutions µbank according to the parameterizations from Table 6.1

The partition of the computational resources ncomp according to parameterization E clearly turns out less advantageous. The reduction of the number of restarts in favor of the number of steps, however, distinctly improves the TA-algorithm's solutions µbank. The zoom from the lower diagram of Figure 6.10 reveals that parameterization D even generates one solution µbank exhibiting the highest quality level. Further reductions of the number of restarts in favor of the number of steps induce even further solution improvements according to the parameterizations B and C. Only parameterization A exhibits an overweight in steps, causing a significant decline of the solutions' quality.

As a consequence, the most promising parameterization lies between A and C, at least in order to approximate the actual global optimum as well as possible for the available computational resources and the previous parameterization in the form of trinit = 300 and pτ = 0.75. Further tests confirm that parameterization B is very appropriate in this context. Besides approximating the global optimum as well as possible, the optimization can also pursue the goal of generating solutions satisfying a minimum quality level. The present example uses a minimum quality level of 156.5. In case of parameterization B this objective enables a further reduction of the number of restarts and computational resources respectively. For example, halving the number of restarts also roughly halves the expected number of solutions exceeding 156.5. Depending on the exact application of the TA-algorithm, the corresponding certainty of satisfying the minimum quality level might be sufficient. As a result, the optimization would use only half of the computational resources and runtime.

6.3.2 Particular aspects of parallel computing

A further important aspect of determining effective parameterizations for nrestarts and nsteps refers to the case of parallel computing, changing the formula concerning ncomp according to

(6.6)

n_{comp} = \left\lfloor \frac{\Delta t_{cpu}}{\Delta t_{iter}} \right\rfloor \quad \text{where} \quad \Delta t_{cpu} = \Delta t_{wall} \cdot n_{cpu}.

The available number of cpus ncpu and the available (wall clock) time ∆twall together determine the available cpu time ∆tcpu. The floor of its division by the runtime of one single iteration ∆titer determines ncomp again representing the total number of available iterations. For example in case of the parameterization B the use of ncpu = 30 reduces the runtime of ∆twall = 2.5 h to 5 min. Higher numbers of available cpus and the use of a minimum quality level open up further possibilities to reduce the runtime ∆twall. Using a less appropriate parameterization according to C or D in combination with ncpu = 60 or = 120 reduces the runtime considerably to ∆twall = 2.5 min or = 1.25 min. However, using a less appropriate parameterization also reduces the probability pλ of a single restart exceeding the minimum quality level λ = 156.5. The computation of the probability pλ uses a Bernoulli random variable z indicating whether a restart’s solution exceeds λ according to (6.7)

z = \begin{cases} 1 & \text{for solutions} \geq \lambda \\ 0 & \text{else.} \end{cases}

For example in case of parameterization B, the probability pλ results from

(6.8)

p_{\lambda,B} = p(z = 1) = \frac{4}{30} = 0.1\overline{3}.^{108}

Table 6.2 provides the respective probabilities for the different parameterizations.

Table 6.2: Probability pλ of the different parameterizations' single restarts

        A     B      C      D        E
pλ      0     0.13   0.05   0.0083   0

Probability pλ and the fact that the sum of all z-variables represents a binomial random variable in the form of x ~ B(nrestarts, pλ) enable the determination of probability πλ. Probability πλ describes the likelihood that a particular parameterization generates solutions exceeding the minimum quality level. In case of parameterization B this probability results from

(6.9)

\pi_{\lambda,B} = 1 - (1 - p_{\lambda,B})^{n_{restarts}} = 1 - 0.8\overline{6}^{\,30} = 0.986

(6.10)

\text{where } (1 - p_{\lambda,B})^{n_{restarts}} = p(x = 0).^{109}

Table 6.3 displays each parameterization's probability πλ.

Table 6.3: Probability πλ of the respective parameterization

        A     B       C       D       E
πλ      0     0.986   0.954   0.634   0

In turn, the reverse use of formula (6.9) provides the required number of restarts nrestarts in order to achieve a certain target value for probability πλ. In this case, the smallest positive integer nrestarts satisfying

(6.11)

\pi_\lambda \leq 1 - (1 - p_\lambda)^{n_{restarts}}

represents the required number of restarts. Table 6.4 lists the required numbers of restarts for πλ = 0.99 of each parameterization.

108 See Gilli and Winker (2009), p. 107 for the formulas inspiring this and the following two formulas in context with the parameterization of the number of restarts in case of parallel computing.
109 The corresponding formula from Gilli and Winker (2009), p. 107 incorrectly reads π ≤ (1 – p)^n instead of π ≤ 1 – (1 – p)^n.

Table 6.4: Required number of restarts nrestarts and computational resources ncomp of the respective parameterization for πλ = 0.99

             A    B         C         D         E
nrestarts    0    33        90        551       0
ncomp        0    1,320k    1,800k    5,510k    0

Provided that the number of available cpus does not represent the limiting factor, the execution of 551 restarts under parameterization D using 551 cpus distinctly reduces the runtime to ∆twall = 1.25 min without any losses in solution quality. However, this strategy extremely expands the required computational resources ncomp. In contrast, a moderate approach is the execution of 90 restarts under parameterization C using 90 cpus, just reducing the runtime to ∆twall = 2.5 min. The present example only considers several variants of parameterizations in order to illustrate the key aspects of properly determining nrestarts and nsteps. Finally, the exact parameterization required depends on the particular circumstances of applying the TA-algorithm.
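The probability calculus of (6.7)-(6.11) reduces to a few lines of C++; the sketch below reproduces the values of Table 6.4 when fed with the single-restart probabilities behind Table 6.2 (e.g. pλ = 4/30 for B and pλ = 0.05 for C), which serves as a simple consistency check rather than as the model's actual code.

#include <cmath>

// Formula (6.9): probability that at least one of nRestarts independent
// restarts reaches the minimum quality level lambda.
double probabilityAnySuccess(double pLambda, int nRestarts) {
    return 1.0 - std::pow(1.0 - pLambda, nRestarts);
}

// Reverse use of (6.9) / condition (6.11): smallest number of restarts so
// that the target probability piLambda is reached. Returns 0 if pLambda is
// zero, in which case the target cannot be reached at all (columns A and E).
int requiredRestarts(double pLambda, double piLambda) {
    if (pLambda <= 0.0) return 0;
    return static_cast<int>(std::ceil(std::log(1.0 - piLambda)
                                      / std::log(1.0 - pLambda)));
}

// Consistency check against Table 6.4 for a target of pi_lambda = 0.99:
// requiredRestarts(4.0 / 30.0, 0.99) == 33, requiredRestarts(0.05, 0.99) == 90
// and requiredRestarts(1.0 / 120.0, 0.99) == 551.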

6.4 Concluding remarks on the parameterization for different model cases

Tests show that the modification of the underlying model case in principle requires a re-parameterization of the TA-algorithm. The strength of this requirement depends on the modified model parameter and, of course, the strength of the modification. Potential parameters in this context are those addressing the analysis of economic capital allocation, for example the number of applied business units. Furthermore, modifications with regard to the configuration of the TA-algorithm also cause a requirement for re-parameterization, for example the use of different starting solutions. In the common case, however, the present model dispenses with focusing too much on the re-parameterization in case of model case modifications. The present model neither aims at identifying the best approximation of actual global optima nor at achieving solutions of one particular quality level. Instead, the optimal allocation of economic capital has to prove its superiority compared to alternative allocation methods. As soon as there are indications that the respective parameterization is insufficient for realizing certain analyses successfully, it undergoes the necessary adjustments according to the contents of chapter 6.

7 SUPERIORITY OF OPTIMAL ECONOMIC CAPITAL ALLOCATION – THE INFORMED CENTRAL PLANNER

7.1 Introduction to the benchmarking of allocation methods in case of an informed central planner

The analysis addresses the superiority of optimal economic capital allocation compared to alternative allocation methods. A key aspect of this analysis is to show the effectiveness of an optimal allocation despite the instability of the underlying correlations. In context with the present model's economic capital allocation the correlations are inevitably unstable. This is due to the fact that the capital assignments finally represent decision rights. These decision rights enable the respective addressees to autonomously build long or short investment positions according to their private information within the scope of their assigned rights. The decentralized autonomous decision making guarantees the bank's utilization of its employees' special knowledge. A completely central management of the bank's investment portfolio without the delegation of investment decisions would leave these benefits unused. As a further consequence the bank would not employ any traders at all since the central management would take every investment decision by itself. However, in reality banks employ decentralized decision makers in the form of traders, which obviously represents the more profitable banking approach.110 Against this background, investigating the optimal allocation of economic capital under consideration of unstable correlations appears highly relevant. The present analysis assumes optimal conditions. These optimal conditions consist in an informed central planner exactly knowing the capital addressees' skills and private information respectively. Within the model the different skill levels are represented by different probabilities of success. Additionally, the central planner knows the further behavior of the capital addressees: the addressees exclusively take investment decisions dependent on their individual skills. This excludes, for example, influences potentially arising from observing the investment decisions of their colleagues.

110 See also Burghof and Sinha (2005), p. 50 for this argumentation in context with VAR-limit systems, informational cascades and herding.

The present analysis uses alternative economic capital allocation methods for comparative purposes. The alternative methods represent obvious allocation schemes denoted in the following by "expected return", "uniform" and "random". The first alternative assigns economic capital exclusively to business units with positive return expectations, proportional to the respective expectation. The allocation scheme completely disregards business units exhibiting negative return expectations, leaving them without any capital assignments. In contrast, the second alternative provides the business units with an identical amount of economic capital while the third one allocates the capital completely randomly among the units. However, since the first alternative disregards business units exhibiting negative return expectations, the second and third alternative behave identically in this respect. Otherwise, their lack of competitiveness compared to the first alternative would be considerable from the start. Another means to establish a basic competitiveness among all the different allocation methods, including the optimal allocation, is the scaling of the alternatives' limit allocations according to Algorithm 5.3. The alternatives exclusively replace the first line of the algorithm by their own allocation schemes. The rest of Algorithm 5.3 then establishes the alternative allocations' maximum use of the available investment capital cbank and economic capital ec. Without this scaling the alternatives would not represent serious alternatives to the optimal allocation. The methods also use an identical risk assessment for reasons of competitiveness. Since the optimal allocation assesses risk on the basis of a comprehensive trading simulation concerning the whole model bank, this also applies for the alternative allocation methods. This risk assessment contrasts with the classic risk assessment known from portfolio management, which commonly uses exclusively returns from financial instruments held by the bank in seemingly constant portfolios. The allocation methods could also use this classic risk assessment. In this case, however, there arises the question which portfolio constellation the risk assessment should assume for the respective holding period addressed by the assessment. Within the present model of allocating VAR limits in the sense of decision rights the portfolio constellation obviously changes constantly. As a consequence, in order to reliably prevent the economic capital allocations from regularly violating the confidence level, the classic risk assessment would have to assume the worst-case portfolio constellation by default. The worst case describes the state where all decision makers build positions of the same trading direction. The present analysis, however, disregards the classic risk assessment and exclusively applies the risk assessment on the basis of m trading simulations. The following at first investigates the superiority of the optimal allocation by using an arbitrary model bank. Subsequently, the model bank experiences several modifications in order to enable more precise tests on the superiority of the optimal allocation scheme. To this end, at first a particular capital ratio establishes a level playing field, at least for the optimal allocation and the most promising alternative allocation scheme. Further investigations introduce additional model modifications like limit setting restrictions through minimum limits, less privately informed traders and a higher degree of diversification by increasing the model bank's number of business units nunits. These successive analyses build up on each other and provide a detailed insight concerning the superiority of the optimal allocation scheme in case of an informed central planner.

7.2 Benchmarking of allocation methods in case of an informed central planner

7.2.1 Allocation methods' performances before the background of an arbitrary model bank

The following compares the allocation methods TA, expected return, uniform and random, whereas the terms "TA" and "optimal allocation" represent synonymous expressions. In order to be able to compare the methods properly, the methods' initial conditions have to be identical. The scaling of the alternative limit allocations by Algorithm 5.3 and the identical risk assessment of all allocation methods already contribute substantially to the leveling of the conditions. Additionally, the present analysis uses an identical model bank with ec = 1k and cbank = 150k in order to test the different methods. These values for ec and cbank correspond to a common annual capital ratio of about 10.6 % according to the rough approximation by the square-root-of-time rule

(7.1)

7.2 Benchmarking of allocation methods in case of an informed central planner 7.2.1 Allocation methods’ performances before the background of an arbitrary model bank The following compares the allocation methods TA, expected return, uniform and random whereas the terms “TA” and “optimal allocation” represent synonymous expressions. In order to be able to compare the methods properly, the methods initial conditions have to be identic. The scaling of the alternative limit allocations by Algorithm 5.3 and the identic risk assessment of all allocation methods already contributes substantially to the leveling of the conditions. Additionally, the present analysis uses an identic model bank with ec = 1k and cbank = 150k in order to test the different methods. These values for ec and cbank correspond to a common annual capital ratio of about 10.6 % according to the rough approximation by the square-root-of-time-rule (7.1)

ect ⋅ T 1k ⋅ 252 = ≈ 0.106 . 150k c bank

Furthermore, the model bank consists of nunits = 50 business units featuring an average probability of success of p⌀ = 0.55. The determination of the single probabilities pj follows formula (4.38). Each business unit of the model bank trades in one particular stock whose returns stem from a GBM which in turn uses standard deviations and correlations of historical stock returns. For the historical stock returns the model uses the S&P 500 index from August 9, 2010 to August 5, 2011.111 Finally, this model bank’s configuration represents an arbi111

The model uses the stocks according to the alphabetic order of their company names and refers to the index components included at July 5, 2011. For example an analysis using 50 stocks and business units respectively uses the returns of the first 50 stocks of the index beginning with company names starting with the letter A (see the stocks’ list from Appendix 1). The used stock quotes stem from www.nasdaq.com.

Finally, this model bank's configuration is an arbitrary one, just meeting the demand of being fundamentally appropriate for the present examinations. The model bank then allocates its available economic capital for the upcoming trading day according to each of the different allocation methods. The corresponding m trading simulations and the subsequent "real" trading in the form of an out-of-sample test follow the descriptions of chapter 4. Under the current conditions the allocation methods create the following different limit allocations concerning the upcoming trading day.

Figure 7.1: Limit allocations according to the TA, expected return, uniform and random method using ec = 1k, cbank = 150k, nunits = 50, p⌀ = 0.55 and m = 20k, arranged according to the business units' in-sample expected return

The order of each allocation's single limits follows the business units' individual return expectations during the trading simulation. Therefore, the first limit of each diagram from Figure 7.1 belongs to the unit exhibiting the highest return expectations. The TA method and the expected return method provide this unit with the largest share of the bank's economic capital. However, the TA method provides the two business units featuring the highest return expectations with much higher limits than the expected return method. Additionally, the TA method partly provides units with lower return expectations with higher limits than units with higher return expectations. This becomes obvious by the irregular decrease of the limits' extent, contrasting with the steady decrease in case of the expected return method. The uniform method provides the business units with maximum identical limits. Since every unit features positive return expectations, none of the units remains without any capital assignment. The greyish colouring of the random method's limits characterizes these limits as exemplary ones. This is due to the fact that the random method, in contrast to the remaining methods, generates a new random limit allocation per iteration of the trading simulation. Table 7.1 lists the in-sample results of the model bank for each allocation method using m = 20k simulation iterations.112

112 Note that the data does not assert the claim of being realistic. For example, under p⌀ = 0.55 the return expectations regularly turn out unrealistically high. However, the present model exclusively focusses on the computational correctness and the examination of the fundamental resulting effects without any interest in generating values which are as realistic as possible.

Table 7.1: In-sample results using ec = 1k, cbank = 150k, nunits = 50, p⌀ = 0.55 and m = 20k

                        TA               expected return    uniform            random
µbank / σµbank          512 / 688        370 / 464          222 / 348          223 / 403
E(cbank) / σcbank       149 048 / 352    149 254 / 220      149 393 / 171      149 346 / 204
VARbank / β             1 000 / 0.99     662 / 0.999        566 / 0.999        692 / 0.998
ESbank / σES            1 310 / 281      842 / 213          703 / 146          866 / 179
Σvl / dvl               5 618 / 5.62     5 515 / 8.34       5 227 / 9.23       5 228 / 7.56
E(RORAC): ec / VAR      0.51 / 0.51      0.37 / 0.56        0.22 / 0.39        0.22 / 0.32

According to the in-sample results, the TA method clearly outperforms the remaining methods. The TA method achieves an expected profit of µbank = 512, exceeding the profit expectations of the alternative methods by far.113 However, the expected profit of the TA method at the same time exhibits the strongest deviation of σµbank = 688. Furthermore, the deviation of the uniform method's profit expectation falls below that of the random method. Although not really relevant, this expected finding helps confirm the computational correctness of the model. Concerning the invested capital, the present model bank induces very similar expectations, no matter which allocation method. The corresponding deviations are low and finally irrelevant. In contrast, the allocation methods induce distinctly differing bank-wide risk levels. Only the TA method uses the complete available economic capital according to ec = VARbank = 1k. The alternative allocation schemes leave considerable shares of this important resource unused. While the TA method exactly satisfies the required confidence level according to β = 0.99, the alternative methods induce inefficiently high confidence levels of β > 0.99. The limit allocation of the TA method causes the highest expected shortfall of ESbank = 1,310. However, this appears comprehensible since the alternative methods induce much lower risk levels from the start. The variable Σvl represents the sum of a limit allocation's single limits. Obviously, the corresponding values exceed the respective amounts of the used economic capital in the form of VARbank by far. The diversification factor dvl measures the corresponding diversification according to

113 The TA method performs best for start solutions according to the expected return method. In fact, the TA method also constantly outperforms the alternatives by using random start solutions according to formula (5.12) and Algorithm 5.3. However, the TA method performs even better for start solutions according to the expected return method, in particular for numbers of business units > 100. As a consequence, from now on the TA method uses start solutions on expected return basis by default.

(7.2)

d_{vl} = \frac{\Sigma_{vl}}{VAR_{bank}}.

The enormous diversification levels are due to the analysis's underlying risk assessment. The risk assessment distinctly differs from the classic one. Classic risk assessment known from portfolio management inevitably requires the worst-case assumption concerning the future portfolio constellation if applied to the present model. In this case the diversification levels would turn out much lower. Instead, the current risk assessment uses m trading simulations in order to reliably approximate the bank-wide portfolio constellation during the holding period. Thereby, the current risk assessment anticipates the unstable correlations optimally instead of using the safe but also very inefficient worst-case assumption. Through the m trading simulations the risk assessment is based on the correlations of the business units' returns instead of those of the stocks' returns. In the present case, however, the units' correlations are positive and negative and, in particular, very close to zero. This is due to the fact that the present model case assumes the traders to act exclusively according to their individual skills without considering any other influences. Other influences could e.g. arise from rational reasons for the imitation of the colleagues' trading decisions. During the current analysis the TA method achieves the lowest diversification since the method generates rather unbalanced limit allocations reducing the share of natural diversification effects. Finally, Table 7.1 lists the expected RORAC of the model bank. The model distinguishes between two different RORACs according to

(7.3)

E(RORAC_{ec}) = \frac{\mu_{bank}}{ec}

(7.4)

\text{and} \quad E(RORAC_{VAR}) = \frac{\mu_{bank}}{VAR_{bank}}.
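As a worked application of (7.3) and (7.4) to the figures of Table 7.1 (superscripts mark the method), the two ratios for the TA and the expected return method read:

E(RORAC_{ec})^{TA} = \frac{512}{1\,000} \approx 0.51, \qquad E(RORAC_{VAR})^{TA} = \frac{512}{1\,000} \approx 0.51,

E(RORAC_{ec})^{expected\ return} = \frac{370}{1\,000} = 0.37, \qquad E(RORAC_{VAR})^{expected\ return} = \frac{370}{662} \approx 0.56.

Since the TA allocation exhausts the available capital (VARbank = ec = 1k), its two ratios coincide, whereas the expected return method's unused economic capital drives its two ratios apart.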

In particular the first RORAC represents the relevant measure for comparing the performances of different capital allocation schemes. This is due to the fact that the construction of the first RORAC penalizes low usage rates concerning the available economic capital ec. At best, however, an allocation method achieves maximum values for both RORACs. In the present case the TA method achieves the highest E(RORACec) and the expected return method the highest E(RORACVAR). In order to properly compare the methods' values for E(RORACVAR), however, each method should fully use the available resources. The subsequent chapter, addressing the precise benchmarking on the basis of particular capital ratios, provides a corresponding examination. The histograms of the allocation methods in Figure 7.2 illustrate the frequencies of plbank-realizations on the basis of m = 20k out-of-sample simulations each.


Figure 7.2: Histograms of the allocation methods' out-of-sample results concerning plbank using ec = 1k, cbank = 150k, nunits = 50, p⌀ = 0.55 and m = 20k (skewness ν / kurtosis ω: TA 0.219 / 1.022, expected return 0.345 / 1.266, uniform 0.353 / 1.257, random 0.310 / 1.262)

According to the histograms, the profit expectation of the TA method clearly outperforms those of the remaining methods. Compared to the alternatives, the TA method successfully transfers probability mass from the realizations up to 500 to realizations above 500. This benefit compensates the TA method's slightly fatter left tail, which makes losses up to about -1k more frequent than in case of any other method. Table 7.2 provides the full out-of-sample results of the model bank for each allocation method during the 20k realizations.

Table 7.2: Out-of-sample results using ec = 1k, cbank = 150k, nunits = 50, p⌀ = 0.55 and m = 20k

                        TA               expected return    uniform            random
µbank / σµbank          461 / 610        333 / 412          201 / 311          203 / 360
E(cbank) / σcbank       148 615 / 292    148 882 / 192      148 794 / 153      148 749 / 256
VARbank / β             960 / 0.992      592 / 0.999        493 / 0.9998       625 / 0.999
ESbank / σES            1 217 / 271      772 / 182          632 / 147          773 / 156
Σvl / dvl               5 618 / 5.85     5 515 / 9.32       5 227 / 10.59      5 228 / 8.37
E(RORAC): ec / VAR      0.46 / 0.48      0.33 / 0.56        0.20 / 0.41        0.20 / 0.32

The out-of-sample results confirm the effects already known from the in-sample results. The most relevant findings are the TA method achieving the maximum profit expectation, the alternative methods' low usage rate of the available economic capital and the expected return method achieving the maximum E(RORACVAR). Basically, the results clearly reveal the superiority of the TA method. This superiority arises in particular from the TA method's ability of anticipating the model bank's given resources and the resulting full use of the available investment capital cbank and economic capital ec. In order to achieve full usage of the available economic capital with alternative methods, either the available economic capital ec would have to decrease or the investment capital cbank would have to increase considerably. In case of the expected return method, for example, a decrease of the economic capital to ec = 662 would be necessary.

As a consequence, the capital ratio would inevitably decrease, according to the rough approximation by formula (7.1), from 10.6 % to 7 %. In practice, however, regulatory aspects prevent such adjustments anyway. In contrast, a feasible adjustment is to increase the risk of the respective limit allocations. In case of the expected return method, for example, the allocation method could provide business units featuring higher return expectations with disproportionately larger shares of the economic capital and units featuring lower return expectations with disproportionately smaller ones. The method could redistribute the economic capital accordingly until the limit allocation's risk exactly corresponds to the available amount of economic capital. In contrast to the expected return method, the uniform and the random method do not really offer natural approaches to increase their risk. In their case, redistributing economic capital to e.g. the business units with the highest return expectations destroys the characteristic properties of these methods. The current analysis reveals the limitations of the expected return method concerning the full use of the available economic capital. Comparing the TA and the expected return method precisely with each other, however, demands levels of resource usage of both methods that are as similar as possible. To this end, since the present model does not focus on regulatory aspects, the model does not extend the expected return method by a limit redistribution procedure. Instead, for reasons of simplification, the model reduces the available economic capital of the model bank to ec = 662, representing the maximum risk level of the expected return method.

7.2.2 Precise benchmarking on the basis of particular model settings

7.2.2.1 Implementation of a level playing field

The present chapter aims at establishing a level playing field between the allocation methods' resource usages concerning the economic capital ec and the average invested capital cbank.114 To this end, the investigation reuses the previous model case concerning an arbitrary model bank. Exclusively the available economic capital of the model bank decreases to ec = 662.

114 See also Burghof and Müller (2013b) for the analysis of chapter 7.2.2.1, additionally providing some technical information on typical computations in context with the investigations on optimal economic capital allocation in banking on the basis of decision rights.


Figure 7.3: Limit allocations according to the TA, expected return, uniform and random method using ec = 662, cbank = 150k, nunits = 50, p⌀ = 0.55 and m = 20k, arranged according to the business units' in-sample return expectations

As a result of the ec-adjustment, the limit allocations of the TA and the expected return method in Figure 7.3 exhibit fundamentally similar limit structures. However, compared to the expected return method, the limits of the TA method still feature an irregular decrease from the very successful to the less successful business units. The limit allocations of the uniform and the random method remain unchanged compared to the previous model case.

Table 7.3: In-sample results using ec = 662, cbank = 150k, nunits = 50, p⌀ = 0.55 and m = 20k

                        TA               expected return   uniform           random
µ bank / σ µ bank       396 / 494        370 / 464         222 / 348         212 / 382
E(c bank) / σ c bank    149 157 / 256    149 253 / 220     149 393 / 171     141 923 / 6 522
VAR bank / β            662 / 0.99       662 / 0.99        566 / 0.995       654 / 0.9906
ES bank / σ ES          896 / 225        842 / 213         703 / 146         818 / 173
Σ vl / d vl             5 376 / 8.13     5 515 / 8.34      5 227 / 9.23      4 965 / 7.59
E(RORAC): ec / VAR      0.599 / 0.599    0.560 / 0.560     0.336 / 0.392     0.320 / 0.324

According to the in-sample results from table 7.3, the reduction of the available ec causes the TA and the expected return method to induce the same risk level of VARbank = 662 and also very similar levels of the average invested capital E(cbank). Despite this level playing field, the TA method still induces the highest profit expectation µbank of all allocation methods. However, the previous surplus of 38.4 % compared to the expected return method now declines to 7 %. Under the level playing field the TA method also outperforms the expected return method and the remaining methods in both RORAC categories. The level playing field restricts itself to the most promising methods, the TA and the expected return method. A perfect level playing field can only ever exist between two methods at once. For example, reducing the economic capital to the maximum risk level of the uniform method, ec = VARbank = 566, would establish a perfect level playing field between the TA and the uniform method but at the same time destroy that between the TA and the expected return method. Tests show that under ec = 566 the average invested

capital of the expected return method would decline. As a consequence, establishing a level playing field in the present and upcoming investigations always restricts itself to the most promising allocation methods, the TA and the expected return method. Nevertheless, the uniform and the random method provide valuable information even without a perfect level playing field. Their results allow, for example, for important plausibility checks supporting the verification of the computations' general correctness. The reduction of the economic capital causes the invested capital of the random method to decline by 5.2 % on average and to exhibit a 32 times higher standard deviation. The standard deviation's extreme increase is due to the random method's characteristic of creating a new limit allocation for each of the m simulation iterations. The investigation's reduction of the economic capital causes the feasible amounts of invested capital to deviate relatively strongly across the m simulation iterations.

[Figure 7.4 inset] skewness ν / kurtosis ω: TA 0.336 / 1.224; expected return 0.345 / 1.266; uniform 0.353 / 1.257; random 0.310 / 1.252

Figure 7.4: Histograms of the allocation methods' out-of-sample results concerning plbank using ec = 662, cbank = 150k, nunits = 50, p⌀ = 0.55 and m = 20k

The histograms of the methods' out-of-sample performances from Figure 7.4, each based on 20k realizations, confirm the first impressions from table 7.4. The distributions of the profits and losses of the TA and the expected return method become similar under the current level playing field. However, compared to the expected return method, the TA method redistributes probability mass from the middle of its distribution to the right tail. Also for the current model case, the histograms of the less sophisticated methods distinctly differ from those of the sophisticated methods.
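The distributional quantities reported in the tables and figure insets (µbank, VARbank, ESbank, skewness ν, kurtosis ω) can all be read off the vector of simulated plbank realizations. The following Python sketch uses common empirical estimators; the function name and the placeholder data are assumptions, and the exact conventions behind the reported values, in particular for ν, ω and the realized confidence level β, are not specified in this excerpt and may differ.

import numpy as np
from scipy import stats

def out_of_sample_stats(pl, beta=0.99):
    """Summary statistics for a vector of simulated bank-wide P&L realizations.

    pl   : 1-D array of plbank realizations (profits positive, losses negative)
    beta : confidence level for VaR / ES
    """
    pl = np.asarray(pl, dtype=float)
    losses = -pl                                    # work on the loss side
    var = np.quantile(losses, beta)                 # empirical value-at-risk
    tail = losses[losses > var]
    es = tail.mean() if tail.size else var          # empirical expected shortfall
    return {
        "mu": pl.mean(),                            # expected profit
        "sigma": pl.std(ddof=1),
        "VaR": var,
        "ES": es,
        "coverage": np.mean(losses <= var),         # realized confidence level (one possible reading of β)
        "skewness": stats.skew(pl),                 # ν (convention may differ from the book's)
        "kurtosis": stats.kurtosis(pl, fisher=False),  # ω (convention may differ from the book's)
    }

# Example with purely synthetic placeholder data: 20k simulated realizations
rng = np.random.default_rng(0)
print(out_of_sample_stats(rng.normal(330, 410, size=20_000)))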

Table 7.4: Out-of-sample results using ec = 662, cbank = 150k, nunits = 50, p⌀ = 0.55 and m = 20k

                        TA               expected return   uniform           random
µ bank / σ µ bank       356 / 439        333 / 412         201 / 311         193 / 341
E(c bank) / σ c bank    148 804 / 212    148 880 / 192     148 794 / 153     141 354 / 6 474
VAR bank / β            628 / 0.992      592 / 0.993       493 / 0.997       589 / 0.994
ES bank / σ ES          823 / 194        772 / 182         632 / 147         733 / 152
Σ vl / d vl             5 376 / 8.56     5 515 / 9.32      5 227 / 10.59     4 965 / 8.42
E(RORAC): ec / VAR      0.538 / 0.566    0.504 / 0.563     0.304 / 0.408     0.292 / 0.327

Finally, the complete out-of-sample results from table 7.4 confirm the in-sample results. Despite the level playing field, the TA method still outperforms the profit expectation of the expected return method. However, compared to the previous model case this surplus decreases from 38.4 % to 6.9 %. The TA method also reaches a higher risk level during the out-of-sample testing than its direct competitor, the expected return method: while the TA method causes VARbank = 628, the limit allocation of the expected return method only induces VARbank = 592. Although there is no exact level playing field between the sophisticated and less sophisticated methods in the current investigation, the out-of-sample results clearly confirm the inferiority of the latter.

7.2.2.2 Impact of restrictions through minimum limits

The previous model cases clearly confirm the superiority of the optimal allocation method. In these cases all allocation methods except for the uniform method provide several business units with extremely small limits. The current investigation changes this by introducing minimum limits. Thereby, the investigation examines whether the superiority of the optimal allocation method is only based on the unrealistic possibility of allocating even infinitesimal limits. Business units can be temporarily less successful. Nevertheless, these less successful units need to hold their customers and defend their market shares as long as the bank has not closed them down, and for this they require certain amounts of economic capital. In this point, too, the problem of optimal economic capital allocation differs from that of portfolio selection, at least as long as the allocation of economic capital does not additionally address the question of which business units to keep and which to close down, as in the present approach. The current investigation again reuses the model case of the previous chapter and additionally introduces the minimum limit vlmin = 50. This value is oriented towards the so far observed sums of the limits of around Σvl ≈ 5k; consequently, in case of nunits = 50, minimum limits now account for about half of the limits' sum (nunits · vlmin = 50 · 50 = 2,500). The only further adjustment of the present model case is

the reduction of the economic capital from ec = 662 to ec = 550. This reduction at least establishes a level playing field concerning the resource usage between the TA and the expected return method.


Figure 7.5: Limit allocations according to the TA, expected return, uniform and random method using ec = 550, cbank = 150k, nunits = 50, p⌀ = 0.55, vlmin = 50 and m = 20k, arranged according to the business units' in-sample expected returns

The limit allocations in Figure 7.5 clearly reveal the use of the minimum limits, except for the uniform allocation. Nevertheless, the characteristics of the TA method's limit structure remain basically the same. The method still allocates more economic capital to the most successful business units than the expected return method. Furthermore, compared to the expected return method, it still exhibits irregularly decreasing limit sizes towards the less successful business units at the right edges of the diagrams from Figure 7.5.

Table 7.5: In-sample results using ec = 550, cbank = 150k, nunits = 50, p⌀ = 0.55, vlmin = 50 and m = 20k

                        TA               expected return   uniform           random
µ bank / σ µ bank       328 / 411        294 / 380         216 / 338         197 / 332
E(c bank) / σ c bank    145 610 / 208    145 771 / 181     145 112 / 166     132 222 / 4 181
VAR bank / β            550 / 0.99       550 / 0.99        550 / 0.99        549 / 0.99
ES bank / σ ES          744 / 184        703 / 169         683 / 141         691 / 143
Σ vl / d vl             5 145 / 9.35     5 253 / 9.55      5 077 / 9.23      4 625 / 8.42
E(RORAC): ec / VAR      0.596 / 0.596    0.534 / 0.534     0.392 / 0.392     0.358 / 0.358

According to the in-sample results from table 7.5, the introduction of the minimum limits even enables the TA method to extend its lead in performance over the expected return method and the remaining methods. The TA method's outperformance of the expected return method now lies at 11.6 % compared to 7 % in the previous model case. The TA method also outperforms the rest of the methods concerning both RORACs. For the current model case the resource usage of the uniform method is very similar to that of the TA and the expected return method, incidentally establishing a level playing field among these three methods. Nevertheless, the differences in the perfor-

mance between the sophisticated and less sophisticated methods remain considerable.

[Figure 7.6 inset] skewness ν / kurtosis ω: TA 0.362 / 1.296; expected return 0.385 / 1.331; uniform 0.353 / 1.257; random 0.330 / 1.262

Figure 7.6: Histograms of the allocation methods' out-of-sample results concerning plbank using ec = 550, cbank = 150k, nunits = 50, p⌀ = 0.55, vlmin = 50 and m = 20k

Figure 7.6 shows the out-of-sample results in the form of histograms of plbank using 20k simulations each. The TA method redistributes probability mass from the middle of its distribution to the right tail even more clearly. As a consequence of the level playing field between the sophisticated methods and the uniform method, the histograms show the differences between these methods in a completely unbiased manner.

Table 7.6: Out-of-sample results using ec = 550, cbank = 150k, nunits = 50, p⌀ = 0.55, vlmin = 50 and m = 20k

                        TA               expected return   uniform           random
µ bank / σ µ bank       295 / 366        265 / 338         195 / 302         179 / 297
E(c bank) / σ c bank    145 182 / 168    145 304 / 160     144 531 / 149     131 692 / 4 148
VAR bank / β            531 / 0.991      498 / 0.993       479 / 0.994       496 / 0.994
ES bank / σ ES          683 / 159        630 / 144         614 / 143         621 / 134
Σ vl / d vl             5 145 / 9.69     5 253 / 10.54     5 077 / 10.59     4 625 / 9.32
E(RORAC): ec / VAR      0.536 / 0.555    0.482 / 0.532     0.355 / 0.408     0.326 / 0.361

Also in case of the present analysis, the complete out-of-sample results from table 7.6 confirm those of the in-sample computations. The most important finding is again the superiority of the TA method compared to all other methods. The TA method's surplus in expected profit µbank compared to the expected return method increases under the use of minimum limits from 6.9 % to 11.3 %. The TA method now also clearly outperforms the remaining methods concerning the RORACs. The current model case features perfect comparability between the sophisticated methods and the uniform method. As a consequence, in the present model case the model bank can increase its expected profit by 51.3 % by using the TA instead of the uniform method.

7.2.2.3 Relevance of optimal allocation in case of less privately informed traders

The minimum limits from the previous model case even lead to an extension of the TA method's superiority. The present model case investigates the TA method's superiority on the basis of further modifications inducing more realistic conditions. Obviously, the average probability of success of the traders of p⌀ = 0.55 from the previous model cases is unrealistically high. Throughout the examinations, the terms "probability of success", "private information" and "skills" are used synonymously. A rough approximation of the annual expected return of the model bank for the model case of the previous chapter reveals an unrealistically high value of about 50 %. Therefore, the current investigation addresses the impact of a drastic reduction of the traders' skills from p⌀ = 0.55 to p⌀ = 0.51. In order to reduce the average skill level from p⌀ = 0.55 to p⌀ = 0.51, the present model case transforms each skill level pj from formula (4.38) into pj adjusted according to

(7.5)   pj adjusted = 0.2(pj + 2).
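A quick arithmetic check of (7.5), added here for the reader and not part of the original exposition, confirms the mapping and the stated drop of the average skill level:

0.2(0.5 + 2) = 0.5   and   0.2(1 + 2) = 0.6,   so the interval [0.5, 1] is compressed onto [0.5, 0.6];
E[pj adjusted] = 0.2(E[pj] + 2) = 0.2(0.55 + 2) = 0.51.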

Figure 7.7 illustrates the effect of formula (7.5) on the skill levels’ distribution.


Figure 7.7: Adjustment of the skill levels' distribution from p⌀ = 0.55 to p⌀ = 0.51

Therefore, the current investigation simply reuses the initial PDF from Figure 4.2 on the new interval of pj-values [0.5, 0.6]. The only further adjustment compared to the previous model case is the increase of the model bank's available economic capital from ec = 550 to ec = 901 in order to establish a level playing field between the TA and the expected return method.


Figure 7.8: Limit allocations according to the TA, expected return, uniform and random method using ec = 901, cbank = 150k, nunits = 50, p⌀ = 0.51, vlmin = 50 and m = 20k, arranged according to the business units' in-sample expected returns

According to Figure 7.8, the reduction of the average probability of success does not induce fundamental changes among the limit allocations. The TA method still passes larger shares of the economic capital to the best performing traders than the expected return method. Under the new model case, however, the structure of the uniform allocation reveals business units with negative return expectations.

Table 7.7: In-sample results using ec = 901, cbank = 150k, nunits = 50, p⌀ = 0.51, vlmin = 50 and m = 20k

                        TA               expected return   uniform           random
µ bank / σ µ bank       65.66 / 425      62.77 / 407       45.31 / 347       45.11 / 377
E(c bank) / σ c bank    149 285 / 213    149 285 / 191     149 333 / 176     149 030 / 1 030
VAR bank / β            901 / 0.99       901 / 0.99        788 / 0.995       866 / 0.992
ES bank / σ ES          1 178 / 259      1 125 / 233       961 / 183         1 038 / 176
Σ vl / d vl             5 338 / 5.92     5 413 / 6.01      5 171 / 6.56      5 164 / 5.97
E(RORAC): ec / VAR      0.073 / 0.073    0.070 / 0.070     0.050 / 0.057     0.050 / 0.052

Despite the considerable reduction of the traders' skills, the in-sample results from table 7.7 again confirm the TA method as the superior one. Nevertheless, its surplus in expected profit compared to the expected return method decreases from 11.6 % to 4.6 %. The TA method again also achieves the highest RORACs. In general, the expected profits turn out much lower than in the previous model cases with highly skilled traders. The resource usage of the random method comes very close to that of the sophisticated methods. Nevertheless, this does not change the clear inferiority of the random method compared to the sophisticated ones.

[Figure 7.9 inset] skewness ν / kurtosis ω: TA 0.098 / 0.954; expected return 0.116 / 1.018; uniform 0.107 / 0.930; random 0.151 / 0.915

Figure 7.9: Histograms of the allocation methods' out-of-sample results concerning plbank using ec = 901, cbank = 150k, nunits = 50, p⌀ = 0.51, vlmin = 50 and m = 20k

The out-of-sample distributions of plbank from Figure 7.9 hardly reveal the superiority of the TA method over the expected return method. Only the differences between the sophisticated and less sophisticated methods still become evident, despite the lower skill level of the traders.

Table 7.8: Out-of-sample results using ec = 901, cbank = 150k, nunits = 50, p⌀ = 0.51, vlmin = 50 and m = 20k

                        TA               expected return   uniform           random
µ bank / σ µ bank       58.98 / 379      56.36 / 363       41.75 / 309       43.43 / 339
E(c bank) / σ c bank    148 738 / 192    148 836 / 176     148 714 / 157     148 415 / 1 013
VAR bank / β            857 / 0.992      823 / 0.994       697 / 0.998       762 / 0.996
ES bank / σ ES          1 041 / 196      993 / 188         850 / 172         913 / 152
Σ vl / d vl             5 338 / 6.23     5 413 / 6.57      5 171 / 7.42      5 164 / 6.78
E(RORAC): ec / VAR      0.065 / 0.069    0.063 / 0.068     0.046 / 0.060     0.048 / 0.057

According to the complete out-of-sample results from table 7.8, the TA method nevertheless outperforms the other methods. The TA method's expected profit still exceeds that of the expected return method by 4.7 %, compared to 11.3 % before the reduction of the traders' skills. The RORACs of the TA method also lie slightly above those of the expected return method. The results indicate that the model bank can still considerably increase its expected profit by using the optimal allocation scheme instead of a random one. This can be considered to hold true despite the lack of a perfect level playing field between the sophisticated methods and the random method. The current model case identifies an increase of 35.8 % if the model bank applies the optimal instead of the random allocation method.

7.2.2.4 Influence of higher degrees of diversification in the form of higher numbers of business units

The final investigation for the case of an informed central planner addresses the influence of a higher degree of diversification of the model bank on the superiority of the optimal allocation scheme. The investigations so far use a model bank with nunits = 50 business units and risk drivers respectively. With regard to large banks this surely does not represent an excessively high number, at least not for the conception of the present work, which understands capital addressees rather as smaller units managing smaller portfolios from the credit and market risk sector. As a consequence, the following investigation uses a model bank featuring a higher number of business units. Thereto, the current investigation again basically reuses the model case from the previous chapter. Only the number of business units nunits increases from 50 to 200.115 Furthermore, the higher number of business units requires adjusting the minimum limit from vlmin = 50 to vlmin = 12.5 in order to again assign approximately half of the limits' sum Σvl in the form of minimum limits. Finally, the available economic capital of the model bank decreases from ec = 901 to ec = 417 in order to at least establish a perfect level playing field between the TA and the expected return method.


Figure 7.10: Limit allocations according to the TA, expected return, uniform and random method using ec = 417, cbank = 150k, nunits = 200, p⌀ = 0.51, vlmin = 12.5 and m = 20k, arranged according to the business units' in-sample expected returns

In general, the limit structure of the TA method from Figure 7.10 basically resembles the corresponding structure from the previous model case with nunits = 50. In the current investigation, however, the TA method assigns the economic capital to the different business units in an even more differentiated manner. In particular, the method even assigns the highest limit to a trader who is not the most successful one. As a consequence of this increase in differentiation, the allocations of the TA and the expected return method exhibit fewer similarities. The remaining limit allocations exhibit the expected structures.

115 The increase of the number of business units demands an adjustment of the IS factor from ζbank = 1.11 to ζbank = 1.13. For IS issues see chapter 4.4.

Table 7.9: In-sample results using ec = 417, cbank = 150k, nunits = 200, p⌀ = 0.51, vlmin = 12.5 and m = 20k

                        TA               expected return   uniform           random
µ bank / σ µ bank       60.14 / 220      54.05 / 201       41.98 / 184       40.43 / 193
E(c bank) / σ c bank    149 547 / 103    149 651 / 87      149 638 / 89      143 225 / 2 340
VAR bank / β            417 / 0.99       417 / 0.99        398 / 0.992       417 / 0.9902
ES bank / σ ES          547 / 114        511 / 85          478 / 71          498 / 79
Σ vl / d vl             5 433 / 13.02    5 477 / 13.12     5 289 / 13.30     5 062 / 12.15
E(RORAC): ec / VAR      0.144 / 0.144    0.130 / 0.130     0.101 / 0.106     0.097 / 0.097

According to the in-sample results from table 7.9, the more highly diversified model bank distinctly increases the superiority of the optimal allocation scheme. The TA method's surplus in expected profit compared to the expected return method increases from 4.6 % to 11.1 %. The optimal allocation scheme also provides the highest RORACs and distinctly outperforms the alternative allocation schemes.

[Figure 7.11 inset] skewness ν / kurtosis ω: TA 0.139 / 1.130; expected return 0.150 / 1.097; uniform 0.133 / 1.094; random 0.094 / 1.121

Figure 7.11: Histograms of the allocation methods' out-of-sample results concerning plbank using ec = 417, cbank = 150k, nunits = 200, p⌀ = 0.51, vlmin = 12.5 and m = 20k

The out-of-sample histograms from Figure 7.11 fully reveal the balancing effect of the higher number of business units. Most of the histograms' 20k realizations now lie between -500 and 500 instead of -1,000 and 1,000. In general, the histograms are very similar for the case of nunits = 200. Nevertheless, the distribution of the optimal allocation scheme still clearly exhibits the strongest reallocation of probability mass from its middle to its right tail.

Table 7.10: Out-of-sample results using ec = 417, cbank = 150k, nunits = 200, p⌀ = 0.51, vlmin = 12.5 and m = 20k

                        TA               expected return   uniform           random
µ bank / σ µ bank       50.54 / 192      45.83 / 176       36.25 / 160       34.95 / 168
E(c bank) / σ c bank    149 422 / 90     149 568 / 83      149 517 / 89      143 108 / 2 325
VAR bank / β            412 / 0.9904     377 / 0.993       354 / 0.996       376 / 0.993
ES bank / σ ES          510 / 92         465 / 82          433 / 79          467 / 89
Σ vl / d vl             5 433 / 13.18    5 477 / 14.52     5 289 / 14.94     5 062 / 13.46
E(RORAC): ec / VAR      0.121 / 0.123    0.110 / 0.122     0.087 / 0.102     0.084 / 0.093

Also in case of a more highly diversified model bank, the complete out-of-sample results from table 7.10 confirm those of the in-sample computations. For the current model case, the superiority of the TA method increases again: its surplus in expected profit compared to the expected return method rises from 4.7 % to 10.28 %. The TA method also achieves the highest RORACs, although its VAR-based RORAC only slightly outperforms that of the expected return method. Finally, the relevance of an optimal economic capital allocation increases for an increasingly diversified model bank. In the present model case, the model bank can increase its expected profit by 39.4 % or 44.6 % by using an optimal instead of a uniform or random allocation.

7.3 Discussion of the superiority of optimal economic capital allocation

The investigations on the basis of an informed central planner confirm the optimal allocation method of economic capital as the superior one. The method manages to maximize the expected profits under consideration of portfolio theoretical aspects and at the same time properly takes into account the delegation of the final investment decisions to the decentralized autonomous decision makers. The order of the successive model cases of the present investigation, and the cases themselves, however, are subjective. For example, the cases only consider one kind of minimum limit without examining a whole range of minimum limit variations. Despite this subjectivity, the resulting order and selection of model cases appears appropriate for examining the basic characteristics of the different allocation schemes. The model cases aim at successively creating increasingly realistic settings. From this point of view, the findings from the final model case of chapter 7.2.2.4 represent the most relevant ones. The corresponding findings suggest the superiority of the optimal allocation

scheme even more than the findings of most of the previous model cases featuring less realistic settings. In contrast to these necessary subjectivities, the investigation exclusively considers objective alternative methods of economic capital allocation in order to compare them with the optimal one. The restriction to these very fundamental allocation methods is due to the fact that additionally considering many other alternatives from the ultimately infinite set of subjective allocation methods appears ineffective. Nevertheless, as a consequence of this restriction, the investigation cannot confirm a universal superiority of the optimal allocation scheme. At the same time, the investigation's findings also do not suggest that there are any other allocation methods potentially outperforming the optimal allocation scheme. Improving the underlying algorithm for global optimization very likely represents the only possibility to create an alternative allocation method capable of outperforming the current optimal one. Admittedly, so far the optimal allocation's superiority could also be exclusively due to the use of optimal conditions. These optimal conditions above all consist in the use of an informed central planner knowing the probabilities of success of the different potential capital addressees. Additionally, the model assumes the decision makers to exclusively follow their individual skills when building investment positions. They do not orientate themselves by their colleagues' decisions or a general market trend. As a consequence, the correlations between the business units' returns take values very close to zero. Nevertheless, these rather monotonous correlation values do not necessarily support the superiority of the optimal allocation method very much. On the contrary, in situations with stronger correlations of the trading decisions the optimal allocation method might become even more advantageous and therefore cause even further increases of its superiority, provided a satisfyingly informed central planner. Another key assumption of the model so far consists in the unconditional full use of the economic capital assignments by the traders. The full limit usage finally enables the transmission of the central planner's business strategy through the limit allocation with as little friction as possible. Without full limit usage the control of the central planner would be much less precise and effective. The assumption of full limit usage will be kept in the upcoming model extensions, but issues concerning full limit usage represent an interesting field for further investigations. The upcoming model extension restricts itself at first to replacing the informed central planner by an uninformed one. Subsequently, a further extension additionally introduces decision makers who no longer act completely autonomously but imitate the trading decisions of their colleagues as long as this behavior appears beneficial to them from a rational viewpoint.

Therefore, the extensions investigate the superiority of the optimal economic capital allocation by further enriching the model with specific, more realistic settings.

8 UNINFORMED CENTRAL PLANNER – INFORMATION ON THE BASIS OF BAYESIAN LEARNING

8.1 Introduction to the case of an uninformed central planner

The model so far constantly assumes a central management of the model bank in the sense of an omniscient central planner. This planner knows the probability of success pj of each single business unit exactly. Against this background, the planner can simulate the business activity of the model bank very precisely, leading to very precise risk assessments and profit forecasts. Under these circumstances, the previous chapter finds a clear superiority of optimal economic capital allocation compared to the alternative allocation methods. In reality, however, the probabilities of success and skills respectively of the business units are more likely to be at first completely unknown to the central management. The current chapter investigates the relevance of optimal economic capital allocation in case of a central planner who does not know the business units' skills exactly. The superiority of highly sophisticated allocation methods might not be worth their high effort if they are executed on the basis of less reliable information. The central planner has different strategies available to respond to the new situation of being uninformed. The safest but also most inefficient method consists in assessing risk by assuming by default that all trading positions share the same trading direction. Consequently, the risk assessment constantly assumes an only-long or only-short portfolio of the bank-wide investments, generally overestimating risk considerably. Furthermore, for the profit forecast, the uninformed central planner can assume identically skilled business units featuring identical probabilities of success of, e.g., pj = 0.51. However, already the restrictive risk assessment distinctly reduces the expected profit, arising from the considerable reduction of the risk taking capabilities of the model bank. A less restrictive approach is to assess risk on the basis of a trading simulation assuming pj = 0.5 for each business unit. Thereby, assuming the model's worst case concerning the traders' skills pj replaces the extreme worst case assumption of identical trading directions. As a result, the risk taking capabil-

ities of the model bank almost reach the level occurring in context with an informed central planner. The profit forecasting keeps using identical pj-values of, e.g., pj = 0.51. This less restrictive approach induces limit allocations exhibiting much higher expected profits than assuming identical trading directions by default. Nevertheless, under this approach the optimal allocation method and also the expected return method still miss the important indications concerning the different skill levels of the business units. Therefore, the present investigation goes a step further. The uninformed central planner does not merely assume the worst case in the form of setting pj = 0.5 by default. Instead, the planner rationally learns about the units' skills and thereby successively learns to perform high quality trading simulations. The estimated probabilities of success pje enter the risk assessment as well as the profit forecasts of the model bank. The model implements this planner's rational learning on the basis of Bayesian updating, which the following chapter describes in depth.

8.2 Description of the Bayesian learning algorithm

The description of the Bayesian learning algorithm at first refers to the fundamental determination of the skill levels in the basic model from chapter 4.3.3. The basic model uses an average skill level of p⌀ = 0.55. In order to estimate the single skill levels pje of the business units, the central planner discretizes the skill levels' distribution by distinguishing between K = 1,000 types of skill levels and trader types respectively. The following uses the terms "business unit" and "trader" synonymously. Hence, the planner divides the interval [0.5, 1] of all possible p-values into K equal segments ∆p by using

8.2 Description of the Bayesian learning algorithm The description of the Bayesian learning algorithm at first refers to the fundamental determination of the skill levels in the basic model from chapter 4.3.3. The basic model uses an average skill level of p⌀ = 0.55. In order to estimate the single skill levels pje of the business units, the central planner discretizes these skill levels’ distribution by distinguishing between K = 1,000 types of skill levels and trader types respectively. The following uses the terms “business unit” and “trader” synonymously. Hence, the planner divides the interval [0.5, 1] of all possible p values into K equal segments ∆p by using (8.1)

(8.1)   ∆p = 0.0005.

This discretization enables the planner to calculate prior probabilities concerning the occurrences of the particular types of traders via formula (4.37), the cumulative distribution function (CDF). Through customization of the CDF, these probabilities of occurrence of the K trader types result from

(8.2)   θk = [1 − (1 − k/1,000)^9] − [1 − (1 − (k−1)/1,000)^9]   where k = 1, ..., K.116

The centers of the intervals of length ∆p represent the trader types’ particular probabilities of success according to

116 See Appendix 6 for the transformation.

(8.3)   pk = k/2,000 + 1,999/4,000   where k = 1, ..., K.117

The graph below illustrates the result of the central planner’s discretization of the model world’s skill levels by trader types.118


Figure 8.1: Discretization of the model world's skill levels by trader types

The planner knows the occurrences θk of the trader types, which serve as prior probabilities at the start of the Bayesian updating process. Furthermore, the planner knows the types' probabilities of success pk, where P and L denote "profit" and "loss". As a consequence, the model assumes the central planner to know the actual occurrences of skill levels throughout the model world exactly. In reality, by contrast, prior probabilities commonly result from expert knowledge and educated guessing and hence do not exactly represent the actual probabilities. Now the central planner can estimate the individual skill levels of the model bank's traders. In a first step, the planner calculates for each trader the probability of belonging to the respective trader types. Thereto, the planner uses the information of whether a trader achieves a profit or a loss during one trading day. The proper categorization of the traders requires a certain number of such inferences, denoted by nupdates. By the law of large numbers, the estimation inevitably converges to the skill level which is closest to the business unit's actual one for nupdates → ∞. This even holds true for prior probabilities deviating from the

117 See Appendix 7 for the transformation.
118 See Sharpe (1990), who uses Bayesian updating in a completely different context. Nevertheless, Sharpe (1990) gave the idea for the modeling of a Bayesian learning central planner within the present model.

actual probabilities as long as the prior probabilities' distribution satisfies the very basic demands of the particular scenario of application. However, the stronger this deviation, the more inferences the Bayesian updating requires in order to generate useful results. In order to estimate the business units' probabilities of success pje, the central planner assigns one probability structure according to Figure 8.1 to each single business unit. The parameters of these assigned structures then undergo updates according to the trading results of the respective unit j as soon as new information about the unit's trading result arrives. Each update of the structure's parameters at the same time updates the estimator pje, whose calculation is based on these parameters. The following gives a detailed example of the processes described so far by picking one trader j who generates a profit P. In consequence of the profit, the central planner adjusts every occurrence parameter θk in the trader's probability structure according to the formula

(8.4)   θk → prob(k|P)k = θk pk / Σ_{k=1}^{K} θk pk

based on Bayes' theorem. If the trader causes a loss L, the formula changes to

(8.5)   θk → prob(k|L)k = θk (1 − pk) / Σ_{k=1}^{K} θk (1 − pk) .

The formulas yield the picked trader's probability of belonging to type k under the condition that the trader generated a profit or a loss. A follow-on update always uses the resulting occurrences of the previous update; therefore, the follow-on update uses prob(k|P)k and prob(k|L)k instead of θk. The focus is again on the picked trader j generating a profit P. In order to calculate his estimated skill level pje, the central planner takes the sum of all products between the types' skill levels and their updated occurrences according to

(8.6)   pje(P) = Σ_{k=1}^{K} pk prob(k|P)k = Σ_{k=1}^{K} pk · θk pk / Σ_{k=1}^{K} θk pk .

If the trader causes a loss L, the central planner estimates the skill level pje by

(8.7)   pje(L) = Σ_{k=1}^{K} pk prob(k|L)k = Σ_{k=1}^{K} pk · θk (1 − pk) / Σ_{k=1}^{K} θk (1 − pk) .
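The discretization (8.1)-(8.3) and the updating rules (8.4)-(8.7) translate directly into a few lines of code. The Python sketch below is illustrative only: all names are chosen here, the Bernoulli profit draw merely stands in for the model's trading simulation, and the prior follows the reconstructed form of (8.2) above.

import numpy as np

K = 1_000
k = np.arange(1, K + 1)

# (8.3): type k's probability of success = center of the k-th segment of width 0.0005 on [0.5, 1]
p_types = k / 2_000 + 1_999 / 4_000

# (8.2): prior occurrences as CDF differences over the segment edges (they sum to one)
theta_prior = (1 - (k - 1) / 1_000) ** 9 - (1 - k / 1_000) ** 9

def bayes_update(theta, profit):
    """One inference (8.4)/(8.5): re-weight the trader's type probabilities after a P or L."""
    likelihood = p_types if profit else 1.0 - p_types
    posterior = theta * likelihood
    return posterior / posterior.sum()

def estimated_skill(theta):
    """Estimator (8.6)/(8.7): probability-weighted average of the type skill levels."""
    return float(p_types @ theta)

# Illustrative run only: profits are drawn as Bernoulli(p_true), standing in for the
# model's trading simulation; the type grid here is the basic one for p_avg = 0.55
# (for the p_avg = 0.51 case the types would first be transformed via (8.8) below).
rng = np.random.default_rng(1)
p_true = 0.544
theta = theta_prior.copy()
estimates = []
for _ in range(20_000):
    theta = bayes_update(theta, profit=rng.random() < p_true)
    estimates.append(estimated_skill(theta))
print(estimates[249], estimates[999], estimates[-1])  # p_j^e after 250, 1k and 20k updates

Because the weights are renormalized after every inference, the running estimator is simply the posterior mean over the K discrete trader types; with enough inferences it settles near the grid point closest to the trader's actual skill, in line with the law-of-large-numbers argument above.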

Figure 8.2 provides exemplary results of a Bayesian updating process on the basis of nupdates = 20k inferences and three business units featuring distinctly different skill levels pj.


Figure 8.2: Exemplary pje-values of nupdates = 20k successive Bayesian updates (zoom on the first 2.5k pje-values on the right) for business units actually exhibiting p1 = 0.544 (petrol), p2 = 0.521 (violet) and p3 = 0.5 (pink)

The exemplary results from Figure 8.2 reveal a strong bias of the estimators pje relative to the actual skill levels of p1 = 0.544 (petrol), p2 = 0.521 (violet) and p3 = 0.5 (pink) during the first 2.5k updates. From then on, the estimators at least reflect the correct ranking among the actual skill levels. This ranking represents the most important information for sophisticated economic capital allocation, where the final precision of the single estimators is less decisive. Nevertheless, an increasing precision of the single estimators also potentially causes further increases in the quality of the resulting limit allocations. This quality consists in the reliability of the limits' risk restricting properties and at the same time in the limits' ability to induce bank-wide maximum profit expectations. In contrast to the basic model, the example from Figure 8.2 uses the more realistic lower average skill level of p⌀ = 0.51. Thereto, each of the 1, …, k, …, K trader types pk resulting from formula (8.3) has to be transformed into pk adjusted according to

(8.8)   pk adjusted = 0.2(pk + 2).

Figure 8.3 illustrates the influence of formula (8.8) on the distribution of the different trader types. For reasons of simplification the illustration disregards the actual discreteness of these types.


Figure 8.3: Illustration of the influence of formula (8.8) on the distribution of the trader types pk

Therefore, the adjustment finally consists in just reusing the initially introduced PDF from Figure 4.2 on the new interval of pk-values [0.5, 0.6]. As a consequence of this reuse, all other parameters and formulas in context with the Bayesian updating algorithm can remain unmodified. In the following investigation, lower numbers of inferences between 1 and 2.5k are the most relevant ones. In terms of the current modeling, 2.5k inferences require almost 10 years of trading; higher numbers of inferences therefore become increasingly implausible. Another important point is the fact that the current modeling always assumes a just-founded model bank. The estimations concerning the business units' skill levels all start at once. A more plausible scenario would include different starting times per unit, generating additional or at least different bias. The different starting times would reflect the realistic scenario of a bank consisting of already established and newly launched business units and also of fluctuation among the employees. However, in order to keep the modeling simple, the following investigations are based on the idea of a just-founded model bank. This simplified modeling fully satisfies the requirements of the present, rather fundamental investigation on sophisticated economic capital allocation in case of an uninformed central planner. The Bayesian updating algorithm equips the uninformed central planner with a rational and objective procedure for estimating the potential economic capital addressees' characteristics. Only the central availability of these characteristics enables any control of the business activity of the model bank in a strictly portfolio theoretical sense. The Bayesian algorithm used here completely dispenses with agency theoretical elements. Such elements, for example truth telling mechanisms, incentive functions and solving by Nash equilibrium, often distract attention from crucial portfolio theoretical issues. Furthermore, the implementation of agency theoretical allocation schemes in practice appears at least as difficult as

that of the present modeling using Bayesian updating. The following therefore investigates the fundamental usefulness of optimal economic capital allocation in case of an uninformed, Bayesian learning central planner.

8.3 Bayesian learning central planner in case of independently acting decision makers

8.3.1 Benchmarking of allocation methods using perfect prior probabilities

The benchmarking in case of an uninformed central planner includes the already known allocation methods from chapter 7: the optimal / TA, the expected return, the uniform and the random method. The present investigation reuses the model bank from chapter 7.2.2.4 and therefore the parameters cbank = 150k, nunits = 200, p⌀ = 0.51, vlmin = 12.5 and m = 20k. Only the economic capital changes slightly to ec ≈ 400 in order to establish a level playing field between the TA method and the expected return method for the present investigation. The new variable nupdates represents the decisive independent variable, denoting the number of updates and inferences respectively during the central planner's process of Bayesian learning. At first, the following compares three distinctly different numbers of updates, nupdates = 250, = 1k and = 20k, in order to demonstrate the impact of the independent variable on the limit allocations' properties. Subsequently, a whole sequence of tests using between nupdates = 1 and = 2k analyses this impact in detail for the more relevant lower values of nupdates. The stock returns during any trading simulations again stem from the GBM and refer to the historical returns of the S&P 500 index from August 9, 2010 to August 5, 2011. The simulations use the stocks' data and the order of stocks already known from the previous investigations for the case of an informed central planner. Furthermore, the investigation's Bayesian updating algorithm uses perfect prior probabilities and therefore priors which exactly correspond to the actual occurrences of the different trader types in the model world. On the basis of such optimal conditions the optimal allocation scheme is again expected to outperform the alternative allocation schemes, despite an uninformed Bayesian learning central planner. The current investigation also introduces a further difference between the in- and out-of-sample computations and hence between the generation of the limit allocations and their backtesting. The 1, …, i, …, m in-sample trading simulations for

the generation of the 1, …, j, …, n limits necessarily use the business units' estimated probabilities of success pje. In contrast, the subsequent m backtestings of the limits use the units' actual probabilities of success pj. The estimated probabilities pje themselves result from a separate and independent computation. This separate simulation of the respective number of trades simply produces the required number of Bayesian inferences and updates nupdates in order to be able to compute pje properly. In a first step, the investigation analyses the limit allocation structures of the different allocation methods in order to find out how the structures change for an increasing number of updates nupdates. To be able to illustrate these changes, the single limits have to be arranged according to the actual expected returns of the business units and hence according to their out-of-sample expected returns. This arrangement enables observing whether increases in nupdates indeed induce improvements of the resulting limit allocations. One clear indication of such improvements is a disproportionately strong consideration of successful business units by the allocation schemes. Each of the single diagrams below arranges the limits according to the expected returns of the business units, starting with the most successful unit.

[Figure 8.4 panels] rows: nupdates = 250, = 1k and = 20k; columns: TA, expected return, uniform, random

Figure 8.4: Limit allocations using ec ≈ 400, cbank = 150k, nunits = 200, p⌀ = 0.51, vlmin = 12.5 and m = 20k, arranged according to the business units' out-of-sample (actual) expected returns

For nupdates = 250, only the limit allocations of the TA and the expected return method in the first row of Figure 8.4 reveal a weak trend of providing the successful business units with more economic capital than the less successful units.

The red horizontal linear trend lines visualize this trend's extent for each diagram. In case of nupdates = 250 the estimations pje are still very imprecise. This can be observed, for example, in the limit structure of the uniform allocation, which exhibits roughly the same number of minimum limits among the successful and less successful business units. The random limit allocation's greyish exemplary limit structure even passes more economic capital to the less successful units. The uniform and the random method also provide business units exhibiting, e.g., negative expected returns with the minimum limit vlmin. As a consequence, for higher numbers of inferences nupdates these methods, too, should pass slightly more economic capital to the successful units. The increase to nupdates = 1k clearly improves the limit allocations, indicated by the increasing slope of the trend lines. Indeed, the uniform and random limit allocations also exhibit a slight overweight in economic capital assignments among the successful business units. The further extreme increase of the number of updates to nupdates = 20k yields further improvements. Nevertheless, even under nupdates = 20k the allocation methods obviously fail to provide exclusively the worst business units with minimum limits at the very right ends of the diagrams. However, this fact is due to natural bias arising from the use of different random numbers during the in- and out-of-sample computations, causing differences between the in- and out-of-sample expected returns of the business units. In fact, the limit allocations of the third row of Figure 8.4 very much correspond to those from chapter 7.2.2.4 under an informed central planner. Figure 7.10 just arranges the single limits according to the in-sample expected profits, causing the neat forms of the limit allocations. As a consequence, further increases in nupdates cannot be expected to yield much further improvement of the limit allocations. The analysis of the limit allocations so far reveals that the successful business units receive relatively larger shares of economic capital than less successful units in case of high numbers of updates. This, however, does not yet answer the question of whether the limit allocations of the TA method indeed induce superior profit expectations compared to the alternative methods. The following table 8.1 addresses this question by first providing detailed information on the results of the in-sample computations concerning the three different numbers of updates nupdates.

Table 8.1: In-sample results for nupdates = 250 (upper), = 1k (middle) and = 20k (lower) using ec ≈ 400, cbank = 150k, nunits = 200, p⌀ = 0.51, vlmin = 12.5 and m = 20k

nupdates = 250, ec = 398:
                               TA               expected return   uniform           random
µ bank / σ µ bank              47.05 / 203      43.57 / 190       38.58 / 180       36.15 / 185
E(c bank) / σ c bank           149 621 / 91     149 625 / 85      149 615 / 90      140 757 / 2 234
VAR bank / β                   398 / 0.99       398 / 0.99        385 / 0.992       398 / 0.99
ES bank / σ ES                 531 / 99         496 / 80          470 / 73          487 / 80
Σ vl / d vl                    5 427 / 13.62    5 440 / 13.65     5 280 / 13.71     4 968 / 12.47
E(RORAC ec) / E(RORAC VAR)     0.118 / 0.118    0.109 / 0.109     0.097 / 0.1002    0.091 / 0.091

nupdates = 1k, ec = 399:
                               TA               expected return   uniform           random
µ bank / σ µ bank              50.72 / 207      46.55 / 193       39.50 / 181       36.32 / 186
E(c bank) / σ c bank           149 265 / 94     149 600 / 86      149 652 / 86      141 084 / 2 251
VAR bank / β                   399 / 0.99       399 / 0.99        384 / 0.992       398 / 0.99
ES bank / σ ES                 520 / 106        497 / 84          467 / 74          493 / 93
Σ vl / d vl                    5 447 / 13.66    5 458 / 13.69     5 311 / 13.84     5 006 / 12.58
E(RORAC ec) / E(RORAC VAR)     0.127 / 0.127    0.117 / 0.117     0.099 / 0.103     0.0911 / 0.0913

nupdates = 20k, ec = 425:
                               TA               expected return   uniform           random
µ bank / σ µ bank              55.48 / 221      49.92 / 202       37.92 / 183       36.33 / 193
E(c bank) / σ c bank           149 255 / 101    149 664 / 86      149 657 / 89      144 939 / 2 360
VAR bank / β                   425 / 0.99       425 / 0.99        398 / 0.993       419 / 0.9907
ES bank / σ ES                 562 / 115        521 / 89          482 / 77          510 / 91
Σ vl / d vl                    5 462 / 12.86    5 470 / 12.88     5 250 / 13.21     5 086 / 12.13
E(RORAC ec) / E(RORAC VAR)     0.131 / 0.131    0.117 / 0.117     0.089 / 0.095     0.086 / 0.087

The TA method outperforms the alternative methods for each of the different nupdates-values. This finding is very plausible since the basic conditions concerning the in-sample computations are still the same as in the previous chapters. Depending on the respective nupdates-value, an adjusted value for ec ensures that the TA and the expected return method induce exactly the same risk values VARbank. The risk level follows the maximum risk level the expected return method can realize under cbank = 150k. Thereby, the sophisticated methods also exhibit very much the same values for the expected invested capital E(cbank). In this point, however, the expected return method even slightly exceeds the values of the TA method for each of the different nupdates-values. Nevertheless, the values create an appropriate level playing field for benchmarking, at least between the most important sophisticated allocation methods. The remaining allocation methods also exhibit relatively similar values for VARbank and E(cbank). As a consequence, the µbank-values of these methods adequately indicate the lower bound of the current model case's expected profits. The remaining parameters' results in table 8.1 appear plausible and therefore additionally confirm the correctness of the computations.

Comparing the current in-sample results with those from table 7.9 reveals the impact of using only estimated skill levels pje: in case of an uninformed planner and nupdates = 20k, the TA method, for example, only achieves µbank = 55.48 instead of µbank = 60.14 with an informed central planner. Consequently, even very high numbers of updates cannot ensure results as good as in case of an informed central planner. However, the current analyses refrain from testing nupdates-values > 20k. Tests confirm that even for highly precise estimations of pje differences compared to an informed central planner will remain, at least unless the number of trader types K used by the Bayesian updating algorithm tends to infinity, K → ∞, completely eliminating the bias caused by the discretization of the trader types' distribution. After establishing an appropriate level playing field between the TA and the expected return method during the in-sample computations, the following table 8.2 now provides the allocation methods' out-of-sample results.

Table 8.2: Out-of-sample results for nupdates = 250 (upper), = 1k (middle) and = 20k (lower) using ec ≈ 400, cbank = 150k, nunits = 200, p⌀ = 0.51, vlmin = 12.5 and m = 20k

nupdates = 250, ec = 398:
                               TA               expected return   uniform           random
µ bank / σ µ bank              36.69 / 179      36.15 / 167       33.84 / 157       31.83 / 161
E(c bank) / σ c bank           149 521 / 93     149 517 / 84      149 462 / 87      140 613 / 2 224
VAR bank / β                   403 / 0.9896     364 / 0.993       349 / 0.9948      356 / 0.9935
ES bank / σ ES                 493 / 98         455 / 86          428 / 77          446 / 90
Σ vl / d vl                    5 427 / 13.48    5 440 / 14.94     5 280 / 15.13     4 968 / 13.97
E(RORAC ec) / E(RORAC VAR)     0.092 / 0.091    0.091 / 0.099     0.085 / 0.097     0.0799 / 0.0895

nupdates = 1k, ec = 399:
                               TA               expected return   uniform           random
µ bank / σ µ bank              40.46 / 182      38.47 / 169       34.20 / 158       32.20 / 162
E(c bank) / σ c bank           149 062 / 85     149 478 / 84      149 515 / 88      140 955 / 2 239
VAR bank / β                   407 / 0.9893     375 / 0.9924      350 / 0.9944      360 / 0.9937
ES bank / σ ES                 500 / 92         458 / 84          431 / 77          442 / 76
Σ vl / d vl                    5 447 / 13.40    5 458 / 14.57     5 311 / 15.19     5 006 / 13.92
E(RORAC ec) / E(RORAC VAR)     0.101 / 0.0995   0.096 / 0.1027    0.086 / 0.0978    0.081 / 0.089

nupdates = 20k, ec = 425:
                               TA               expected return   uniform           random
µ bank / σ µ bank              50.34 / 194      45.60 / 177       35.85 / 160       34.13 / 170
E(c bank) / σ c bank           149 157 / 85     149 554 / 81      149 494 / 89      144 782 / 2 343
VAR bank / β                   417 / 0.9907     386 / 0.9931      353 / 0.9958      382 / 0.9942
ES bank / σ ES                 519 / 103        474 / 86          436 / 78          469 / 91
Σ vl / d vl                    5 462 / 13.10    5 470 / 14.17     5 250 / 14.87     5 086 / 13.31
E(RORAC ec) / E(RORAC VAR)     0.118 / 0.1207   0.107 / 0.1181    0.084 / 0.1016    0.0803 / 0.0893

The most important out-of-sample finding is the confirmation of the TA method's superiority. Also in case of an uninformed Bayesian learning central planner, the optimal allocation scheme represents the most appropriate method.

The TA method achieves the maximum expected profit µbank for each different number of updates. The TA method's superiority in expected profit µbank compared to the expected return method increases according to 1.5 %, 5.2 % and 10.4 % for the increasing numbers of updates. An interesting fact is the inferiority of the TA method concerning E(RORACVAR) for lower nupdates. E(RORACVAR) measures the rate of the expected profit µbank per actually used economic capital as measured by VARbank. Ultimately, however, E(RORACec), based on the complete available economic capital, represents the much more relevant parameter for measuring the efficiency of economic capital allocation. Nevertheless, the TA method's loss in superiority concerning E(RORACVAR) once more emphasizes the costs of imprecise information. A crucial negative aspect, however, is the fact that the TA method violates the confidence level β for nupdates = 250 and nupdates = 1k. Obviously, the imprecise estimations pje in case of the lower numbers of updates cause the TA method's limit allocations to underestimate the actual risk level. Although the inappropriate pje-values cause violations exclusively in case of the TA method, this finally indicates a general risk underestimation by all allocation methods. Moreover, it is precisely the lower numbers of updates exhibiting the confidence level violations that are highly relevant. In practice, the possibilities for gathering large numbers of inferences for Bayesian updating processes can be expected to be heavily restricted. The common scenario also includes just launched business units, not to mention the permanent fluctuation among the decision makers, which ultimately causes a reset of the respective estimations. This rough outline of a more realistic scenario already clearly suggests that estimates of decision making units' skill levels of rather low precision represent the common case. The following diagrams therefore enable a detailed consideration of the allocation methods' performances for very low numbers of updates and very imprecise skill levels respectively.
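For concreteness, a reader-side check of the two ratios against the rounded nupdates = 20k TA figures from table 8.2 above (not part of the original text) illustrates the difference between the two denominators:

E(RORACec) = µbank / ec = 50.34 / 425 ≈ 0.118,
E(RORACVAR) = µbank / VARbank = 50.34 / 417 ≈ 0.121.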


Figure 8.5: Out-of-sample results for µbank (upper) and the relative advantage of the TA method (lower) for the nupdates-values [1, 2k] using ec ≈ 400, cbank = 150k, nunits = 200, p⌀ = 0.51, vlmin = 12.5 and m = 20k

The upper diagram of Figure 8.5 reveals that even for very low numbers of updates nupdates < 250 the TA method still slightly outperforms the expected profit µbank of the expected return method. Even for nupdates = 1 and hence identical pje-values of 0.51, the TA method outperforms the expected return, the uniform and the random method by about 2.3 %, 7.3 % and 13.5 %. In case of nupdates = 1, the methods exclusively process the differences between the capital addressees arising from their traded financial instruments' historical returns. The diagram additionally reveals that in particular the sophisticated methods benefit from an increasing precision of the estimated skill levels pje. This is due to the fact that in particular these methods' mechanisms consider the business units' skill levels. The relevance of sophisticated economic capital allocation increases with an increasing precision of pje. The lower diagram from Figure 8.5 precisely reports the superiority of the TA method for the successive increase of nupdates. The relative examination elimi-

nates the potential bias caused by the slightly varying ec-values. This diagram finally suggests that the model bank can induce roughly 1 % to 10 % higher expected profits using the optimal instead of the expected return method, depending on the precision of the central planner's information. However, the so far very positive analysis from the perspective of the TA method also identifies crucial difficulties for the case of an uninformed Bayesian learning central planner. Table 8.2 already reveals violations of the confidence level β by the limit allocations of the optimal allocation method for rather imprecise pje-values. The following Figure 8.6 further emphasizes this problem.


Figure 8.6: Out-of-sample results for confidence level β for the nupdates-values [1, 2k] using ec ≈ 400, cbank = 150k, nunits = 200, p⌀ = 0.51, vlmin = 12.5 and m = 20k

The green trend line clearly indicates improvements of the confidence level for increasing numbers of updates in case of the TA method. Nevertheless, the method constantly causes β-values < 0.99 across the whole considered interval [1, 2k] of lower nupdates-values. This finding confirms the inadequacy of a risk assessment based on rather imprecise estimations of the probabilities of success pje. The subsequent chapter addresses the question of how to eliminate these shortcomings of the risk assessment induced by the Bayesian updating.

8.3.2 Benchmarking under adjusted prior probabilities for the anticipation of risk underestimation

The previous chapter reveals that the Bayesian learning causes an insufficiently precise risk assessment, even for the case of perfect prior probabilities. The insufficiencies occur exclusively for low numbers of updates nupdates and therefore for imprecise estimations pje. Only for implausibly high numbers of updates does the precision become high enough to prevent confidence level violations. Moreover, these violations occur even under perfect priors. In the common case, however, prior probabilities result from expert knowledge and deviate to a certain extent from the actual probabilities. For overconfident estimations of the prior probabilities, even more confidence level violations must therefore be expected. This suggests a very conservative determination of the prior probabilities, which in the present case represent the occurrences θk of the 1, …, k, …, K different trader types pk. Conservative prior probabilities could anticipate or even eliminate the underestimation of risk. The use of conservative priors appears more appropriate than, for example, the introduction of a further importance sampling (IS) factor. An additional IS factor would always have the same effect, even for pje-values of maximum precision, and this inflexibility would potentially cause new inefficiencies. In contrast, conservative priors can be expected to merely delay reaching the level of maximum precision without excluding it through a constant bias. Sufficiently conservative prior probabilities simply prevent any confidence level violations in case of imprecise pje-values. There only remains the question of how such conservative prior probabilities affect the superiority of the optimal allocation of economic capital. Sophisticated allocation schemes might become irrelevant against the background of conservative prior probabilities deliberately deviating from the actual probabilities. The following benchmarking of the already known TA, expected return, uniform and random allocation methods addresses this question by using correspondingly adjusted prior probabilities. The benchmarking widely reuses the configuration of the model bank from the previous chapter 7.2.2.4: ec ≈ 400, cbank = 150k, nunits = 200, p⌀ = 0.51, vlmin = 12.5 and m = 20k. The design of the investigation is also the same for reasons of comparison. At first, the analyses therefore address the cases of nupdates = 250, = 1k and = 20k. Subsequently, the investigation focuses on lower numbers of updates nupdates from the interval [1, 2k]. However, compared to the previous investigation based on perfect prior probabilities, establishing a level playing field between the TA and the expected return method now requires ec-values lying more distinctly above ec = 400. The tables below provide the exact ec-values used. Apart from that, the only modification consists in the application of the adjusted prior probabilities and therefore adjusted occurrences θk of the 1, …, k, …, K trader types, obtained by transforming the trader types pk into pk,adjusted according to

(8.9)   pk,adjusted = 0.12 pk + 0.44.
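A minimal sketch of how such a conservative adjustment interacts with the discrete Bayesian skill-level learning is given below. The type grid and the prior occurrences θk are illustrative placeholders, and the sketch does not reproduce the exact updating algorithm of chapter 8.2; only the affine map (8.9) is taken from the text.

```python
import numpy as np

def adjust_types(p_k: np.ndarray) -> np.ndarray:
    """Conservative transformation (8.9) applied to the grid of trader types p_k."""
    return 0.12 * p_k + 0.44

def estimate_skill(theta: np.ndarray, p_k: np.ndarray, outcomes) -> float:
    """Discrete Bayesian update of the type occurrences theta with observed trade
    outcomes (True = profitable trade); returns the point estimate p_j^e."""
    post = np.array(theta, dtype=float)
    for win in outcomes:
        post *= p_k if win else (1.0 - p_k)   # likelihood of the observed outcome per type
        post /= post.sum()                    # renormalize to a probability distribution
    return float(np.sum(post * p_k))          # posterior mean over the types

# Illustrative use: placeholder type grid with uniform priors, 250 simulated trades
# of a trader whose actual probability of success is 0.51.
p_k = np.linspace(0.5, 1.0, 201)
theta = np.full(p_k.size, 1.0 / p_k.size)
rng = np.random.default_rng(0)
outcomes = rng.random(250) < 0.51
p_e_conservative = estimate_skill(theta, adjust_types(p_k), outcomes)
```

Because the adjusted types are compressed towards 0.5, the resulting pje-estimates start from a much more conservative level and only slowly move upwards as evidence accumulates, which is exactly the delaying effect described above.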

Figure 8.7 outlines the impact of the adjustment on the basis of the types' distribution. For simplicity, the outline disregards the actual discreteness of the trader types pk.


Figure 8.7: Illustration of the impact of formula (8.9) on the occurrences θk of the trader types pk on the basis of the types' distribution

The current analysis therefore simply reuses the initially introduced PDF from Figure 4.2 on the new interval of pk-values [0.5, 0.56], which enables the reuse of the Bayesian updating algorithm from chapter 8.2 without any further modifications. As a consequence of the new interval, the central planner assumes much lower occurrences θk, i.e. prior probabilities, for successful traders. The planner even completely disregards trader types featuring 0.56 < pk < 0.6. Thereby, the planner assumes very conservative prior probabilities and a very conservative average skill level of the trader types of pk,⌀ = 0.506, while the actual average skill level lies at p⌀ = 0.51. The exact extent of the adjustment of the prior probabilities results from trial and error. In general, the appropriateness of the adjustment is judged by its capability to prevent any confidence level violations even for nupdates = 1, the case of maximum imprecision of the Bayesian updating outcomes pje. In a first step, Figure 8.8 addresses the structures of the allocation methods' limit allocations for the current situation of conservative prior probabilities.

[Figure 8.8 panels: rows nupdates = 250, 1k and 20k; columns TA, expected return, uniform and random.]

Figure 8.8: Limit allocations using ec ≈ 400, cbank = 150k, nunits = 200, p⌀ = 0.51, pk,⌀ = 0.506, vlmin = 12.5 and m = 20k, arranged according to the business units' out-of-sample (actual) return expectations

The limit structures themselves hardly reveal any significant differences compared to the previous case of perfect prior probabilities. Nevertheless, the conservative prior probabilities cause a higher number of business units to exhibit negative expected returns during the in-sample computations. As a consequence, more business units receive only the minimum limit of vlmin = 12.5, while the limits of the business units featuring positive expected returns increase accordingly. This effect becomes visible by comparing the current diagram of the TA method for nupdates = 250 with the corresponding diagram from the case of perfect priors. In general, however, the limit structures are very similar to those occurring under perfect priors. The following Table 8.3 provides the in-sample results of all methods for the three different numbers of updates nupdates of the current analysis.

Table 8.3: In-sample results for nupdates = 250 (upper), = 1k (middle) and = 20k (lower) using ec ≈ 400, cbank = 150k, nunits = 200, p⌀ = 0.51, pk,⌀ = 0.506, vlmin = 12.5 and m = 20k

nupdates = 250, ec = 445:        TA               expected return   uniform           random
µbank / σµbank                   32.14 / 216      29.63 / 198       24.42 / 181       23.49 / 195
E(cbank) / σcbank                149 639 / 92     149 655 / 84      149 651 / 87      147 112 / 2 095
VARbank / β                      445 / 0.99       445 / 0.99        407 / 0.993       438 / 0.9906
ESbank / σES                     593 / 130        548 / 98          501 / 84          543 / 108
Σvl / dvl                        5 456 / 12.26    5 464 / 12.28     5 265 / 12.93     5 176 / 11.82
E(RORACec) / E(RORACVAR)         0.072 / 0.072    0.067 / 0.067     0.055 / 0.06      0.053 / 0.054

nupdates = 1k, ec = 437:         TA               expected return   uniform           random
µbank / σµbank                   35.62 / 216      32.52 / 198       26.57 / 182       26.26 / 193
E(cbank) / σcbank                149 405 / 94     149 648 / 87      149 643 / 87      145 692 / 2 277
VARbank / β                      437 / 0.99       437 / 0.99        403 / 0.994       435 / 0.9901
ESbank / σES                     585 / 126        539 / 96          492 / 83          542 / 102
Σvl / dvl                        5 454 / 12.49    5 471 / 12.53     5 280 / 13.09     5 141 / 11.81
E(RORACec) / E(RORACVAR)         0.082 / 0.082    0.074 / 0.074     0.061 / 0.066     0.0601 / 0.0603

nupdates = 20k, ec = 430:        TA               expected return   uniform           random
µbank / σµbank                   51.46 / 218      47.03 / 203       35.59 / 183       34.24 / 195
E(cbank) / σcbank                148 955 / 93     149 663 / 86      149 662 / 89      145 205 / 2 345
VARbank / β                      430 / 0.99       430 / 0.99        402 / 0.993       429 / 0.9901
ESbank / σES                     560 / 114        530 / 93          486 / 78          523 / 89
Σvl / dvl                        5 460 / 12.71    5 474 / 12.74     5 254 / 13.08     5 099 / 11.89
E(RORACec) / E(RORACVAR)         0.12 / 0.12      0.109 / 0.109     0.083 / 0.089     0.0797 / 0.0798

The most important outcome of the in-sample computations is that the TA method still clearly dominates the alternative methods. Compared to the perfect priors, the adjusted priors cause considerable reductions in µbank, particularly for the low numbers of updates nupdates = 250 and = 1k. In case of the TA method these reductions amount to 31.2 % and 29.8 %. Such reductions do not occur unexpectedly and therefore merely confirm the correctness of the computations. For nupdates = 20k the reduction diminishes considerably, in case of the TA method, for example, to 7.3 %. The latter confirms that similarly high results for µbank as under perfect priors remain achievable if the number of updates is high enough. In contrast, the use of importance sampling would suppress this convergence of the results even for high numbers of updates. Most relevant, however, is the question of how the conservative prior probabilities affect the out-of-sample results, since the out-of-sample computations represent the actual use of the limit allocations. This actual use includes, besides different random numbers, the actual skill levels pj of the traders instead of the estimated skill levels pje. Table 8.4 provides the corresponding out-of-sample results.

Table 8.4: Out-of-sample results for nupdates = 250 (upper), = 1k (middle) and = 20k (lower) using ec ≈ 400, cbank = 150k, nunits = 200, p⌀ = 0.51, pk,⌀ = 0.506, vlmin = 12.5 and m = 20k

nupdates = 250, ec = 445:        TA               expected return   uniform           random
µbank / σµbank                   38.38 / 191      36.54 / 174       34.54 / 159       33.25 / 172
E(cbank) / σcbank                149 570 / 89     149 552 / 85      149 492 / 86      146 956 / 2 082
VARbank / β                      425 / 0.991      384 / 0.995       355 / 0.997       380 / 0.9948
ESbank / σES                     531 / 116        478 / 91          432 / 78          469 / 83
Σvl / dvl                        5 456 / 12.83    5 464 / 14.22     5 265 / 14.82     5 176 / 13.62
E(RORACec) / E(RORACVAR)         0.086 / 0.09     0.082 / 0.095     0.078 / 0.097     0.075 / 0.088

nupdates = 1k, ec = 437:         TA               expected return   uniform           random
µbank / σµbank                   40.56 / 191      38.62 / 174       34.87 / 159       33.79 / 170
E(cbank) / σcbank                149 328 / 83     149 528 / 85      149 490 / 87      145 544 / 2 264
VARbank / β                      429 / 0.991      388 / 0.994       352 / 0.997       370 / 0.9952
ESbank / σES                     535 / 109        475 / 87          431 / 75          460 / 83
Σvl / dvl                        5 454 / 12.70    5 471 / 14.10     5 280 / 15.01     5 141 / 13.90
E(RORACec) / E(RORACVAR)         0.093 / 0.094    0.088 / 0.0995    0.08 / 0.099      0.077 / 0.091

nupdates = 20k, ec = 433:        TA               expected return   uniform           random
µbank / σµbank                   50.09 / 192      45.72 / 178       35.90 / 160       34.33 / 171
E(cbank) / σcbank                148 782 / 83     149 553 / 81      149 494 / 89      145 043 / 2 331
VARbank / β                      419 / 0.991      387 / 0.993       356 / 0.996       376 / 0.9951
ESbank / σES                     514 / 91         475 / 86          436 / 76          461 / 82
Σvl / dvl                        5 460 / 13.04    5 474 / 14.14     5 254 / 14.76     5 099 / 13.56
E(RORACec) / E(RORACVAR)         0.117 / 0.1196   0.106 / 0.1181    0.084 / 0.101     0.0799 / 0.091

Despite the use of the conservative prior probabilities, the TA method again constantly induces the highest expected profit µbank. Similar to the case of perfect priors, this superiority over the expected return method increases according to 5 %, 5 % and 9.6 % for the increasing numbers of updates nupdates. Interestingly, the out-of-sample results reveal expected profits µbank of more or less the same magnitude as in case of the perfect priors. This applies not only to very high nupdates-values but also to low ones. In particular the latter fact indicates the minor role of the correctness of the single skill levels compared to the estimation of their size relations and rankings. Although perfect priors induce more precise single skill levels, the expected profit µbank exhibits very little difference in case of conservative priors. This is due to the rather constant quality of the ranking of the business units' skill levels, independent of whether perfect or conservative priors are used. The conservative priors, however, additionally prevent any confidence level violations. These findings strongly suggest the effectiveness of optimal economic capital allocation, even in case of an uninformed central planner using imprecise information resulting from Bayesian learning.

The following investigates the apparent superiority of the TA method, focusing particularly on the lower numbers of updates nupdates from the interval [1, 2k].


Figure 8.9: Out-of-sample results for µbank (upper) and the relative advantage of the TA method (lower) for the nupdates-values [1, 2k] using ec ≈ 400, cbank = 150k, nunits = 200, p⌀ = 0.51, pk,⌀ = 0.506, vlmin = 12.5 and m = 20k

The upper diagram displays the expected profits µbank of the methods' limit allocations. Also in case of conservative prior probabilities, in particular the sophisticated methods benefit from higher nupdates-values and exhibit distinctly positive trends in their expected profits for increasing numbers of updates. Nevertheless, in case of the TA method this trend is obviously more volatile than under perfect priors up to nupdates = 1k. The µbank-values of the TA and the expected return method even match for nupdates = 1, = 200, = 550, = 600 and = 750. The volatility of the TA method's µbank-values arises from the conservatively adjusted priors, which naturally demand higher numbers of updates in order to achieve high quality results by the Bayesian learning. This also causes the improvement of the uniform and random methods' µbank-values to be even weaker than in the case of perfect priors. The lower diagram of Figure 8.9 provides the relative advantage of the TA method's µbank-values compared to the alternative methods. This illustration does not suffer from the bias arising to a certain extent from the slightly varying ec-values of the single computations. Despite the volatile µbank-values of the TA method in the range 1 < nupdates < 1k, the method still outperforms the expected return method by 4.6 % on average. In case of nupdates = 1, however, the differences between the TA and the expected return methods' expected profits become marginal. As a consequence, for extremely imprecise information the central planner could replace the TA method by the expected return method or even the uniform method without too high losses. Nevertheless, the expected profit of the TA method never falls below that of the expected return method. Finally, the findings clearly suggest the optimal allocation of economic capital as long as the central planner's information satisfies a certain minimum level of precision. Then, also in the current case of an uninformed Bayesian learning central planner, the TA method's expected profit distinctly outperforms those of the alternative allocation schemes. The following diagram provides the confidence levels induced by the different allocation methods.


Figure 8.10: Out-of-sample results for confidence level β for the nupdates-values [1, 2k] using ec ≈ 400, cbank = 150k, nunits = 200, p⌀ = 0.51, pk,⌀ = 0.506, vlmin = 12.5 and m = 20k

The diagram confirms that the conservative prior probabilities successfully anticipate the risk underestimation experienced under the use of perfect prior probabilities. It reveals no confidence level violations of the TA method across the whole interval [1, 2k] of lower nupdates-values. For increasing numbers of updates the actual confidence level of the TA method slightly converges towards the feasible minimum confidence level of 99 %. The same finally applies to the confidence levels of the alternative methods. Nevertheless, these methods' limit allocations generally induce inefficiently high confidence levels during the “real” trading, i.e. the out-of-sample computations. These high confidence levels occur despite the implementation of a level playing field between the TA and the expected return method and therefore exactly the same risk levels VARbank during the in-sample computations of these two methods. Finally, also the current model case confirms the general superiority of allocating economic capital according to the optimal allocation scheme.

8.4 Influence of herding decision makers on optimal economic capital allocation

8.4.1 Herding and informational cascades in case of economic capital allocation

The analyses so far assume the traders to take trading decisions exclusively on the basis of their individual information and skills. As a result, the decisions remain completely uncorrelated. The independent building of long and short positions among the business units causes the returns of the units to exhibit positive and negative correlation coefficients close to zero. As a consequence of the unrestricted building of short positions, the correlations between the stocks' returns, in contrast to the correlations between the units' returns, even become completely irrelevant. With respect to practice, however, trading exclusively on the basis of individual information represents a very unrealistic scenario, leading at the same time to very unrealistic correlation coefficients. In practice, traders commonly observe the trading activity of their colleagues to a certain extent and thereby inevitably draw conclusions for their own position building. The information concerning the trading positions of the colleagues manifests itself in a market trend. Many long positions, for example, indicate an upward trend of the general movement of the stock prices, many short positions indicate the opposite. Further traders could simply join the particular trend instead of relying on their individual information. This process, known in the relevant literature as herding and informational cascades, inevitably causes the bank-wide portfolio to become unbalanced.119 Such unbalances, however, are not necessarily disadvantageous. Even the opposite can be the case: traders orienting themselves towards very successful colleagues might trigger an informational cascade passing on high quality information. As a consequence, the remaining traders could be more likely to anticipate the market correctly than without herding. Nevertheless, successful traders can also fail, or less successful traders may start the position building and thereby influence the formation of opinion among the remaining traders. In this scenario, the positive effects of herding turn into negative ones. Only very successful and independently acting traders might be able to interrupt a negative informational cascade at its very beginning. By constantly acting according to their individual information, such traders provide the remaining traders with new information, potentially slowing down or even turning around the cascade. Finally, the herding of the traders distinctly increases the volatility of the bank-wide profits and thereby also the bank-wide risk level. The tendency to herd represents a central characteristic of participants in financial markets. As a consequence, herding also plays a central role in the context of economic capital allocation among such participants in the form of employees of a particular financial institution. Modeling the allocation of economic capital in banking should therefore urgently consider herding issues. For that reason, this and the following subchapters address the impact of herding on the relevance of highly sophisticated economic capital allocation. The upcoming investigation examines whether the optimal allocation method is still effective in case of herding. On the one hand, the increase in uncertainty arising from the traders' tendency to herd might cause highly sophisticated allocations to hardly outperform uniform or random allocations. On the other hand, the herding can be expected to cause distinctly positive correlation coefficients between the business units' returns. This potentially provides a supportive setting for the TA method which, in contrast to the alternative allocation methods, extensively uses hedging strategies in order to create highly sophisticated limit allocations. Before examining the impact of herding in depth, the following subchapter provides a detailed description of the implementation of herding and informational cascades in the present model.

119

See Burghof and Sinha (2005), who also investigate herding in the context of economic capital allocation by VAR limits, but without considering optimization issues. See, furthermore, the references on herding and informational cascades therein.

8.4.2 Modeling of herding tendencies among the decision makers

For the position building a trader can use two different sources of information: on the one hand his individual information, manifesting itself in the model in his probability of success pj, and on the other hand his information about the market trend qj.120 The qj-value corresponds to the share of positive returns among the model bank's traded stocks during one trading day. In the model, these stocks at the same time represent the complete capital market. However, the model assumes the traders to estimate these values by pje and qje. In case of the market trend this appears very plausible. But also in case of the individual information a trader cannot be expected to know his probability of success precisely. This becomes particularly obvious when considering employees just starting their career, and even experienced traders can hardly be expected to know their probability of success exactly. As a consequence, each trader uses the information in the form of the estimation values pje and qje for position building. The pje-value corresponds exactly to the probability of success the central planner identifies for the respective trader by Bayesian learning. Trader j is therefore expected to apply the same Bayesian learning in order to rationally estimate his colleagues' and his own skill level. Before explaining the determination of qje in depth, the following first gives insight into the position building process on the basis of the two estimation values pje and qje. The traders turn away from relying on their own skills and just build a long position L in case of

(8.10)   p_j^e < q_j^e ⇒ L.

The traders also turn away from using their own skills and just build a short position S according to

(8.11)   p_j^e < 1 − q_j^e ⇒ S.

Instead, the traders build a long or short position L or S according to pj for the cases

(8.12)   p_j^e > q_j^e ⇒ L or S according to pj

(8.13)   and p_j^e > 1 − q_j^e ⇒ L or S according to pj.

Note again that pje denotes trader j's estimate of his own probability of success and qje his estimate of the market trend, i.e. the probability that any stock's return in the model world turns up.

120 The fundamental use of herding and informational cascades in the present model follows that of Burghof and Sinha (2005). Nevertheless, the technical implementation of herding within the present model differs from that of Burghof and Sinha (2005) and follows an own conception instead.
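As an illustration of the decision rule (8.10) to (8.13), a minimal sketch follows; the function and its return convention are illustrative and not part of the model's actual implementation.

```python
def position_direction(p_e: float, q_e: float) -> str:
    """Decision rule (8.10)-(8.13): follow the perceived market trend whenever it is
    judged more reliable than the trader's own estimated skill; otherwise fall back
    on the trader's individual information."""
    if p_e < q_e:          # (8.10): an expected upward trend dominates the own skill estimate
        return "L"
    if p_e < 1.0 - q_e:    # (8.11): an expected downward trend dominates the own skill estimate
        return "S"
    return "OWN"           # (8.12)/(8.13): build L or S according to the actual skill p_j
```

In the simulation, the "OWN" branch corresponds to a position built according to the trader's actual probability of success pj.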

Nevertheless, this still does not explain how the traders determine the estimation value qje. Also qje results from a Bayesian learning process, executed by every single trader prior to the position building. To this end, each trader uses the identical prior probabilities ψ concerning the occurrences of particular market trends q. The priors ψ arise from the beta PDF with the shape parameters α = 2 and β = 2 on the interval [0, 1]:121

(8.14)   f(q) = 6 q (1 − q).122


Figure 8.11: Distribution of the market trends q on the basis of a beta PDF with α = 2, β = 2 and on the interval [0, 1]

The characteristics of the distribution of q are

(8.15)   q ~ B(2, 2) on [0, 1] with µ = 0.5, σ = 0.226 and xmod = 0.5.
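For completeness, the specialization of the general beta density to these parameters, which also yields the CDF used in (8.18) below, can be sketched as follows (the full transformation is given in the appendices referenced in the footnotes):

```latex
f(q) = \frac{q^{\alpha-1}(1-q)^{\beta-1}}{B(\alpha,\beta)}, \qquad
B(2,2) = \frac{\Gamma(2)\,\Gamma(2)}{\Gamma(4)} = \frac{1}{6}
\;\Rightarrow\; f(q) = 6\,q\,(1-q),
\qquad
F(q) = \int_0^q 6\,t\,(1-t)\,dt = 3q^2 - 2q^3 .
```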

Thereby, the prior probabilities ψ are oriented towards the actual occurrences of the trends q within the model. These actual occurrences refer to the GBM returns of the trading simulations in the in- and out-of-sample computations.123 Figure 8.12 provides the histogram and the CDF of these actual trend occurrences. A trend results from the 200 returns of the 200 business units' traded stocks on a daily basis. Histogram and CDF are based on m = 20k trend simulations where 1, …, i, …, m. For example, a day with 150 positive and 50 negative returns results in qi = 0.75.

121 See e.g. Johnson, Kotz and Balakrishnan (1995), pp. 210-211 for a description of the generalized beta density function.
122 See Appendix 8 for the transformation of the general beta PDF by plugging in the current shape and interval parameters.
123 The current investigation uses the same GBM returns already introduced in context with the previous investigations. For details see the remarks on these returns in chapters 7.2.1 and 8.3.1.


Figure 8.12: Histogram (upper) and CDF (lower) of actual trend occurrences on the basis of the GBM returns using m = 20k, compared to the CDF of the prior probabilities ψ

The lower diagram of Figure 8.12 enables a comparison of the actual trend occurrences with those assumed by the prior probabilities ψ. The diagram reveals minor deviations, which represent the common case when prior probabilities are used in Bayesian updating. Finally, the priors enable the traders to execute Bayesian learning generating appropriate estimation values qje. In order to generate a trend's estimation value qje, each trader j discretizes the prior probabilities ψ according to 1, …, k, …, K trend types qk occurring with ψk, where K = 1,000 and

(8.16)   Δq = 0.0005.

The centers of the intervals of length Δq finally represent the single trend types qk according to

(8.17)   q_k = (1/1,000) (k − 1/2).124

In order to compute the occurrences ψk of the single trend types qk, the following formula first introduces the CDF of B(2, 2):

(8.18)   F(q) = −2q³ + 3q².125

Subsequently, the CDF is transformed to yield the occurrences ψk according to

(8.19)   ψ_k = −2 (k/1,000)³ + 3 (k/1,000)² + 2 ((k − 1)/1,000)³ − 3 ((k − 1)/1,000)².126
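A minimal sketch of this discretization under the stated parameters; the variable names are illustrative:

```python
import numpy as np

K = 1_000
k = np.arange(1, K + 1)
q_k = (k - 0.5) / K                   # (8.17): centers of the trend-type intervals

def F(q):
    """CDF of the B(2, 2) prior on [0, 1], cf. (8.18)."""
    return 3.0 * q**2 - 2.0 * q**3

psi_k = F(k / K) - F((k - 1) / K)     # (8.19): prior occurrence of each trend type
assert np.isclose(psi_k.sum(), 1.0)   # the discretized prior sums to one
```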

The traders' use of the information arising from the trading decisions of their colleagues implicitly assumes the trades to proceed successively. For a correct modeling, however, it already suffices to imagine infinitesimally differing execution times of the trades. The model determines the order of the trades on a random basis. Only the very first trader within each trading order does not execute Bayesian learning concerning the market trend, since no trades take place in advance from which he could draw conclusions; he fully relies on his individual skill pj. Already the second trader, however, uses Bayesian learning in order to estimate the market trend, although he only has available the information arising from the single trading decision of the first trader. The third trader can use the trading decisions of the first and second trader, et cetera. In order to describe the computational implementation of the rational learning process in detail, the following exemplarily picks the second trader, who exclusively uses the information arising from the trading decision of the first trader. Figure 8.13 illustrates the probability structure for the Bayesian learning concerning the market trend.

124 See Appendix 9 for the transformation.
125 See Appendix 10 calculating the CDF of B(2, 2) for the current interval of [0, 1].
126 See Appendix 11 for the customization of the CDF in order to calculate the trend types' occurrences ψk.


Figure 8.13: Probability structure for the Bayesian learning concerning the market trend

As any other trader, the second trader in the trading sequence uses the prior probabilities ψk according to Figure 8.13, which determine the occurrences of the trend types qk in the model world. The second trader then updates the prior probabilities ψk under the condition of the position building of the first trader. This updating requires the first trader's probability of success p1. As already mentioned, within the model the traders also use the Bayesian learning algorithm from chapter 8.2 to estimate the skill levels of their colleagues. Therefore, in the current exemplary case, the second trader uses the skill level p1e for the updating. The Bayesian learning concerning the colleagues' skills requires the trading activity to be completely observable by every trader. In the context of herding investigations, however, a certain transparency represents a fundamental assumption anyway. Proceeding with the example of estimating the current market trend in the form of q2e, the second trader now updates the prior probabilities ψk by

(8.20)   ψ_k → prob(k|L)_k = ψ_k (q_k p_{j−1}^e + (1 − q_k)(1 − p_{j−1}^e)) / Σ_{k=1}^{K} ψ_k (q_k p_{j−1}^e + (1 − q_k)(1 − p_{j−1}^e))

if the first trader builds a long position. Regardless of the current example, the formulas generally use the trading direction of the previous trader j−1. Note that, in the present model case of herding, the index j does not denote a constant but a randomly changing rank in the trading order of all traders. This order changes randomly for each iteration of any MC simulation executed by the in- and out-of-sample computations. In contrast, if the first trader builds a short position, the update changes to

(8.21)   ψ_k → prob(k|S)_k = ψ_k (q_k (1 − p_{j−1}^e) + (1 − q_k) p_{j−1}^e) / Σ_{k=1}^{K} ψ_k (q_k (1 − p_{j−1}^e) + (1 − q_k) p_{j−1}^e).

By these formulas, the second trader computes the likelihood of the actual market trend being of type k conditional on the first trader's trading direction. The second trader then estimates the market trend by

(8.22)   q_j^e(L) = Σ_{k=1}^{K} q_k prob(k|L)_k = Σ_{k=1}^{K} q_k ψ_k (q_k p_{j−1}^e + (1 − q_k)(1 − p_{j−1}^e)) / Σ_{k=1}^{K} ψ_k (q_k p_{j−1}^e + (1 − q_k)(1 − p_{j−1}^e))

if the first trader builds a long position. In contrast, in case of a short position, the estimate changes to

(8.23)   q_j^e(S) = Σ_{k=1}^{K} q_k prob(k|S)_k = Σ_{k=1}^{K} q_k ψ_k (q_k (1 − p_{j−1}^e) + (1 − q_k) p_{j−1}^e) / Σ_{k=1}^{K} ψ_k (q_k (1 − p_{j−1}^e) + (1 − q_k) p_{j−1}^e).
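A minimal sketch of this sequential trend learning is given below; function and variable names are illustrative and the sketch abstracts from the surrounding simulation:

```python
import numpy as np

def estimate_trend(q_k: np.ndarray, psi: np.ndarray,
                   directions: list, p_e: list) -> float:
    """Sequential Bayesian trend estimate (8.20)-(8.23) for trader j, given the
    observed trading directions ("L"/"S") of the preceding traders and their
    estimated probabilities of success."""
    post = np.array(psi, dtype=float)
    for direction, p in zip(directions, p_e):
        if direction == "L":                           # update (8.20)
            likelihood = q_k * p + (1.0 - q_k) * (1.0 - p)
        else:                                          # update (8.21) for a short position
            likelihood = q_k * (1.0 - p) + (1.0 - q_k) * p
        post *= likelihood
        post /= post.sum()                             # renormalized prob(k|·)
    return float(np.sum(q_k * post))                   # (8.22)/(8.23): q_j^e
```

Each preceding decision enters as one update, so the j-th trader in the order performs j − 1 updates, as described next.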

Regardless of the example concerning the second trader, a trader j generally executes j−1 updates according to the formulas (8.20) to (8.23). Thereby, all information from the preceding trading decisions enters the decision making of trader j. The adjusted prior probabilities prob(k|L)k and prob(k|S)k of each of the j−1 updates serve as the starting point for the follow-on update. The last trader in the trading sequence therefore executes nunits − 1 updates. As soon as a trader knows qje, he builds a long or short position according to the formulas (8.10) to (8.13). If the traders start following their estimations of the market trend, little new information enters the creation of the bank-wide portfolio. This tendency increases with every additional market trend follower until relatively high values for qje completely prevent any new information from entering the process. The relevant literature describes this situation as an informational cascade.127 Figure 8.14 illustrates such informational cascades on the basis of the development of qje during the trading of the model bank. The example uses the model bank configuration already introduced in chapter 8.3.1. The relevant parameters here are nunits = 200, p⌀ = 0.51 and nupdates,p = 250.

127

See Burghof and Sinha (2005) for issues on informational cascades and the corresponding references therein.


Figure 8.14: Informational cascades on the basis of the development of qje during the trading of the model bank

The diagram provides three exemplary cascades for three different trading days. In order to allow an examination of the development of the market trend expectations, each curve arranges the qje-values according to the particular trading order of the respective trading day. The corridor between the dotted lines marks the range where the strength of the expected market trend qje still lies below the average probability of success p⌀. As soon as qje breaks through the upper or lower bound of this corridor, the informational cascade commonly starts during the next trades and becomes unstoppably strong. In case of the petrol curve, the traders therefore already start herding from trader 75 on; the herding manifests itself in the exclusive building of long positions by the following traders. In case of the purple line, the upward cascade only starts around trader 160. The pink curve finally represents an example where the herding causes the traders to exclusively build short positions from about trader 100 on. The stronger the herding tendency, the earlier the informational cascades start. The example from Figure 8.14 uses a herding tendency of 25 %, which means that only every fourth trader potentially herds. However, the model constantly randomizes the potentially herding traders, preventing a classification into potentially herding and non-herding traders. The following chapter uses the herding model in order to investigate the impact of herding on the relevance of optimal economic capital allocation.

8.4.3 Benchmarking of allocation methods under herding decision makers

After the introduction to the phenomenon of herding and its implementation in the present model, the following addresses how the consideration of herding influences the different allocation schemes, again the TA, the expected return, the uniform and the random allocation method. The basic proceeding of the benchmarking is identical to that of the previous benchmarking studies: at first, the central planner creates the limit allocation by the respective allocation method on the basis of the in-sample computations. Thereby, the planner aims at optimally anticipating the stock price movements and the decision making by the business units concerning the upcoming trading day. The present model case thereby additionally takes into account the potential herding of the traders. Subsequently, the out-of-sample computations test the limit allocations on the basis of the “actual” trading activity of the model bank. The differences of the out-of-sample compared to the in-sample computations again consist in different random number sequences and in the traders acting according to their actual probabilities of success pj instead of the corresponding estimations pje.128 The present model case, however, increases the impact of these differences by extending the potential behavior of the traders to herd behavior, which generally increases uncertainty. Nevertheless, the following investigations dispense with assuming a bias between the estimated intensities of the herd behavior during the in-sample computations and the actual intensities during the out-of-sample computations. Although such a bias would be plausible, the question arises which strength of the bias a corresponding investigation should assume. In order to prevent subjective settings, the investigation could take into account many different strengths of the bias. For reasons of simplification, however, the present investigation assumes identical herding intensities during the in- and out-of-sample computations, which satisfies the requirements of the present very basic investigation of the impact of herding on the superiority of the optimal allocation of economic capital. The bias between the in- and out-of-sample computations thereby arises exclusively from the interaction of the additional uncertainty caused by the herding with the uncertainty arising from the estimation values concerning the traders' skill levels. The investigations assume a herding tendency of 25 %, so that only one of four traders potentially herds. The investigation uses the following configuration of the model bank: ec ≈ 1k, cbank ≈ 150k, nunits = 200, p⌀ = 0.51, vlmin = 12.8 and m = 20k. In the current investigation, in addition to the available economic capital ec, also the available investment capital cbank varies slightly. The variation of cbank also helps establishing a level playing field between the TA and the expected return method as the most sophisticated and promising allocation methods. The tables below provide the exact values of the methods' use of economic capital

128

Note that the separate estimation of the decision makers' skill levels pj by pje does not consider herd behavior, for reasons of simplification. As a consequence, the analysis refers to a model bank facing herding for the very first time.

and invested capital in the form of the methods' actually induced risk levels VARbank and their average invested capital E(cbank). The investigations again distinguish between three distinctly different quality levels of Bayesian learning concerning the traders' skills: nupdates = 250, = 1k and = 20k. Additionally, the current investigations assume these Bayesian updating processes to use perfect prior probabilities; tests reveal no need for conservative adjustments of the priors according to chapter 8.3.2 in case of herding. In a first step, the following examines the methods' limit allocations.

[Figure 8.15 panels: columns TA, expected return, uniform and random; rows nupdates = 250, 1k and 20k.]

Figure 8.15: Limit allocations using ec ≈ 1k, cbank ≈ 150k, nunits = 200, p⌀ = 0.51, vlmin = 12.8 and m = 20k, arranged according to the business units' out-of-sample (actual) return expectations

Despite the consideration of the herd behavior of the traders, Figure 8.15 reveals very similar limit structures compared to the previous analyses disregarding herding. With an increasing precision of the information, the allocation methods increasingly assign the economic capital to the most successful business units. Each diagram arranges the business units' limits according to the units' actual expected profits, starting with the most successful ones. Only the limit allocation of the TA method exhibits higher single limits compared to the last investigation from chapter 8.3.2 without the consideration of herding. This is due to the distinctly higher available economic capital ec of the current investigation.129 The higher risk level under herding demands a distinctly higher value for ec in order to keep a similarly high average invested capital E(cbank) as in the investigations from the previous chapters. In contrast to the alternative allocation methods, however, the TA method manages to a certain extent to anticipate the higher risk by hedging strategies and uses the surplus in economic capital for much less diversified limit allocations, improving the method's expected profit. The table below provides the detailed in-sample results of the allocation methods.

129 The inflexibility of the expected return method prevents using the same values for ec and cbank as in case of the previous analyses without herding. This is due to the fact that herding considerably increases the general risk level induced by the limit allocations. Therefore, if the allocation method cannot compensate the higher risk through hedging strategies, ec has to increase. Alternatively, cbank could decrease.

Table 8.5: In-sample results for nupdates = 250 (upper), = 1k (middle) and = 20k (lower) using ec ≈ 1000, cbank ≈ 150k, nunits = 200, p⌀ = 0.51, vlmin = 12.8 and m = 20k

nupdates = 250, ec = 993, cbank = 150k:    TA               expected return   uniform           random
µbank / σµbank                             67.70 / 456      59.10 / 427       49.06 / 395       48.69 / 400
E(cbank) / σcbank                          149 578 / 109    149 666 / 88      149 663 / 93      149 620 / 118
VARbank / β                                993 / 0.99       993 / 0.99        934 / 0.993       963 / 0.9920
ESbank / σES                               1 253 / 201      1 192 / 195       1 120 / 181       1 146 / 189
Σvl / dvl                                  5 590 / 5.63     5 591 / 5.63      5 295 / 5.67      5 293 / 5.50
E(RORACec) / E(RORACVAR)                   0.068 / 0.068    0.06 / 0.06       0.049 / 0.053     0.049 / 0.051

nupdates = 1k, ec = 998, cbank = 148,542:  TA               expected return   uniform           random
µbank / σµbank                             74.96 / 460      65.53 / 429       53.68 / 393       53.91 / 394
E(cbank) / σcbank                          148 102 / 111    148 216 / 91      148 152 / 93      148 145 / 106
VARbank / β                                998 / 0.99       998 / 0.99        922 / 0.994       944 / 0.9923
ESbank / σES                               1 270 / 236      1 199 / 193       1 106 / 184       1 142 / 185
Σvl / dvl                                  5 525 / 5.54     5 536 / 5.55      5 245 / 5.69      5 245 / 5.56
E(RORACec) / E(RORACVAR)                   0.075 / 0.075    0.066 / 0.066     0.054 / 0.058     0.054 / 0.057

nupdates = 20k, ec = 982, cbank = 150k:    TA               expected return   uniform           random
µbank / σµbank                             80.05 / 456      68.77 / 424       53.04 / 385       52.77 / 403
E(cbank) / σcbank                          149 590 / 102    149 676 / 89      149 671 / 94      149 641 / 107
VARbank / β                                982 / 0.99       982 / 0.99        929 / 0.993       932 / 0.9921
ESbank / σES                               1 269 / 235      1 188 / 200       1 099 / 179       1 133 / 193
Σvl / dvl                                  5 632 / 5.73     5 604 / 5.71      5 292 / 5.70      5 291 / 5.68
E(RORACec) / E(RORACVAR)                   0.082 / 0.082    0.07 / 0.07       0.054 / 0.057     0.053 / 0.057

Table 8.5 reveals that the TA method clearly outperforms the alternative methods even when taking into account the phenomenon of herding. The expected profits of the TA method exceed those of the expected return method by 14 % to 16 %, compared to 8.5 % to 9.5 % without herding. More relevant, however, is the performance of the limit allocations during the actual trading and therefore during the out-of-sample computations addressed below. Furthermore, Table 8.5 reveals very similar initial conditions for the TA and the expected return method concerning the average invested capital E(cbank) and the risk level VARbank. This establishes optimal conditions for comparing these sophisticated methods' results during the out-of-sample computations. The RORACs of all limit allocations clearly decrease during the current in-sample analysis, emphasizing the difficult conditions under herding. This effect is expected to increase during the out-of-sample analysis. The difficult conditions under herding manifest themselves in particular in unbalanced bank-wide portfolio compositions. Figure 8.16 illustrates this unbalance on the basis of the distribution of the share of long positions of the portfolios.130


Figure 8.16: Histograms concerning the shares of long positions for the herding tendencies of 25 % (black) and 0 % (dotted) for the exemplary case of nupdates = 1k

In case of a 25 % herding tendency, balanced portfolios with 50 % long positions hardly occur. The splitting of the probability mass becomes more and more distinct for increasing herding tendencies among the traders. At a herding tendency of 100 %, the probability mass completely halves, moving towards the bounds of the interval [0, 1]. As a consequence, the respective portfolios become extremely homogeneous concerning the trading directions of their single positions.

130

See Burghof and Sinha (2005), p. 64, emphasizing the impact of herding in the context of their model on the basis of a similar diagram. In contrast to the present model, the herding model of Burghof and Sinha (2005) dispenses with issues on the optimization of limit allocations.


Figure 8.17: Histograms of the allocation methods' out-of-sample results concerning plbank for the herding tendencies of 25 % and 0 % (dotted) for the exemplary case of nupdates = 1k

Figure 8.17 illustrates the impact of herding on the distribution of the model bank's profits plbank. The unbalance of the position building by the traders causes the risk level to increase considerably if the average invested capital E(cbank) remains unmodified at ≈ 150k. The histograms visualize this effect by the redistribution of probability mass from the middle to the tails of the distributions in case of herding. However, there is no splitting of the probability mass in the sense of Figure 8.16. Even in case of a 100 % herding tendency, the GBM returns of the present model still guarantee that returns close to zero are most likely. A splitting would require many trading days featuring almost exclusively up- or downward stock price movements. In the present model configuration, however, such trading days hardly occur, as Figure 8.12 illustrates.131 An additional balancing effect that helps to prevent the splitting of the probability mass is the use of the minimum limits vlmin, which ensure that all business units constantly take part in the trading. Nevertheless, herding tendencies above 25 % cause the fat tails to become even fatter. The subsequent table providing the out-of-sample results reveals how successfully the different allocation methods finally anticipate the difficulties arising from the traders' herd behavior during the actual trading.

131

See Burghof and Sinha (2005) p. 62 for a diagram on the return distribution under herding. Their diagram shows the splitting of the probability mass for increases in the correlations of the stock prices.

Table 8.6: Out-of-sample results for nupdates = 250 (upper), = 1k (middle) and = 20k (lower) using ec ≈ 1000, cbank ≈ 150k, nunits = 200, p⌀ = 0.51, vlmin = 12.8 and m = 20k

nupdates = 250, ec = 993, cbank = 150k:    TA               expected return   uniform           random
µbank / σµbank                             48.18 / 406      47.33 / 379       43.63 / 351       43.76 / 357
E(cbank) / σcbank                          149 590 / 106    149 616 / 84      149 518 / 84      149 475 / 124
VARbank / β                                963 / 0.99       889 / 0.99        824 / 0.996       835 / 0.9961
ESbank / σES                               1 185 / 252      1 077 / 205       992 / 174         1 015 / 190
Σvl / dvl                                  5 590 / 5.80     5 591 / 6.29      5 295 / 6.42      5 293 / 6.34
E(RORACec) / E(RORACVAR)                   0.049 / 0.05     0.048 / 0.053     0.044 / 0.053     0.044 / 0.052

nupdates = 1k, ec = 998, cbank = 148,542:  TA               expected return   uniform           random
µbank / σµbank                             55.08 / 406      52.28 / 377       46.89 / 346       47.01 / 353
E(cbank) / σcbank                          148 002 / 96     148 137 / 83      148 008 / 84      148 002 / 115
VARbank / β                                969 / 0.99       898 / 0.99        817 / 0.997       841 / 0.9962
ESbank / σES                               1 197 / 207      1 086 / 202       983 / 176         1 011 / 189
Σvl / dvl                                  5 525 / 5.70     5 536 / 6.16      5 245 / 6.42      5 245 / 6.24
E(RORACec) / E(RORACVAR)                   0.055 / 0.057    0.052 / 0.058     0.047 / 0.057     0.047 / 0.056

nupdates = 20k, ec = 982, cbank = 150k:    TA               expected return   uniform           random
µbank / σµbank                             75.03 / 401      65.44 / 372       54.11 / 339       54.95 / 345
E(cbank) / σcbank                          149 473 / 83     149 626 / 81      149 534 / 85      149 502 / 113
VARbank / β                                940 / 0.99       873 / 0.99        798 / 0.997       809 / 0.9968
ESbank / σES                               1 150 / 204      1 054 / 191       962 / 173         970 / 178
Σvl / dvl                                  5 632 / 5.99     5 604 / 6.42      5 292 / 6.64      5 291 / 6.54
E(RORACec) / E(RORACVAR)                   0.0764 / 0.0798  0.067 / 0.075     0.055 / 0.068     0.055 / 0.068

Also during the actual trading, the limit allocations of the TA method perform best despite the consideration of herding. Depending on the precision of the estimations pje, i.e. on the number of updates nupdates, the TA method's limit allocations reach about 1.8 %, 5.4 % and 14.7 % higher profit expectations than the limit allocations of the expected return method. These results resemble those from the analyses without the consideration of herding. The extent of the superiority of the TA method still reaches its maximum for nupdates = 20k. The findings for the current herding tendency of 25 % therefore confirm the previous finding that allocating economic capital optimally in a portfolio theoretical sense is the superior allocation scheme. Nevertheless, the difficulties of considering herd behavior manifest themselves in the distinctly lower values for E(RORACec): the TA and the expected return method achieve 7.6 % and 6.7 % with, and 11.7 % and 10.6 % without the consideration of herding. Also under the consideration of herding, the precision of information is highly relevant; the bias arising from the herd behavior does not neutralize or eliminate this relevance. The following diagrams provide a detailed examination of the methods' performances for less precise information.


Figure 8.18: Out-of-sample results for µbank (upper) and the relative advantage of the TA method (lower) for the nupdates-values [1, 2k] and 25 % herding tendency using ec ≈ 1k, cbank ≈ 150k, nunits = 200, p⌀ = 0.51, vlmin = 12.8 and m = 20k

In a first step, the upper diagram of Figure 8.18 examines the µbank-values of the methods. The herding causes the TA method to perform worse for less precise information, i.e. for nupdates < 1k. In particular for nupdates < 250 the TA method hardly outperforms the expected return method or even underperforms it. From about nupdates = 250 on, however, the TA method starts to achieve distinctly better µbank-values than the expected return method. The lower diagram displays the superiority of the TA method compared to the alternative methods; this superiority diagram does not suffer from the bias caused by the slightly varying initial conditions concerning ec and cbank. The blue curve reveals the benefit of the TA method compared to the expected return method. This benefit partly even turns into a disadvantage for very imprecise information. This weakness of the TA method is due to the current analysis's consideration of herding. Nevertheless, also under the consideration of herding, the TA method's performance improves with increasingly precise information. From a certain level of precision on, here about nupdates = 250, the TA method clearly represents the superior method. From about nupdates = 1k on, the superiority of the TA method even exceeds that experienced in the analyses without the consideration of herding. Under the consideration of herding, the TA method therefore becomes less relevant for less precise information but even more relevant for rather precise information. Furthermore, the findings from Figure 8.18 suggest avoiding the uniform and the random method also in case of herding, as long as the TA or the expected return method is available. If no sophisticated method is available, however, the economic capital allocation should follow the random method. Table 8.6 confirms that the random method induces a slightly higher usage of the available economic capital ec, manifesting itself in the slightly higher risk levels VARbank. As a consequence, the random method also provides slightly higher expected returns µbank. The following diagram additionally confirms the slightly higher usage of the available economic capital ec by the random method compared to the uniform method on the basis of the methods' induced confidence levels β.


Figure 8.19: Out-of-sample results for confidence level β for the nupdates-values [1, 2k] and 25 % herding tendency using ec ≈ 1000, cbank ≈ 150k, nunits = 200, p⌀ = 0.51, vlmin = 12.8 and m = 20k

According to Figure 8.19, the confidence level β of the random method lies constantly below that of the uniform method. In general, Figure 8.19 confirms that under the consideration of herding no allocation scheme violates the confidence level of β = 0.99. The investigation so far basically finds the optimal allocation method to be advantageous also in case of a certain herding tendency among the traders. Nevertheless, the question remains whether this is still the case for herding tendencies > 25 %. Figure 8.20 therefore gives an overview of the development of the TA method's superiority for further increases in the herding tendency. In order to establish identical initial conditions between the TA and the expected return method, the following analyses adjust, besides ec and cbank, also the value of the minimum limit vlmin. For comparison reasons, the sum of the vlmin-limits constantly corresponds to roughly half of the sum of all limits Σvl.

[Figure 8.20 panels: nupdates = 250 (upper), nupdates = 1k (middle) and nupdates = 20k (lower).]

Figure 8.20: Superiority of the TA method concerning µbank for different herding tendencies across the interval [0, 1] compared to the expected return (blue), the uniform (red) and the random method (orange)

Figure 8.20 reveals that for less precise information, i.e. nupdates = 250, the optimal allocation scheme becomes particularly important for the cases of very low and very high herding tendencies. This can be explained by the reduced uncertainty of these extreme cases. In contrast, the volatile herd behavior arising from the 50 % herding tendency, in combination with the low precision of information, distinctly increases the uncertainty. This scenario of high uncertainty prevents the TA method from outperforming the expected return method in anticipating the future. In case of more precise information, i.e. nupdates = 1k, the TA method is particularly advantageous for the lower and medium herding tendencies. The more precise information enables the TA method to perform solid simulations of the model bank's trading activity. Only for further increases in the herding tendency does the superiority of the TA method vanish again. For very precise information, i.e. nupdates = 20k, the optimal allocation scheme never becomes completely ineffective. Nevertheless, also in the case of highly precise information the superiority of the TA method decreases for an increasing herding tendency. The three diagrams from Figure 8.20 once more clearly suggest disregarding economic capital allocation according to the uniform or random method as long as one of the sophisticated methods is available. This finding is, to a great extent, independent of the intensity of the herding tendency. Figure 8.20 confirms that the superiority of the TA method compared to the alternative methods is not restricted to particular intensities of the herding tendency. Instead, the superiority depends on the interaction of the herding tendency's intensity with the precision of the information. Nevertheless, there are also no particular levels of the precision of information at which the optimal allocation method completely loses its superiority. Even the case of very imprecise information shows situations where the optimal allocation method is clearly advantageous. The herding tendency and the precision of information can be expected to be volatile quantities within a financial institution. Their extents vary, for example, depending on different market situations and/or development stages of the institution. As a consequence, the optimal allocation of economic capital according to the aspects of portfolio theory cannot be dismissed as useless but rather appears highly recommendable. Furthermore, a financial institution should at least replace rather uniform or random allocation processes for economic capital by more sophisticated ones, e.g. in the form of the expected return method.


8.5 Conclusions on optimal allocation before the background of an uninformed central planner

The investigation for the case of an uninformed central planner divides into analyses without and with the consideration of herd behavior and informational cascades. Following this structure, the conclusions first address the analyses without the consideration of herding. The modeling of an uninformed central planner (without the consideration of herding) addresses the question of whether an optimal allocation of economic capital according to portfolio theoretical aspects is still advantageous in case of imprecise information. The setting of imprecise information represents a less simplified reflection of reality than the setting of an omniscient central planner. The model implements this scenario on the basis of a Bayesian updating algorithm. By Bayesian learning, the central planner rationally gathers information concerning the business units' probabilities of success. The precision of the information depends on the intensity of the learning, which technically depends on the respective model case's number of Bayesian updates nupdates. Again, the central planner uses different allocation methods in order to assign the economic capital to the business units. The resulting limit allocations undergo a benchmarking on the basis of the real trading. The findings of the analyses reveal the important role of categorizing the business units into ranks concerning their probability of success. On the basis of such ranks, the optimal allocation mostly outperforms the alternative methods in profit expectation even for the case of less precise information. Only without any consideration of the decision makers' characteristics, i.e. for nupdates = 1, do the advantages of the TA method become marginal. For this case, the less complex expected return method performs almost as well as the TA method. For nupdates = 1 the expected returns, and therefore the ranking of the business units, arise exclusively from the historical returns of the units' traded financial instruments. The modeling so far and the corresponding findings suggest that the optimal allocation of economic capital also performs best for cases of less precise information. Nevertheless, the Bayesian learning central planner of the modeling could become subject to criticism. The relevant literature uses agency theoretical mechanisms, e.g. truth telling mechanisms, instead of Bayesian learning.132 However, the Bayesian learning satisfies the portfolio theoretical demands of optimal economic capital allocation very well. The separate process of Bayesian updating just provides input parameters for the subsequent

132 See e.g. Stoughton and Zechner (2007) for an agency theoretical approach dealing with optimal capital allocation.
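To make the mechanism more tangible, the following minimal sketch shows one way such a Bayesian update of a business unit's probability of success could be coded. It is an illustration only: the Beta-Bernoulli form, the function and variable names and the prior parameters are assumptions of this sketch and not the model's actual implementation.

```python
import numpy as np

def estimate_success_probability(trade_results, prior_a=1.0, prior_b=1.0):
    """Beta-Bernoulli update: posterior mean of a business unit's probability
    of success after observing simulated trade outcomes (1 = gain, 0 = loss).
    Choosing prior_b > prior_a would encode a conservative (cautious) prior."""
    wins = int(np.sum(trade_results))
    losses = len(trade_results) - wins
    return (prior_a + wins) / (prior_a + prior_b + wins + losses)

# The estimate sharpens as the number of Bayesian updates grows.
rng = np.random.default_rng(0)
true_p = 0.58                                   # hypothetical true skill level
for n_updates in (1, 250, 1_000, 20_000):
    simulated = rng.random(n_updates) < true_p  # simulated trading results
    print(n_updates, round(estimate_success_probability(simulated), 4))
```

The small loop illustrates the role of nupdates: the more simulated trading results enter the update, the closer the posterior estimate tends to move towards the unit's true probability of success.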

Another point of criticism could concern the fact that, in the case of perfect prior probabilities, the Bayesian updating in combination with the optimal allocation method causes confidence level violations. However, the analyses reveal that the use of conservative prior probabilities prevents such violations without eliminating the superiority of the optimal allocation method. The Bayesian learning finally proves to be an appropriate mechanism to model an uninformed central planner who rationally learns about the business units' characteristics while dispensing with any agency theoretical means.

Despite the replacement of the omniscient central planner by a Bayesian learning one, the modeling still suffers from a crucial shortcoming. The analyses so far constantly assume that the business units make their trading decisions completely independently of other business units' trading decisions and prevailing market trends. Therefore, a further modification of the basic model concerns the consideration of herd behavior and informational cascades. The resulting analyses address the question of whether the restriction of the independent decision making of the business units reduces the relevance of allocating economic capital optimally according to portfolio theoretical aspects.

The modeling of herd behavior and informational cascades also applies a Bayesian updating algorithm. In this case, however, the Bayesian updating uses the trading directions of the trading decisions instead of the final trading results. The individual traders use this second Bayesian learning in order to estimate the current market trend. Depending on the learning's outcome, the respective trader either follows the market trend and herds or follows his individual information.

The first analyses assume a herding tendency of the traders of 25 %. In this case, only every fourth trader potentially herds. However, already for this herding tendency, portfolios with equal amounts of long and short positions hardly ever occur. In contrast, without taking herding into account, such portfolios are the most likely ones. As a consequence of the herding, the risk level of the model bank increases considerably for the same amount of invested capital, and the RORAC values of the model bank decline, no matter which method the bank uses for economic capital allocation. Finally, the investigations under consideration of a 25 % herding tendency also confirm the advantageousness of an optimal economic capital allocation. Nevertheless, this finding is twofold: While the herding weakens the superiority of the optimal allocation method in states of less precise information, this effect turns around as soon as the precision of information reaches a certain level.

From then on, the superiority even slightly exceeds that experienced without the consideration of herding. These findings, however, refer exclusively to the case of the moderate herding tendency of 25 %. Therefore, the investigations furthermore address the variation of the herding tendency. The corresponding analysis gives an overview of the impact of herding tendencies from 0 % to 100 %. In parallel with the variation of the herding tendency, the analysis also distinguishes between three degrees of the precision of information. The analyses' outcomes suggest that the superiority of the optimal allocation method depends on the interaction of the precision of information and the intensity of the herding tendency. The analysis does not identify a precision of information or an intensity of the herding tendency that completely excludes the superiority of an optimal allocation. Only for some particular value combinations does the optimal method's superiority over the expected return method shrink to a minimum. Nevertheless, the precision of information and the herding tendency within financial institutions do not represent constant quantities. Instead, both quantities can be expected to change over time depending on the respective market situation and/or development stage of the respective institution. For this scenario, the findings clearly suggest allocating economic capital optimally according to portfolio theoretical aspects or at least according to the expected returns of the capital addressees. The findings clearly advise against the use of uniform or random allocation methods as long as a sophisticated allocation method is available.

The investigations on an uninformed central planner reveal that, despite the uncertainty arising from the autonomous decision making of the traders, with or without the consideration of herd behavior, an optimal allocation of economic capital according to portfolio theoretical aspects remains highly relevant. A point of criticism could concern the design of the analysis: Although the central planner has to learn about the skill levels of the traders, he knows their tendency to herd exactly. Similar to the skill levels, a certain bias would also be plausible in the case of the herding tendency. However, for reasons of simplification, the analysis focuses on the effect of herding itself and not on the effect of a false estimation of the herding tendency. Nevertheless, the false estimation also represents an interesting and important scenario. An underestimation surely causes severe confidence level violations. In contrast, an overestimation, and therefore the conservative assumption of a very high herding tendency by default, could eliminate any advantages of an optimal allocation of economic capital.

Such issues represent one of many potential extensions of the model case of an uninformed central planner to be addressed by future research.

The modeling of herding could be a target of criticism for another reason. The impact of herding in the model does not depend on particular market situations which induce more weakly or more strongly correlated stock returns. Instead, the model constantly uses the same GBM stock returns and the respective correlations for reasons of simplification. Therefore, the impact of herding does not depend on the correlations of the stock returns but on given herding tendencies that simply enforce a certain share of traders potentially exhibiting herd behavior. In the context of the present investigations on the optimal allocation of economic capital, however, the modeling on the basis of herding tendencies is appropriate. The alternative use of differently correlated returns can be expected to make no difference to the present analyses' fundamental result. This result consists in the confirmation of the general relevance of optimal economic capital allocation in case of herding.


9 CONCLUSIONS

9.1 Summary of results

The present research approach reveals that the optimal allocation of economic capital according to portfolio theoretical aspects outperforms alternative allocation methods. This at least holds true for the present model, which assumes, among other things, rational behavior of all participating players.

The approach solves the technical challenges of the required optimization of VAR limits by adjusting the implementation of the TA-algorithm. It is a characteristic of optimizing VAR limits that the maximum feasible sum of the limits remains unknown until the optimization ends. This fact complicates the determination of the necessary modification of the limits during the search process. The exact extent of the modification manifests itself in the sequence of the transfer values. Analyses suggest using absolute and randomized transfer values in the context of the present model. Additionally, these transfer values should decrease exponentially over the optimization steps. A corresponding determination of the transfer values makes sense particularly in the case of less intelligent start solutions. In that case, this transfer value method increases the robustness of the TA-algorithm with regard to less appropriate parameterizations of the algorithm's remaining parameters. Using this transfer value method makes high quality solutions more likely.

The basic model then serves as the basis for a benchmark study investigating the superiority of the optimal allocation of economic capital. The basic model uses an informed central planner who knows the characteristics of the different capital addressees exactly. For this case of highly precise information, the optimal allocation method clearly outperforms the alternative allocation methods. The analysis emphasizes an important advantage of the optimal allocation method: its flexibility. The method manages to fully use the given investment capital and economic capital in most cases on the basis of hedging strategies. As a consequence, the method's limit allocations commonly induce the highest profit expectations. However, the optimal allocation method also remains superior in settings where the most promising competing method likewise achieves full usage of the available resources. This competing method allocates economic capital exactly proportionally to the expected returns of the capital addressees. For the rest of the research approach, the analyses always enforce the same usage of the available resources by the two most promising and competing allocation methods.
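The following sketch illustrates the transfer value scheme described above within a deliberately stripped-down threshold accepting loop. It is a hedged illustration under simplifying assumptions: the function names, the decay rule, the toy objective (expected profit without any VAR constraint) and all parameter values are chosen for demonstration and do not reproduce the model's actual implementation.

```python
import numpy as np

def transfer_values(n_steps, first, last, rng):
    """Absolute transfer values that decay exponentially from `first` to `last`
    over the optimization steps; each value is scaled by a uniform random
    factor so that the actual limit shifts are randomized."""
    decay = (last / first) ** (1.0 / (n_steps - 1))
    return first * decay ** np.arange(n_steps) * rng.uniform(0.0, 1.0, n_steps)

def threshold_accepting(objective, limits, thresholds, tvs, rng):
    """Minimal TA skeleton: shift an absolute amount of VAR limit between two
    randomly chosen business units and accept the candidate if the objective
    does not deteriorate by more than the current threshold."""
    current = limits.copy()
    for tau, tv in zip(thresholds, tvs):
        i, j = rng.choice(len(current), size=2, replace=False)
        candidate = current.copy()
        candidate[i] += tv
        candidate[j] = max(candidate[j] - tv, 0.0)
        if objective(candidate) >= objective(current) - tau:
            current = candidate
    return current

rng = np.random.default_rng(42)
mu = rng.uniform(0.001, 0.01, size=10)       # toy expected returns of 10 units
objective = lambda lims: float(mu @ lims)    # toy objective: expected profit only
steps = 1_000
tvs = transfer_values(steps, first=50.0, last=0.5, rng=rng)
thresholds = np.linspace(1.0, 0.0, steps)    # thresholds shrink towards zero
print(threshold_accepting(objective, np.full(10, 100.0), thresholds, tvs, rng))
```

In the model itself, the objective evaluation would additionally have to check the resulting portfolio VAR against the available economic capital and the given confidence level; the sketch omits this risk side entirely.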

The superiority of the optimal allocation scheme undergoes further tests on the basis of the basic model, which successively impose additional, more realistic settings: the tests introduce minimum limits preventing unrealistically low limits, a general reduction of the prospects of success of the capital addressees, and a distinct increase of the degree of diversification of the model bank by increasing the number of capital addressees and business units respectively. Only the reduction of the prospects of success temporarily causes the superiority of the optimal allocation to decrease, while the remaining adjustments rather cause the superiority to increase again.

Subsequently, the model undergoes a fundamental modification in the form of the replacement of the informed central planner by a more realistic uninformed one. The uninformed planner acquires information concerning the decision makers' characteristics by rational Bayesian learning. As a consequence, the precision of the information varies according to the intensity of the Bayesian learning. Analyses on the basis of less precise information reveal that the superiority of the optimal allocation method depends particularly on the information about the ranking of the decision makers with respect to their prospects of success. In contrast, the exact prospects of success appear to be less decisive as long as the ranking features a certain quality level. However, even low intensities of Bayesian learning, and therefore less precise rankings, allow the optimal allocation method to outperform the alternative allocation methods. This even holds true for the case of additional bias stemming from applying the Bayesian learning on the basis of conservative prior probabilities. The conservative priors' positive effects outweigh their negative effects: On the one hand, they prevent the confidence level violations that occur when limit allocations are generated on the basis of less precise information and rankings. On the other hand, the conservative priors cause only slight reductions of the optimal allocation method's expected profits. Furthermore, the bias from using conservative priors naturally diminishes in the case of very intense Bayesian learning, ensuring that the optimal allocation method clearly outperforms the alternative allocation methods in these cases. Finally, the consideration of an uninformed central planner also does not undermine the superiority of optimal bank management in a portfolio theoretical sense on the basis of VAR limits, provided the participants behave rationally.

A further central model extension on the basis of an uninformed central planner consists in the additional consideration of herd behavior among the capital addressees and decision makers respectively. The consideration of herd behavior replaces the assumption of decision makers who decide exclusively on the basis of their individual information. Under the consideration of herding, the decision makers draw conclusions about the prevailing market trend by observing the trading decisions of their colleagues. The model implements these processes on the basis of a second Bayesian updating algorithm. Depending on the outcome of this Bayesian updating, in the form of an approximation of the market trend, the decision makers follow either the trend approximation or their individual information.

Following the trend aligns the behavior of the traders, which decreases diversification effects and increases the general risk level. The consideration of herd behavior introduces additional uncertainty into the model case of an uninformed central planner. The analyses first address a weak tendency of the decision makers to herd. Already this case reveals certain setbacks in the optimal allocation method's superiority. Nevertheless, complete losses of the superiority only occur in cases of very imprecise information. As soon as the precision exceeds a certain level, the superiority of the optimal allocation method re-establishes itself. For highly precise information, the superiority even becomes more distinct compared to the case without the consideration of herding. Finally, the analyses basically reveal that the optimal allocation method successfully anticipates herd behavior, again provided the participants behave rationally.

The analyses considering herd behavior also address the impact of herding tendencies of different strengths. As a result, the dependency of the superiority of the optimal allocation method on the herding tendency's strength in turn depends on the information's precision. In case of less precise information, the optimal allocation method outperforms in particular for extremely low and for extremely high herding tendencies. The one-sidedness of the decision makers' behavior under these extreme tendencies causes a certain reduction of the uncertainty. In case of less precise information, in particular the optimal allocation method can anticipate this reduction of uncertainty and develops superior limit allocations. More precise information causes the superiority of the optimal allocation method to be generally high and to decline only for high herding tendencies. As a consequence, in scenarios with more precise information and high herding tendencies, the optimal allocation method only performs as well as the expected return method. Only in cases of highly precise information does the superiority of the optimal allocation method endure even for herding tendencies of 100 %. Finally, the optimal method outperforms the alternative methods for most of the analyses' combinations of the precision of the information and the intensity of the herding tendency. As a consequence of this result, a bank should, despite the phenomenon of herding, allocate economic capital according to the optimal method. Furthermore, the findings clearly suggest avoiding the uniform or the random economic capital allocation in order to maximize the expected profit, at least as long as the optimal or the expected return method is available. This finding holds true independently of whether the analyses consider an informed or an uninformed, Bayesian learning central planner and of whether the analyses additionally consider herding or not.
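As an illustration of this second Bayesian updating, the following sketch shows how a trader could infer the prevailing market trend from observed trading directions and then either herd or follow his individual information. All names, the Beta posterior form and the cutoff value are assumptions of this sketch and are not taken from the model's implementation.

```python
import numpy as np

def estimated_trend(observed_directions, prior_a=1.0, prior_b=1.0):
    """Beta posterior mean for the probability of an upward market trend,
    inferred from the observed trading directions of colleagues
    (+1 = long position, -1 = short position)."""
    ups = sum(1 for d in observed_directions if d > 0)
    downs = len(observed_directions) - ups
    return (prior_a + ups) / (prior_a + prior_b + ups + downs)

def trading_direction(own_signal, observed_directions, herding_tendency, rng,
                      trend_cutoff=0.7):
    """A trader who potentially herds (with probability herding_tendency)
    follows the estimated trend whenever it is sufficiently clear;
    otherwise he trades on his individual information (+1 or -1)."""
    p_up = estimated_trend(observed_directions)
    trend_is_clear = p_up >= trend_cutoff or p_up <= 1.0 - trend_cutoff
    if trend_is_clear and rng.random() < herding_tendency:
        return 1 if p_up >= 0.5 else -1   # herd on the inferred market trend
    return own_signal                      # rely on the individual signal

rng = np.random.default_rng(1)
print(trading_direction(-1, [+1, +1, +1, -1, +1, +1], herding_tendency=0.25, rng=rng))
```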


9.2 Closing remarks on the model assumptions and suggested future research

The present model of optimal economic capital allocation represents a reduced reflection of the actual situation of corporate management on the basis of VAR limits. This is already apparent from the modeling of the bank in the form of a trading department that considers only the market risk perspective as a stand-in for the whole range of business and risk categories. In practice, a consistent corporate management on the basis of VAR limits that integrates every category of business and risk occurring within an institution still appears unfeasible, not to mention a simultaneous comprehensive consideration of portfolio theoretical aspects, which in the model even includes the individual prospects of success of the decision makers. Nevertheless, the present model represents an appropriate means to consistently demonstrate important interdependencies within such corporate management systems.

Further simplifications concern the structure of the model bank and the organization of the bank's business activity. The model bank only consists of two hierarchical levels, the central management and the business units. The business units ultimately represent single decision makers, each trading in exactly one particular stock. The traders can choose between building a long or a short position. The position building generally takes place in the morning, while each position is closed in the evening in any case. Suspending trading as well as inter- and intra-day trading remain disregarded. However, a more differentiated modeling of these issues cannot be expected to yield fundamentally new or different insights. The implementation of any of these issues would mainly increase the model's complexity and implementation effort considerably.

More influence on the model's findings potentially arises from assuming that the traders always use their VAR limits to full capacity. In practice, too, such limits can be understood as synonymous with the responsibility to operate business volumes that avoid leaving shares of the limits unused. Nevertheless, in practice, the average rate of limit usage lies below 100 %. In the model, however, the incomplete usage of the limits would severely disturb the transmission and implementation of the central management's business strategy. Uncertainty concerning the usage rate of the VAR limits potentially causes the optimal allocation of economic capital to become ineffective. This could only be mitigated by also anticipating the uncertainty concerning the usage rates of the limits in the allocation of economic capital. These issues of the incomplete usage of the limits and the resulting difficulties suggest further research.

The current modeling of the learning central planner could also undergo a certain refinement. The current implementation of the Bayesian learning algorithm implicitly assumes a newly founded model bank.

As a consequence, the outcomes of the Bayesian learning are all based on the identical number of inferences and therefore feature the identical level of precision. Instead, heterogeneous levels of precision would reflect, for example, the impact of employee turnover in the model bank. Nevertheless, whether heterogeneous levels of precision would yield deeper insights remains rather uncertain. However, the requirement for further research definitely arises from the modeling of the uninformed central planner anticipating the herd behavior of the decision makers. In the model, the planner assumes exactly the same herding tendency the decision makers in fact exhibit. A certain bias between the assumption of the planner and the actual tendency, however, would be more plausible. The central planner could, for example, intentionally overestimate the herding tendency for safety reasons. How does the potentially resulting bias influence the relevance of optimal economic capital allocation? Answering this and related questions requires further investigation.

Furthermore, the current approach analyses the optimal allocation of economic capital on the basis of a static model. The allocation of VAR limits always refers to one single trading day. Instead, a dynamic model would open up further possibilities to investigate corporate management on the basis of VAR limits. A dynamic model could consider a sequence of trading days. The optimal allocation of limits could then refer to longer periods, e.g. one year or 250 trading days. Furthermore, the model could consider the use of interim profits and losses, which could, for example, increase or decrease the respective business unit's limit. Within a dynamic model, a separate and independent trading simulation for the Bayesian learning would become unnecessary. The Bayesian learning could process the actual daily trading results occurring during the sequence of trading days. As a consequence, the learning concerning the characteristics of the decision makers would also automatically take into account the influence of the herd behavior. Nevertheless, the proper investigation of optimal economic capital allocation within a dynamic model also requires further separate research.

Finally, the model provides several starting points for refinements, extensions and the corresponding analyses. At the same time, the present form of the model already deepens the understanding of optimal corporate management on the basis of VAR limits. The model dispenses with global assumptions that conceal and simplify the difficult portfolio theoretical matters. Instead, the model faces these important matters by implementing and exploring the relevant interdependencies on the basis of a reduced level of abstraction.


APPENDIX

Appendix 1: Used order of the model's used stocks from the S&P 500 ....... 198
Appendix 2: Calculation of the beta density function of the traders' performances ....... 199
Appendix 3: Calculation of the cumulative beta density function of the traders' performances ....... 200
Appendix 4: Random generation of beta distributed trader performances ....... 200
Appendix 5: Exemplary list of business units' probabilities of success p ~ B(1, 9) from the interval [0.5, 1] for a model bank with 200 units ....... 201
Appendix 6: Calculation of the trader types' occurrences through customization of the corresponding cumulative beta density function ....... 202
Appendix 7: Determination of the model's trader performances ....... 202
Appendix 8: Calculation of the beta density function of market trend types ....... 203
Appendix 9: Determination of the model's market trends ....... 203
Appendix 10: Calculation of the cumulative beta density function of market trend types ....... 204
Appendix 11: Calculation of the market trend types' occurrences through customization of the corresponding cumulative beta density function ....... 205


Appendix 1: Used order of the model's used stocks from the S&P 500

1-25: MMM ABT ANF ACN ACE ADBE AMD AES AET AFL A GAS APD ARG AKAM AA ATI AGN ALL ANR ALTR MO AMZN AEE AEP
26-50: AXP AIG AMT AMP ABC AMGN APH APC ADI AON APA AIV APOL AAPL AMAT ADM AIZ T ADSK ADP AN AZO AVB AVY AVP
51-75: BHI BLL BAC BCR BAX BBT BEAM BDX BBBY BMS BRK.B BBY BIG BIIB BLK HRB BMC BA BWA BXP BSX BMY BRCM BFB CHRW
76-100: CA CVC COG CAM CPB COF CAH CFN KMX CCL CAT CBG CBS CELG CNP CTL CERN CF SCHW CHK CVX CMG CB CI CINF
101-125: CTAS CSCO C CTXS CLF CLX CME CMS COH KO CCE CTSH CL CMCSA CMA CSC CAG COP CNX ED STZ CBE GLW COST CVH
126-150: COV CCI CSX CMI CVS DHI DHR DRI DVA DF DE DELL DNR XRAY DVN DV DO DTV DFS DISCA DLTR D RRD DOV DOW
151-175: DPS DTE DD DUK DNB ETFC EMN ETN EBAY ECL EIX EW EP EA EMC EMR ETR EOG EQT EFX EQR EL EXC EXPE EXPD
176-200: ESRX XOM FFIV FDO FAST FII FDX FIS FITB FHN FSLR FE FISV FLIR FLS FLR FMC FTI F FRX FOSL BEN FCX FTR GME


Appendix 2: Calculation of the beta density function of the traders' performances

$$f(p) = \frac{1}{B(\alpha,\beta)} \cdot \frac{(p-\underline{p})^{\alpha-1}(\overline{p}-p)^{\beta-1}}{(\overline{p}-\underline{p})^{\alpha+\beta-1}}, \quad \text{where } B(\alpha,\beta) = \int_{\underline{p}}^{\overline{p}} \frac{(p-\underline{p})^{\alpha-1}(\overline{p}-p)^{\beta-1}}{(\overline{p}-\underline{p})^{\alpha+\beta-1}}\,dp,$$

$$\alpha,\beta > 0, \quad \alpha = 1, \quad \beta = 9, \quad \underline{p} \le p \le \overline{p}, \quad \underline{p} = \tfrac{1}{2} \ \text{ and } \ \overline{p} = 1.$$

$$B(1,9) = \int_{1/2}^{1} 512\,(1-p)^{8}\,dp = \Big[-\tfrac{512}{9}(1-p)^{9}\Big]_{1/2}^{1} = \tfrac{512}{9}\Big(1-\tfrac{1}{2}\Big)^{9} = \tfrac{1}{9}.$$

$$f(p) = 9 \cdot 512\,(1-p)^{8} = 4608\,(1-p)^{8}.$$


Appendix 3: Calculation of the cumulative beta density function of the traders' performances

$$F(p) = \frac{B_{p}(\alpha,\beta)}{B(\alpha,\beta)}, \quad \text{where } B_{p}(\alpha,\beta) = \int_{\underline{p}}^{p} \frac{(t-\underline{p})^{\alpha-1}(\overline{p}-t)^{\beta-1}}{(\overline{p}-\underline{p})^{\alpha+\beta-1}}\,dt,$$

$$\alpha,\beta > 0, \quad \alpha = 1, \quad \beta = 9, \quad \underline{p} \le p \le \overline{p}, \quad \underline{p} = \tfrac{1}{2}, \quad \overline{p} = 1 \ \text{ and } \ B(\alpha,\beta) = \tfrac{1}{9}.$$

$$F(p_{i}) = 9 \int_{1/2}^{p_{i}} 512\,(1-p)^{8}\,dp = 9\Big[-\tfrac{512}{9}(1-p)^{9}\Big]_{1/2}^{p_{i}} = 1 - 512\,(1-p_{i})^{9} = 1 - (2-2p_{i})^{9}.$$
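As a quick numerical cross-check of the two closed forms above (an illustrative sketch, not part of the model code), the density 4608(1 − p)^8 can be integrated numerically and compared with the distribution function 1 − (2 − 2p)^9:

```python
import numpy as np

def f(p):
    return 4608 * (1 - p) ** 8        # density from Appendix 2

def F(p):
    return 1 - (2 - 2 * p) ** 9       # distribution function from Appendix 3

grid = np.linspace(0.5, 1.0, 100_001)
h = grid[1] - grid[0]
midpoints = (grid[:-1] + grid[1:]) / 2
numeric_F = np.cumsum(f(midpoints)) * h                  # midpoint-rule integral over [0.5, p]
print(float(np.max(np.abs(numeric_F - F(grid[1:])))))    # tiny: the closed forms agree
```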

Appendix 4: Random generation of beta distributed trader performances

$$F(p) = 1 - (2-2p)^{9}, \qquad F^{-1}(y) = p = 1 - \tfrac{1}{2}(1-y)^{\frac{1}{9}}.$$

For $\alpha = 1$, $\beta = 9$, $0.5 \le p_{i} \le 1$, $U_{i} \sim \mathrm{U}(0,1)$ and $n = 1{,}000$ there is

$$p_{i} = 1 - \tfrac{1}{2}(1-U_{i})^{\frac{1}{9}}, \quad i = 1,\ldots,n, \qquad p_{i} \sim \mathrm{B}(1,9).$$
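The inverse-transform sampling of Appendix 4 translates directly into code. The following sketch (with assumed function names) draws n = 1,000 trader performances and compares the sample mean with the theoretical mean 0.5 + 0.5 · 1/10 = 0.55 of B(1, 9) rescaled to [0.5, 1]:

```python
import numpy as np

def draw_trader_performances(n, rng):
    """Inverse-transform sampling of p ~ B(1, 9) on [0.5, 1]:
    p = 1 - 0.5 * (1 - U)^(1/9) with U ~ U(0, 1), as in Appendix 4."""
    u = rng.random(n)
    return 1 - 0.5 * (1 - u) ** (1 / 9)

rng = np.random.default_rng(123)
p = draw_trader_performances(1_000, rng)
print(round(float(p.min()), 3), round(float(p.max()), 3))  # values lie in [0.5, 1)
print(round(float(p.mean()), 3))   # close to the theoretical mean 0.55
```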


Appendix 5: Exemplary list of business units' probabilities of success p ~ B(1, 9) from the interval [0.5, 1] for a model bank with 200 units

Units 1-25: 0.516 0.521 0.546 0.544 0.551 0.537 0.503 0.598 0.593 0.506 0.501 0.546 0.529 0.532 0.570 0.512 0.543 0.500 0.512 0.565 0.526 0.516 0.505 0.522 0.503
Units 26-50: 0.570 0.501 0.695 0.506 0.529 0.504 0.598 0.588 0.620 0.531 0.510 0.533 0.500 0.522 0.541 0.556 0.505 0.550 0.525 0.524 0.529 0.584 0.552 0.558 0.696
Units 51-75: 0.577 0.507 0.501 0.502 0.505 0.634 0.599 0.612 0.565 0.529 0.506 0.520 0.514 0.504 0.536 0.545 0.513 0.551 0.530 0.515 0.501 0.502 0.513 0.616 0.533
Units 76-100: 0.588 0.712 0.503 0.522 0.546 0.522 0.804 0.626 0.563 0.582 0.547 0.525 0.538 0.513 0.610 0.530 0.536 0.548 0.590 0.577 0.510 0.581 0.504 0.536 0.610
Units 101-125: 0.555 0.541 0.540 0.501 0.537 0.591 0.549 0.630 0.516 0.615 0.530 0.507 0.520 0.533 0.520 0.543 0.567 0.545 0.516 0.554 0.660 0.553 0.506 0.506 0.606
Units 126-150: 0.546 0.500 0.533 0.514 0.556 0.509 0.744 0.528 0.592 0.543 0.580 0.581 0.523 0.501 0.547 0.515 0.557 0.516 0.545 0.566 0.519 0.529 0.557 0.662 0.531
Units 151-175: 0.517 0.513 0.588 0.591 0.569 0.524 0.565 0.570 0.669 0.582 0.563 0.587 0.516 0.638 0.574 0.572 0.533 0.532 0.567 0.565 0.618 0.567 0.536 0.506 0.677
Units 176-200: 0.530 0.549 0.524 0.613 0.541 0.523 0.546 0.503 0.543 0.525 0.578 0.575 0.512 0.539 0.514 0.561 0.567 0.545 0.576 0.548 0.513 0.561 0.504 0.550 0.524


Appendix 6: Calculation of the trader types' occurrences through customization of the corresponding cumulative beta density function

$$\theta_{k} = F(p_{k+1}) - F(p_{k}), \quad k = 1,\ldots,K,$$

$$\text{where } F(x) = 1 - (2-2x)^{9}, \quad p_{k+1} = \underline{p} + \Delta p\,k \ \text{ and } \ p_{k} = \underline{p} + \Delta p\,(k-1),$$

$$K = 1{,}000, \quad \underline{p} = \tfrac{1}{2}, \quad \overline{p} = 1, \quad \Delta p = \frac{\overline{p}-\underline{p}}{K} = \tfrac{1}{2{,}000}.$$

$$\theta_{k} = \Big[1 - \Big(2 - 2\big(\tfrac{1}{2} + \tfrac{k}{2{,}000}\big)\Big)^{9}\Big] - \Big[1 - \Big(2 - 2\big(\tfrac{1}{2} + \tfrac{k-1}{2{,}000}\big)\Big)^{9}\Big] = \Big(1 - \tfrac{k-1}{1{,}000}\Big)^{9} - \Big(1 - \tfrac{k}{1{,}000}\Big)^{9}.$$

Appendix 7: Determination of the model's trader performances

$$p_{k} = \underline{p} + \Delta p\,(k-1) + \tfrac{1}{2}\big(\Delta p\,k - \Delta p\,(k-1)\big), \quad k = 1,\ldots,K,$$

$$\text{where } K = 1{,}000, \quad \underline{p} = \tfrac{1}{2}, \quad \overline{p} = 1, \quad \Delta p = \frac{\overline{p}-\underline{p}}{K} = \tfrac{1}{2{,}000}.$$

$$p_{k} = \tfrac{1}{2{,}000}\Big(\tfrac{1{,}999}{2} + k\Big), \quad k = 1,\ldots,K.$$
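The discretization of Appendices 6 and 7 can be reproduced and checked with a few lines of code; the following sketch uses K = 1,000 as in the text, while the variable names are assumptions of the sketch:

```python
import numpy as np

K = 1_000
p_low, dp = 0.5, 0.5 / K                        # interval [0.5, 1], dp = 1/2,000

k = np.arange(1, K + 1)
F = lambda x: 1 - (2 - 2 * x) ** 9              # CDF from Appendix 3
theta = F(p_low + dp * k) - F(p_low + dp * (k - 1))   # type occurrences, Appendix 6
p_k = p_low + dp * (k - 1) + dp / 2             # interval midpoints, Appendix 7

print(float(theta.sum()))                                     # sums to 1 (up to floating point)
print(float(p_k[0]), float(p_k[-1]))                          # 0.50025 and 0.99975
print(bool(np.allclose(p_k, (1 / 2_000) * (1_999 / 2 + k))))  # matches the closed form
```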


Appendix 8: Calculation of the beta density function of market trend types

$$f(q) = \frac{1}{B(\alpha,\beta)}\,q^{\alpha-1}(1-q)^{\beta-1}, \quad \text{where } B(\alpha,\beta) = \int_{\underline{q}}^{\overline{q}} q^{\alpha-1}(1-q)^{\beta-1}\,dq,$$

$$\alpha,\beta > 0, \quad \alpha = 2, \quad \beta = 2, \quad \underline{q} \le q \le \overline{q}, \quad \underline{q} = 0 \ \text{ and } \ \overline{q} = 1.$$

By integration by parts, $\int_{a}^{b} f(x)\,g'(x)\,dx = \big[f(x)\,g(x)\big]_{a}^{b} - \int_{a}^{b} f'(x)\,g(x)\,dx$:

$$B(2,2) = \int_{0}^{1} q\,(1-q)\,dq = \Big[-\tfrac{1}{2}\,q\,(1-q)^{2}\Big]_{0}^{1} + \tfrac{1}{2}\int_{0}^{1}(1-q)^{2}\,dq = \Big[-\tfrac{1}{6}(1-q)^{3}\Big]_{0}^{1} = \tfrac{1}{6}.$$

$$f(q) = 6\,q\,(1-q).$$

Appendix 9: Determination of the model's market trends

$$q_{k} = \tfrac{1}{2}\big(\Delta q\,k - \Delta q\,(k-1)\big) + \Delta q\,(k-1), \quad k = 1,\ldots,K,$$

$$\text{where } K = 1{,}000, \quad \underline{q} = 0, \quad \overline{q} = 1, \quad \Delta q = \frac{\overline{q}-\underline{q}}{K} = \tfrac{1}{1{,}000}.$$

$$q_{k} = \tfrac{1}{1{,}000}\Big(k - \tfrac{1}{2}\Big), \quad k = 1,\ldots,K.$$


Appendix 10: Calculation of the cumulative beta density function of market trend types

$$F(q) = \frac{B_{q}(\alpha,\beta)}{B(\alpha,\beta)}, \quad \text{where } B_{q}(\alpha,\beta) = \int_{\underline{q}}^{q} t^{\alpha-1}(1-t)^{\beta-1}\,dt,$$

$$\alpha,\beta > 0, \quad \alpha = 2, \quad \beta = 2, \quad \underline{q} \le q \le \overline{q}, \quad \underline{q} = 0, \quad \overline{q} = 1 \ \text{ and } \ B(\alpha,\beta) = \tfrac{1}{6}.$$

By integration by parts, $\int_{a}^{b} f(x)\,g'(x)\,dx = \big[f(x)\,g(x)\big]_{a}^{b} - \int_{a}^{b} f'(x)\,g(x)\,dx$:

$$F(q_{i}) = 6\int_{0}^{q_{i}} q\,(1-q)\,dq = 6\Big(\Big[-\tfrac{1}{2}\,q\,(1-q)^{2}\Big]_{0}^{q_{i}} + \tfrac{1}{2}\int_{0}^{q_{i}}(1-q)^{2}\,dq\Big)$$

$$= -3\,q_{i}\,(1-q_{i})^{2} - (1-q_{i})^{3} + 1 = -2\,q_{i}^{3} + 3\,q_{i}^{2}.$$


Appendix 11: Calculation of the market trend types' occurrences through customization of the corresponding cumulative beta density function

$$\psi_{k} = F(q_{k+1}) - F(q_{k}), \quad k = 1,\ldots,K,$$

$$\text{where } F(x) = -2x^{3} + 3x^{2}, \quad q_{k+1} = \underline{q} + \Delta q\,k = \Delta q\,k \ \text{ and } \ q_{k} = \underline{q} + \Delta q\,(k-1) = \Delta q\,(k-1),$$

$$K = 1{,}000, \quad \underline{q} = 0, \quad \overline{q} = 1, \quad \Delta q = \frac{\overline{q}-\underline{q}}{K} = \tfrac{1}{1{,}000}.$$

$$\psi_{k} = -2\,(\Delta q\,k)^{3} + 3\,(\Delta q\,k)^{2} - \Big(-2\,\big(\Delta q\,(k-1)\big)^{3} + 3\,\big(\Delta q\,(k-1)\big)^{2}\Big)$$

$$= -2\Big(\tfrac{k}{1{,}000}\Big)^{3} + 3\Big(\tfrac{k}{1{,}000}\Big)^{2} + 2\Big(\tfrac{k-1}{1{,}000}\Big)^{3} - 3\Big(\tfrac{k-1}{1{,}000}\Big)^{2}.$$
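Analogously, the market trend discretization of Appendices 8 to 11 can be cross-checked as follows (again an illustrative sketch with assumed variable names, K = 1,000 as in the text):

```python
import numpy as np

K = 1_000
dq = 1.0 / K

k = np.arange(1, K + 1)
F = lambda x: -2 * x ** 3 + 3 * x ** 2           # CDF from Appendix 10
q_k = (k - 0.5) / K                              # market trend types, Appendix 9
psi = F(dq * k) - F(dq * (k - 1))                # type occurrences, Appendix 11

print(float(psi.sum()))                          # sums to 1 (up to floating point)
print(bool(psi[K // 2 - 1] > psi[0]))            # True: trends near 0.5 are most likely
print(bool(np.allclose(psi, 6 * q_k * (1 - q_k) * dq)))  # close to f(q_k) * dq (midpoint rule)
```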


REFERENCES

Acciaio, B. and Penner, I. (2010a). Dynamic risk measures. Working paper.
Acciaio, B., Föllmer, H. and Penner, I. (2010b). Risk assessment for uncertain cash flows: Model ambiguity, discounting ambiguity, and the role of bubbles. Working paper.
Acerbi, C. (2002). Spectral measures of risk: A coherent representation of subjective risk aversion. Journal of Banking & Finance 26(7), 1505-1518.
Acerbi, C. and Tasche, D. (2002). On the coherence of expected shortfall. Journal of Banking & Finance 26(7), 1487-1503.
Albrecht, P. (2003). Risk based capital allocation. Working paper. Contribution prepared for: Encyclopedia of actuarial science.
Alexander, G. J., A. M. Baptista (2002). Economic Implications of Using a Mean-VAR Model for Portfolio Selection: A Comparison with Mean-Variance Analysis, Journal of Economic Dynamics & Control, 26(7/8), 1159-1193.
Althöfer, I. and Koschnick, K.-U. (1991). On the convergence of threshold accepting. Applied Mathematics and Optimization 24(1), 183-195.
Artzner, P., Delbaen, F., Eber, J.-M., and Heath, D. (1997). Thinking coherently. Risk 10(11), 68-71.
Artzner, P., Delbaen, F., Eber, J.-M., and Heath, D. (1999). Coherent measures of risk. Mathematical Finance 9(3), 203-228.
Artzner, P., Delbaen, F., Eber, J.-M., Heath, D. and Ku, H. (2007). Coherent multiperiod risk adjusted values and Bellman's principle. Annals of Operations Research 152(1), 5-22.
Auer, M. and von Pfoestl, G. (2011). Basel III Handbook. Accenture. http://www.accenture.com/SiteCollectionDocuments/PDF/Financia Services/Accenture-Basel-III-Handbook.pdf. Cited 15 Feb 2013.
Basak, S., A. Shapiro (2001). Value-at-Risk Based Risk Management: Optimal Policies and Asset Prices, Review of Financial Studies, 14(2), 371-405.
Basel Committee on Banking Supervision (1996a). Amendment to the Capital Accord to Incorporate Market Risks, Bank for International Settlements.
Basel Committee on Banking Supervision (1996b). Supervisory Framework for the Use of "Backtesting" in Conjunction with the Internal Models Approach to Market Risk Capital Requirements, Bank for International Settlements.
Basel Committee on Banking Supervision (2003). Sound Practices for the Management and Supervision of Operational Risk, Bank for International Settlements.

Basel Committee on Banking Supervision (2011). Basel III: A global regulatory framework for more resilient banks and banking systems, revised version, Bank for International Settlements.
Beeck, H., L. Johanning, B. Rudolph (1999). Value-at-Risk-Limitstrukturen zur Steuerung und Begrenzung von Marktrisiken im Aktienbereich, OR-Spektrum, 21(1-2), 259-286.
Bellman, R. (1954). On the theory of dynamic programming. Paper for the meeting of the American Mathematical Society.
Best, P. W. (1998). Implementing Value at Risk, Chichester.
Bion-Nadal, J. (2008). Dynamic risk measures: Time consistency and risk measures from BMO martingales. Finance and Stochastics 12(2), 219-244.
Birge, R. B. and Louveaux, F. (1997). Introduction to Stochastic Programming. New York, Springer.
Bock, R. K. and Krischer, W. (2013). The Data Analysis BriefBook, online version 16. http://rkb.home.cern.ch/rkb/titleA.html. Cited 29 April 2013.
Böve, R., C. Hubensack, A. Pfingsten (2006). Ansätze zur Ermittlung des Gesamtbankrisikos, Zeitschrift für das gesamte Kreditwesen, 59(13), 17-23.
Boyd, S. and Vandenberghe, L. (2009). Convex optimization. Cambridge University Press.
Buch, A. and Dorfleitner, G. (2008). Coherent risk measures, coherent capital allocations and the gradient allocation principle. Insurance: Mathematics and Economics 42(1), 235-242.
Burghof, H.-P. (1998). Eigenkapitalnormen in der Theorie der Finanzintermediation, Berlin.
Burghof, H.-P. and Müller, J. (2007). Risikokapitalallokation als Instrument der Banksteuerung. Zeitschrift für Controlling & Management 51(1), 18-27.
Burghof, H.-P. and Müller, J. (2009). Allocation of Economic Capital in Banking: A Simulation Approach. In: The VaR Modeling Handbook (ed. G. N. Gregoriou), 201-226. McGraw-Hill.
Burghof, H.-P. and Müller, J. (2012). Allocation of Economic Capital in Banking: A Simulation Approach. In: High Performance Computing in Science and Engineering '11 (eds. W. E. Nagel, D. B. Kröner and M. M. Resch), 541-549. Springer.
Burghof, H.-P. and Müller, J. (2013a). Parameterization of threshold accepting: The case of economic capital allocation. In: High Performance Computing in Science and Engineering '12 (eds. W. E. Nagel, D. B. Kröner and M. M. Resch), 517-530. Springer.
Burghof, H.-P. and Müller, J. (2013b). Optimal versus alternative economic capital allocation in banking. In: High Performance Computing in Science and Engineering '13 (eds. W. E. Nagel, D. B. Kröner and M. M. Resch), 579-589. Springer.

Burghof, H.-P. and Rudolf, B. (1996). Bankenaufsicht – Theorie und Praxis der Regulierung, Wiesbaden.
Burghof, H.-P. and Sinha, T. (2005). Capital allocation with value-at-risk – the case of informed traders and herding. Journal of Risk 7(4), 47-73.
Campbell, R., R. Huisman, K. Koedijk (2001). Optimal Portfolio Selection in a Value-at-Risk Framework, Journal of Banking & Finance, 25(9), 1789-1804.
Cheridito, P., Delbaen, F. and Kupper, M. (2006). Dynamic monetary risk measures for bounded discrete-time processes. Electronic Journal of Probability 11(3), 57-106.
Crouhy, M., D. Galai and R. Mark (1999). The New 1998 Regulatory Framework for Capital Adequacy: "Standardized Approach" versus "Internal Models", The Handbook of Risk Management and Analysis (ed. Carol Alexander), Chichester, 1-37.
Crouhy, M., S. M. Turnbull and L. Wakeman (1999). Measuring Risk-Adjusted Performance, Journal of Risk, 2(1), 4-35.
Denault, M. (2001). Coherent allocation of risk capital. Journal of Risk 4(1), 1-34.
Detlefsen, K. and Scandolo, G. (2005). Conditional and dynamic convex risk measures. Finance and Stochastics 9(4), 539-561.
Deutsche Bundesbank (2011). Basel III – Leitfaden zu den neuen Eigenkapital- und Liquiditätsregeln für Banken. http://www.bundesbank.de/Redaktion/DE/Downloads/Veroeffentlichungen /Buch_Broschuere_Flyer/bankenaufsicht_basel3_leitfaden.pdf?__blob= publicationFile. Cited 30 May 2013.
Dimson, E., P. Marsh (1995). Capital Requirements for Securities Firms, Journal of Finance, 50(3), 821-851.
Dowd, K. (1998). Beyond Value at Risk – The New Science of Risk Management, Chichester.
Dresel, T., R. Härtl, L. Johanning (2002). Risk Capital Allocation Using Value at Risk Limits if Correlations Between Traders' Exposures are Unpredictable, European Investment Review, 1(1), 57-64.
Erasmus, M., S. Morrison (undated report). Relating Risk to Profitability, Coopers and Lybrand.
Fama, E. (1970). Efficient Capital Markets: A Review of Theory and Empirical Work, Journal of Finance, 25(2), 383-417.
Föllmer, H. and Schied, A. (2002). Convex measures of risk and trading constraints. Finance and Stochastics 6(4), 429-447.
Frey, R. and McNeil, A. J. (2002). VAR and expected shortfall in portfolios of dependent credit risk: conceptual and practical insights. Journal of Banking & Finance 26(7), 1317-1334.

Frittelli, M. and Gianin, E. R. (2002). Putting order in risk measures. Journal of Banking & Finance 26(7), 1473-1486.
Froot, K. A. and Stein, J. C. (1998). Risk management, capital budgeting and capital structure policy for financial institutions: an integrated approach. Journal of Financial Economics 47(1), 55-82.
Fylstra, D. (2005). Introducing convex and conic optimization for the quantitative finance professional. Wilmott magazine, 18-22. http://www.solver.com/ConvexConicWilmott.pdf. Cited 15 Oct 2012.
Gaivoronski, A. and Pflug, G. (2005). Value at risk in portfolio optimization: properties and computational approach. Journal of Risk 7(2), 1-31.
Giese, G. (2006). A saddle for complex credit portfolio models. Risk 19(7), 84-89.
Gilli, M., Kellezi, E. and Hysi, H. (2006). A data-driven optimization heuristic for downside risk minimization, Journal of Risk 8(3), 1-19.
Gilli, M. and Schumann, E. (2010). Heuristic Optimization in Financial Modelling. Working paper. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1277114. Cited 20 Jul 2012.
Gilli, M. and Schumann, E. (2011). Optimal enough? Journal of Heuristics 17(4), 373-387.
Gilli, M. and Winker, P. (2009). Heuristic optimization methods in econometrics. In: Handbook of computational econometrics (eds. D. A. Belsley and E. J. Kontoghiorghes), 81-120. Wiley.
Glasserman, P. (2005). Measuring marginal risk contributions in credit portfolios. Journal of Computational Finance 9(2), 1-41.
Glasserman, P. and Li, J. (2005). Importance sampling for portfolio credit risk. Management Science 51(11), 1643-1656.
Gordy, M. B. (2003). A risk-factor model foundation for ratings-based bank capital rules. Journal of Financial Intermediation, 12(3), 199-232.
Gordy, M. B. and Lütkebohmert, E. (2007). A granularity adjustment for Basel II. Deutsche Bundesbank Discussion Paper Series 2: Banking and Financial Studies 2007(1), 1-32.
Gordy, M. B. and Marrone, J. (2010). Granularity adjustment for mark-to-market credit risk models. Working papers -- US Federal Reserve Board's Finance & Economic Discussion Series, 1-37.
Gourieroux, C., Laurent, J. P. and Scaillet, O. (2000). Sensitivity analysis of value at risk. Journal of Empirical Finance 7(3-4), 225-245.
Hull, J. C. (2012). Risk management and financial institutions. 3rd ed., Wiley, New Jersey.
Jobert, A. and Rogers, L. C. G. (2008). Valuations and dynamic convex risk measures. Mathematical Finance 18(1), 1-22.
Jorion, P. (2001). Value at Risk, 2nd ed., New York.

Jorion, P. (2007). Value at Risk, 3rd ed., New York.
Kalkbrener, M. (2005). An axiomatic approach to capital allocation. Mathematical Finance 15(3), 425-437.
Kalkbrener, M. (2009). An axiomatic characterization of capital allocations of coherent risk measures. Quantitative Finance 9(8), 961-965.
Kalkbrener, M., Lotter, H. and Overbeck, L. (2004). Sensible and efficient capital allocation for credit portfolios. Risk 17(1), 19-24.
Knuth, D. (1997). The Art of Computer Programming, volume 3, 3rd ed., Addison-Wesley.
Krokhmal, P., Palmquist, J. and Uryasev, S. (2001). Portfolio optimization with conditional value at risk objective and constraints. Journal of Risk 4(2), 43-68.
Laeven, R. J. A. and Goovaerts, M. J. (2004). An optimization approach to the dynamic allocation of economic capital. Insurance: Mathematics & Economics 35(2), 299-319.
LAM/MPI-tutorial (2012). One-step tutorial: MPI: It's easy to get started. http://www.lam-mpi.org/tutorials/one-step/ezstart.php. Cited 22 Oct 2012.
Larsen, N., Mausser, H. and Uryasev, S. (2001). Algorithms for optimization of value at risk. In: Financial Engineering, E-commerce and Supply Chain (eds. P. Pardalos and V. K. Tsitsiringos). Kluwer.
Laux, H., F. Liermann (2005). Grundlagen der Organisation – Die Steuerung von Entscheidungen als Grundproblem der Betriebswirtschaftslehre, 6. Aufl., Berlin.
Liberti, L., Nannicini, G. and Mladenovic, N. (2009). A good recipe for solving MINLPs. In: Matheuristics: Hybridizing metaheuristics and mathematical programming (Annals of information systems, volume 10, eds. V. Maniezzo, T. Stützle and S. Voß). Springer.
Lister, M. (1997). Risikoadjustierte Ergebnismessung und Risikokapitalallokation, Frankfurt am Main.
Lütkebohmert, E. (2009). Failure of saddle-point method in the presence of double defaults. Bonn Econ Discussion Papers 2009(19), 1-7.
Maringer, D. (2005a). Portfolio management with heuristic optimization (eds. H. Amman and B. Rustem). Springer.
Maringer, D. (2005b). Distribution assumptions and risk constraints in portfolio optimization. Computational Management Science 2(2), 139-153.
Maringer, D. and Winker, P. (2007). The hidden risks of optimizing bond portfolios under VAR. Journal of Risk 9(4), 39-58.
Marti, K. (2005). Stochastic Optimization Methods. Berlin, Springer.
Martin, R. (2009). Shortfall: who contributes and how much? Risk 22(10), 84-89.
Martin, R., Thompson, K. and Browne, C. (2001). VAR: who contributes and how much? Risk 14(8), 99-102.

Martin, R. and Wilde, T. (2002). Unsystematic credit risk. Risk Magazine, 15(11), 123-128.
Matten, C. (1996). Managing Bank Capital: Capital Allocation and Performance Measurement, Chichester.
Matten, C. (2000). Managing Bank Capital: Capital Allocation and Performance Measurement, 2nd ed., Chichester.
Mausser, H. and Rosen, D. (2008). Economic credit capital allocation and risk contributions. In: Handbook in OR & MS 15 (eds. J. R. Birge and V. Linetsky), 681-726. Elsevier.
Moscato, P. (1989). On evolution, search, optimization, genetic algorithms and martial arts: Towards memetic algorithms. Report 790, Caltech Concurrent Computation Program.
Moscato, P. (1999). Memetic algorithms: A short introduction. In: New ideas in optimization (eds. D. Corne, M. Dorigo and F. Glover), 219-234. McGraw-Hill.
Mundt, P. A. (2008). Dynamic risk management with Markov decision processes. Universitätsverlag Karlsruhe.
Penner, I. (2007). Dynamic convex risk measures: time consistency, prudence, and sustainability. PhD thesis, Humboldt-Universität zu Berlin, 2007.
Perea, C., Baitsch, M., González-Vidosa and Hartmann, D. (2007). Optimization of reinforced concrete frame bridges by parallel genetic and memetic algorithms. In: Proceedings of the Third International Conference on Structural Engineering, Mechanics and Computation (ed. A. Zingoni), 1790-1795. Millpress.
Perold, A. F. (2005). Capital allocation in financial firms. Journal of applied corporate finance 17(3), 110-118.
Pflug, G. (2000). Some Remarks on the value-at-risk and the conditional value-at-risk. In: Probabilistic Constrained Optimization: Methodology and Application (ed. S. Uryasev), 272-281, Kluwer, Dordrecht.
Rockafellar, R. T. and Uryasev, S. (2000). Optimization of conditional value-at-risk. Journal of Risk 2(3), 21-41.
Rockafellar, R. T. and Uryasev, S. (2002). Conditional value-at-risk for general loss distributions. Journal of Banking and Finance 26(7), 1443-1471.
Saita, F. (1999). Allocation of Risk Capital in Financial Institutions, Financial Management 28(3), 95-111.
Schierenbeck, H., M. Lister (2003). Ertragsorientierte Allokation von Risikokapital im Bankbetrieb, Basel.
Shapiro, A., Dentcheva, D., Ruszczynski, A. (2009). Lectures on stochastic programming: Modeling and theory. MPS/SIAM Series on Optimization. Philadelphia, Society for Industrial and Applied Mathematics (SIAM).

Sharpe, S. A. (1990). Asymmetric information, bank lending, and implicit contracts: a stylized model of customer relationship. Journal of Finance 45, 1069-1087.
Stein, J. C. (1997). Internal Capital Markets and the Competition for Corporate Resources, Journal of Finance, 52(1), 111-133.
Stoughton, N. M. and Zechner, J. (2007). Optimal capital allocation using RAROC™ and EVA®. Journal of Financial Intermediation 16, 312-342.
Straßberger, M. (2006). Risk limit systems and capital allocation in financial institutions. Banks and Bank Systems 1(4), 22-37.
Srinivasan, R. (2002). Importance Sampling: Applications in Communications and Detection. Springer.
Tasche, D. (2000). Conditional expectation as quantile derivative. Working paper, Technische Universität München.
Tasche, D. (2002). Expected shortfall and beyond. Journal of Banking & Finance 26(7), 1519-1534.
Tasche, D. (2004). Allocating portfolio economic capital to sub-portfolios. In: Economic capital: a practitioner's guide (ed. A. Dev), 275-302. Risk Books.
Tasche, D. (2008). Capital allocation to business units and sub-portfolios: the Euler principle. In: Pillar II in the new Basel Accord: the challenge of economic capital (ed. A. Resti), 423-453. Risk Books.
Tasche, D. (2009). Capital allocation for credit portfolios with kernel estimators. Quantitative Finance 9(5), 581-595.
Tsanakas, A. (2009). To split or not to split: capital allocation with convex risk measures. Insurance: Mathematics and Economics 44(2), 268-277.
Tutsch, S. (2008). Update rules for convex risk measures. Quantitative Finance 8(8), 833-843.
Wang, C.-P., Shyu, D., Liao, Y. C., Chen, M.-C., Chen, M.-L. (2003). A Model of Optimal Dynamic Asset Allocation in a Value-at-Risk Framework, International Journal of Risk Assessment & Management, 4(4), 301-309.
Wang, T. (1999). A class of dynamic risk measures. Working paper, University of British Columbia.
West, G. (2010). Coherent VAR-type measures. The southern African treasurer: Special issues on risk management, part 2, 20-23.
Williamson, O. E. (1975). Markets and Hierarchies: Analysis and Antitrust Implications, New York.
Wilson, T. (1992). RAROC Remodelled, Risk, 5(8), 112-119.
Wilson, T. (2003). Overcoming the Hurdle, Risk, 16(7), 79-83.
Yiu, K. F. C. (2004). Optimal Portfolios under a Value-at-Risk Constraint, Journal of Economic Dynamics & Control, 28(7), 1317-1334.
Zaik, E., J. Walter, J. Kelling and C. James (1996). RAROC at Bank of America: From Theory to Practice, Journal of Applied Corporate Finance, 9(2), 83-93.

Studienreihe der Stiftung Kreditwirtschaft an der Universität Hohenheim

Bände 1 - 11 sind nicht mehr lieferbar.
Band 12: Axel Tibor Kümmel: Bewertung von Kreditinstituten nach dem Shareholder Value Ansatz, 1994; 2. Aufl.; 1995.
Band 13: Petra Schmidt: Insider Trading. Maßnahmen zur Vermeidung bei US-Banken; 1995.
Band 14: Alexander Grupp: Börseneintritt und Börsenaustritt. Individuelle und institutionelle Interessen; 1995.
Band 15: Heinrich Kerstien: Budgetierung in Kreditinstituten. Operative Ergebnisplanung auf der Basis entscheidungsorientierter Kalkulationsverfahren; 1995.
Band 16: Ulrich Gärtner: Die Kalkulation des Zinspositionserfolgs in Kreditinstituten; 1996.
Band 17: Ute Münstermann: Märkte für Risikokapital im Spannungsfeld von Organisationsfreiheit und Staatsaufsicht; 1996.
Band 18: Ulrike Müller: Going Public im Geschäftsfeld der Banken. Marktbetrachtungen, bankbezogene Anforderungen und Erfolgswirkungen; 1997.
Band 19: Daniel Reith: Innergenossenschaftlicher Wettbewerb im Bankensektor; 1997.
Band 20: Steffen Hörter: Shareholder Value-orientiertes Bank-Controlling; 1998.
Band 21: Philip von Boehm-Bezing: Eigenkapital für nicht börsennotierte Unternehmen durch Finanzintermediäre. Wirtschaftliche Bedeutung und institutionelle Rahmenbedingungen; 1998.
Band 22: Niko J. Kleinmann: Die Ausgestaltung der Ad-hoc-Publizität nach § 15 WpHG. Notwendigkeit einer segmentspezifischen Deregulierung; 1998.
Band 23: Elke Ebert: Startfinanzierung durch Kreditinstitute. Situationsanalyse und Lösungsansätze; 1998.
Band 24: Heinz O. Steinhübel: Die private Computerbörse für mittelständische Unternehmen. Ökonomische Notwendigkeit und rechtliche Zulässigkeit; 1998.
Band 25: Reiner Dietrich: Integrierte Kreditprüfung. Die Integration der computergestützten Kreditprüfung in die Gesamtbanksteuerung; 1998.
Band 26: Stefan Topp: Die Pre-Fusionsphase von Kreditinstituten. Eine Untersuchung der Entscheidungsprozesse und ihrer Strukturen; 1999.
Band 27: Bettina Korn: Vorstandsvergütung mit Aktienoptionen. Sicherung der Anreizkompatibilität als gesellschaftsrechtliche Gestaltungsaufgabe; 2000.
Band 28: Armin Lindtner: Asset Backed Securities – Ein Cash flow-Modell; 2001.
Band 29: Carsten Lausberg: Das Immobilienmarktrisiko deutscher Banken; 2001.
Band 30: Patrik Pohl: Risikobasierte Kapitalanforderungen als Instrument einer marktorientierten Bankenaufsicht – unter besonderer Berücksichtigung der bankaufsichtlichen Behandlung des Kreditrisikos; 2001.
Band 31: Joh. Heinr. von Stein/Friedrich Trautwein: Ausbildungscontrolling an Universitäten. Grundlagen, Implementierung und Perspektiven; 2002.
Band 32: Gaby Kienzler, Christiane Winz: Ausbildungsqualität bei Bankkaufleuten – aus der Sicht von Auszubildenden und Ausbildern, 2002.
Band 33: Joh. Heinr. von Stein, Holger G. Köckritz, Friedrich Trautwein (Hrsg.): E-Banking im Privatkundengeschäft. Eine Analyse strategischer Handlungsfelder, 2002.
Band 34: Antje Erndt, Steffen Metzner: Moderne Instrumente des Immobiliencontrollings. DCF-Bewertung und Kennzahlensysteme im Immobiliencontrolling, 2002.
Band 35: Sven A. Röckle: Schadensdatenbanken als Instrument zur Quantifizierung von Operational Risk in Kreditinstituten, 2002.
Band 36: Frank Kutschera: Kommunales Debt Management als Bankdienstleistung, 2003.
Band 37: Niklas Lach: Marktinformation durch Bankrechnungslegung im Dienste der Bankenaufsicht, 2003.
Band 38: Wigbert Böhm: Investor Relations der Emittenten von Unternehmensanleihen: Notwendigkeit, Nutzen und Konzeption einer gläubigerorientierten Informationspolitik, 2004.
Band 39: Andreas Russ: Kapitalmarktorientiertes Kreditrisikomanagement in der prozessbezogenen Kreditorganisation, 2004.
Band 40: Tim Arndt: Manager of Managers – Verträge. Outsourcing im Rahmen individueller Finanzportfolioverwaltung von Kredit- und Finanzdienstleistungsinstituten, 2004.
Band 41: Manuela A. E. Schäfer: Prozessgetriebene multiperspektivische Unternehmenssteuerung: Beispielhafte Betrachtung anhand der deutschen Bausparkassen, 2004.
Band 42: Friedrich Trautwein: Berufliche Handlungskompetenz als Studienziel: Bedeutung, Einflussfaktoren und Förderungsmöglichkeiten beim betriebswirtschaftlichen Studium an Universitäten unter besonderer Berücksichtigung der Bankwirtschaft, 2004.
Band 43: Ekkehardt Anton Bauer: Theorie der staatlichen Venture Capital-Politik. Begründungsansätze, Wirkungen und Effizienz der staatlichen Subventionierung von Venture Capital, 2006.
Band 44: Ralf Kürten: Regionale Finanzplätze in Deutschland, 2006.
Band 45: Tatiana Glaser: Privatimmobilienfinanzierung in Russland und Möglichkeiten der Übertragung des deutschen Bausparsystems auf die Russische Föderation anhand des Beispiels von Sankt Petersburg, 2006.
Band 46: Elisabeth Doris Markel: Qualitative Bankenaufsicht. Auswirkungen auf die Bankunternehmungsführung, 2010.
Band 47: Matthias Johannsen: Stock Price Reaction to Earnings Information, 2010.
Band 48: Susanna Holzschneider: Valuation and Underpricing of Initial Public Offerings, 2011.
Band 49: Arne Breuer: An Empirical Analysis of Order Dynamics in a High-Frequency Trading Environment, 2013.
Band 50: Dirk Sturz: Stock Dividends in Germany. An Empirical Analysis, 2015.
Band 51: Sebastian Schroff: Investor Behavior in the Market for Bank-issued Structured Products, 2015.
Band 52: Jan Müller: Optimal Economic Capital Allocation in Banking on the Basis of Decision Rights, 2015.