From the April 01, 2008 issue of Futures Magazine

Using Leverage Space Model to define risk

The notion of optimal f is not merely a method of determining an optimal point in a risk profile. Done right, it can provide the foundation of a portfolio model superior to what is in use today. This approach, as applied to the multiple-component case, is called the Leverage Space Model.

The most common portfolio model in use today is Modern Portfolio Theory (MPT). It measures risk as the variance of returns. One difference between MPT and the Leverage Space Model is that the latter defines risk as drawdown, not return variance.

MPT, because it depends on the variance in the returns of its components as a major parameter, assumes the distribution of returns is statistically normal. The Leverage Space Model does not; it allows the various components to have different distributions of returns. So, while MPT, due to its normality assumption, is computationally ill-equipped to deal with, say, fat-tailed distributions, the Leverage Space Model has no trouble with these more realistic situations.

CROSSING DIMENSIONS

Let’s return to our coin-toss game from the first installment. To review, if the coin comes up tails, we lose $1. If it comes up heads, we gain $2. There are two bins, two scenarios, and each has a 0.5 probability of occurring. Yet, over time, this positive-expectation game does not deliver what we naively expect: the multiple we make on our stake after each play depends on the fraction of the stake we risk, and it drops off sharply once we bet beyond the optimal fraction (here, f = 0.25).
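For readers who want to verify this, here is a minimal Python sketch (ours, not part of the original article) that computes the geometric mean multiple on the stake per play for each candidate f and locates the peak of the curve:

```python
# Geometric growth per play of the 2:1 coin-toss game as a function of f,
# the fraction of the stake risked per play: a loss multiplies the stake
# by (1 - f), a win by (1 + 2f), each with probability 0.5.

def geo_mean_multiple(f):
    return ((1 + 2 * f) ** 0.5) * ((1 - f) ** 0.5)

best_f = max((i / 1000 for i in range(1000)), key=geo_mean_multiple)
print(best_f, geo_mean_multiple(best_f))   # peak near f = 0.25, about 1.06
# Past the peak the multiple falls, and far enough to the right it drops
# below 1.0: the positive-expectation game loses money over time.
```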

Now, let’s expand this to include two such games going on at once. With two games, we have a surface, a terrain in N + 1 dimensional, or 3D, space, where N is the number of games.

The good news is everything about the single case, discussed in Part I of this series, pertains here. What we know about drawdowns, the danger of being to the right of the peak, and reducing drawdowns arithmetically while reducing returns geometrically is still valid. If we had a portfolio of 100 components, 99 of which were optimal, we could still lose money if even one component was too far to the right of its peak.

This is an appropriate time to point out the geometric nature of the Leverage Space Model. Most traders trade in a quantity relative to the size of their stake, which is reflected in geometric returns. MPT, on the other hand, relies on arithmetic returns. As such, MPT does not take leverage into account. In a world of derivative vehicles and notional funding, a portfolio model must incorporate leverage.
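A short numeric check, with made-up returns, shows why the distinction matters to anyone trading a quantity relative to their stake:

```python
# Made-up two-period illustration: +50% then -40%.
returns = [0.50, -0.40]
arithmetic_mean = sum(returns) / len(returns)   # +5% per period on average
multiple = 1.0
for r in returns:
    multiple *= 1 + r                           # 1.5 * 0.6 = 0.9
print(arithmetic_mean, multiple)   # 0.05 vs 0.9: the arithmetic mean is
                                   # positive, yet the stake shrinks 10%
```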

THE FALLACY OF CORRELATIONS

Correlation is a major input in MPT. However, it tends to lead to poisonous assumptions when used in the traditional sense. To demonstrate, we can look at some correlation data for individual equities, equity indexes and commodities on a rolling 200-day window over 20 years. The data bears out the danger of relying on correlation coefficients.

Consider crude oil and gold, the correlation for which was 0.18 for the entire period. However, when crude oil moved in excess of three standard deviations, gold moved much more in lockstep, exhibiting a correlation of 0.61 on such extreme days. On more typical days when crude oil moved less than one standard deviation, gold moved nearly randomly with respect to it, showing a correlation of 0.09.

Ford (F) and Pfizer (PFE) show a similar pattern. On all days, the correlation is 0.15, but when the S&P 500 moves more than three standard deviations, the Ford/Pfizer correlation becomes 0.75. On days when the S&P 500 moves less than one standard deviation, the correlation between Ford and Pfizer shrinks to a mere 0.025.
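The conditioning behind these numbers is easy to express in code. The sketch below uses synthetic stand-in series rather than the article’s actual 20-year data, so its printed numbers will not match the 0.18/0.61/0.09 figures; it only shows the computation:

```python
import numpy as np

# Conditional correlation on synthetic stand-in return series.
rng = np.random.default_rng(0)
crude = rng.standard_normal(5000) * 0.02                  # stand-in returns
gold = 0.2 * crude + rng.standard_normal(5000) * 0.015    # loosely linked

sigma = crude.std()
extreme = np.abs(crude) > 3 * sigma    # big-move days for the first series
typical = np.abs(crude) < sigma        # quiet days

print(np.corrcoef(crude, gold)[0, 1])                     # all days
print(np.corrcoef(crude[extreme], gold[extreme])[0, 1])   # extreme days
print(np.corrcoef(crude[typical], gold[typical])[0, 1])   # typical days
```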

Clearly, counting on correlation fails you when you need it the most. Most holding periods — most days or most months — are quite typical, not three standard deviation events. However, there are those rare times when big moves do occur that make or break you.

The Leverage Space Model does not rely on correlations. “No link here” shows 20 simultaneous plays of two coin-toss games. The correlation between these two games is 0 (MPT would advise us to allocate 50% toward each game, placing us on a line between 0,0 and 1,1). The Leverage Space Model says, however, that you should optimally leverage to 0.23 and 0.23, giving you a total exposure of 46% of your bankroll. This result depends on the scenario parameters and the joint probabilities between them, and it is independent of the fickle nature of correlations. That fickleness dooms those following MPT during big-move, high-correlation events: the MPT investor finds himself far to the right of the peak of the f curve, in dangerous territory.
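The 0.23/0.23 figure can be checked by brute force. The sketch below assumes the two games are independent, so the four joint scenarios each carry probability 0.25:

```python
from itertools import product

# Brute-force search over the two-game leverage surface.
scenarios = [((2, 2), 0.25), ((2, -1), 0.25), ((-1, 2), 0.25), ((-1, -1), 0.25)]

def geo_mean(f1, f2):
    """Geometric mean multiple on the stake for an (f1, f2) pair."""
    g = 1.0
    for (r1, r2), p in scenarios:
        hpr = 1 + f1 * r1 + f2 * r2   # holding-period return multiple
        if hpr <= 0:
            return 0.0
        g *= hpr ** p
    return g

grid = [i / 100 for i in range(50)]
best = max(product(grid, grid), key=lambda fs: geo_mean(*fs))
print(best, geo_mean(*best))   # peaks at (0.23, 0.23): total exposure 46%
```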

UNDERSTANDING RISK

As stated, risk is defined in the Leverage Space Model as drawdown, and not variance in returns.

We have a certain initial equity in an account, and there is a percentage of that equity that we do not wish to lose. If our equity falls to that percentage, we say we are “ruined.” Thus, we are discussing risk as “risk of ruin,” or RR. So if we say we are ruined when we drop to 60% of our initial equity, we can express this as RR(0.6).

For our coin-toss game, there are two scenarios (heads or tails), giving us two bins. In a single play, either of those two scenarios may occur. If we extend this out to two plays, we have four possible trails; at three plays, eight; and so on. Because we know what we make or lose on each play, and we know what value we are using for f (0.25), we can inspect whether a given trail touches the lower barrier at which ruin is declared (0.6 of our starting equity, in this case). Because we know how many trails there are, and how many of those trails end up in ruin, we can determine the probability of ruin (see “Probability of ruin”).

The probability of ruin increases as the number of plays (and hence the number of trails) increases. Fortunately, the values are asymptotic; that is, they level off and approach, without ever quite touching, a certain value. It is this asymptotic value that we say RR(0.6) equals in the long run. In the case of our coin-toss game, trading at an f of 0.25, we find the probability of ever hitting 0.6 of our initial equity, in the long run, to be about 0.48. Thus, we have a risk of ruin of 48%, or RR(0.6, 0.25) = 0.48.
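The enumeration of trails can be collapsed by equity level, which makes the computation exact and fast. Here is a sketch along those lines, our own construction rather than the article’s code, following the parameters in the text:

```python
# Exact probability of ruin within a given number of plays, found by
# tracking the surviving probability mass play by play. At f = 0.25,
# each play multiplies equity by 1.5 or 0.75 with equal probability;
# a trail is ruined once it touches 0.6 of starting equity.

def risk_of_ruin(barrier=0.6, f=0.25, plays=100):
    up, down = 1 + 2 * f, 1 - f
    alive = {1.0: 1.0}       # equity multiple -> surviving probability
    ruined = 0.0
    for _ in range(plays):
        nxt = {}
        for eq, p in alive.items():
            for mult in (up, down):
                e = round(eq * mult, 12)
                if e <= barrier:
                    ruined += p * 0.5
                else:
                    nxt[e] = nxt.get(e, 0.0) + p * 0.5
        alive = nxt
    return ruined

for n in (10, 50, 100, 500):
    print(n, risk_of_ruin(plays=n))   # rises toward its long-run asymptote
# The article reports the long-run value RR(0.6, 0.25) = 0.48.
```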

This process can still be performed even if there is some serial relationship in the results. So, even if a tail were more likely than a head following a tail, and vice versa, we could still determine our RR values.

THE DRAWDOWN CONNECTION

All this said, RR is not exactly the same thing as drawdown. Ruin is a static lower barrier. With drawdown, that barrier moves ever higher whenever a new equity peak is established. Initially, the risk of ruin to a certain barrier b, RR(b), equals the risk of drawdown (RD) to that same barrier, RD(b). Yet whenever there is a new equity high, RR(b) remains the same, but the dollar barrier underlying RD(b) rises in proportion, to b multiplied by the new high equity.

For example, assume a fund of $1 million is established. At $600,000, the fund will be closed. If the fund proceeds to, say, $2 million, the point of ruin remains $600,000. The drawdown barrier (a drawdown of 1.0 - b, or 1.0 - 0.6 = 0.4, a 40% drawdown, in this case) initially equals the point of ruin. But when the fund doubles, the drawdown barrier doubles as well, to $1.2 million.

The mathematics for converting RR to a certain barrier can be amended to accommodate RD to that same percentage barrier (see “At the limit”). However, this brings to light a disconcerting fact: for any given b, RD(b) approaches 1.0 asymptotically. In other words, in the long run, for any portfolio, the probability of a drawdown of any magnitude, however large, approaches certainty. We can’t prevent it, but we can prepare for it.

These near-asymptotic levels are not reached immediately. Different values for b, different f sets and different time horizons will exhibit sub-asymptotic probabilities of particular drawdown amounts. Thus, it is entirely possible to determine the probability of a given drawdown not being exceeded, for a given f set (portfolio composition and weightings) over a given time window, be that a month, a quarter or longer.
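Such a finite-window probability is straightforward to estimate by simulation. In this sketch the window length and trial count are our assumptions, chosen to suggest roughly a quarter of daily plays:

```python
import random

# Monte Carlo estimate of the probability that a drawdown of 1 - b or
# worse occurs within a fixed window of plays of the coin-toss game at
# f = 0.25. Unlike the ruin barrier, the drawdown barrier trails the
# running equity peak.

def prob_drawdown(b=0.6, f=0.25, plays=63, trials=100_000):
    hits = 0
    for _ in range(trials):
        equity = peak = 1.0
        for _ in range(plays):
            equity *= (1 + 2 * f) if random.random() < 0.5 else (1 - f)
            peak = max(peak, equity)
            if equity <= b * peak:     # the barrier moves with the peak
                hits += 1
                break
    return hits / trials

print(prob_drawdown())   # chance of a 40% drawdown within roughly a quarter
```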

Ultimately, our goal is to pare out those locations on the landscape where the probability of a drawdown of 1-b or greater exceeds what we will tolerate. This creates holes in the landscape, such that we now have a terrain that, in essence, eliminates locations where a given set of f values for the respective components yields a portfolio with too high a probability of too large a drawdown (see “Holes in f”).

NEW WAY TO OPTIMAL

Because we cannot trade at f values that would place us in the visible crater in “Holes in f,” our process of finding the optimal f must be altered. Instead, we need to find the highest value of f along the rim of the crater.

However, as is obvious in “Holes in f,” several points around the rim of the crater have identical altitudes, as measured along the vertical axis. In such cases, we can employ a second criterion for optimality. For example, we can subjectively decide to select those combinations that are closest to the 0,0 point for the two f values. We can intuitively conclude that such a point would have less risk, all things being equal, than other points on the rim of the volcano.

It’s clearer to view the image of the terrain, with the additional constraint of the bias toward the 0,0 point, from above (see “A new angle”). The f pairs for our two components, the two coin-toss games in this case, that fall within our risk constraint are all in the blue area of the chart. Notice how little space we have to work with here.
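Combining the pieces above gives a small sketch of this paring-and-selecting step. The 10% tolerance, 63-play window and grid spacing here are illustrative assumptions, not the article’s parameters:

```python
import random
from itertools import product

# Keep only (f1, f2) pairs whose chance of a 40% drawdown inside the
# window stays under tolerance, then take the feasible pair with the
# highest geometric growth, breaking ties toward the 0,0 point.

PAYOFFS = (2, -1)   # heads, tails per unit risked in each game

def growth(f1, f2):
    """Geometric mean multiple over the four equally likely joint outcomes."""
    g = 1.0
    for r1, r2 in product(PAYOFFS, PAYOFFS):
        hpr = 1 + f1 * r1 + f2 * r2
        if hpr <= 0:
            return 0.0
        g *= hpr ** 0.25
    return g

def dd_prob(f1, f2, b=0.6, plays=63, trials=1000):
    """Monte Carlo frequency of a 1-b drawdown within the window."""
    hits = 0
    for _ in range(trials):
        eq = peak = 1.0
        for _ in range(plays):
            eq *= 1 + f1 * random.choice(PAYOFFS) + f2 * random.choice(PAYOFFS)
            peak = max(peak, eq)
            if eq <= b * peak:
                hits += 1
                break
    return hits / trials

grid = [i / 25 for i in range(13)]          # f values 0.00, 0.04, ..., 0.48
feasible = [fs for fs in product(grid, grid) if dd_prob(*fs) <= 0.10]
best = max(feasible, key=lambda fs: (growth(*fs), -(fs[0]**2 + fs[1]**2)))
print(best, growth(*best))
```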

Algorithmically, then, the exercise of finding the highest point, or points, in the N+1 dimensional landscape is an iterative one. We have determined the following beforehand:

1. The greatest drawdown we are willing to endure, which equals 1-b (so, if we are willing to see no more than a 20% drawdown, we would determine b as 0.8).

2. An acceptable probability of seeing a 1-b drawdown, RD(b).

3. The time period we will examine for the points above.

The only thing left is to determine the N f values that result in the highest point in the N+1 dimensional landscape after the landscape has been pared away by the three steps above.

The process should be performed using a genetic algorithm on candidate f value sets. This technique is well-suited to the non-smooth surfaces that result from paring the terrain for drawdown considerations.

For each candidate f set, calculate its altitude in the N+1 dimensional landscape, then check that point against the drawdown constraints imposed. If the constraints are violated, a value of 0 is returned as the objective function value for that f set; otherwise, the multiple on the stake for that f set is returned. The process continues until the conditions of the genetic algorithm are satisfied.
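What follows is a minimal sketch of that objective paired with a deliberately simple evolutionary loop. The payoffs match the coin-toss games; the population size, mutation scale, trial counts and 63-play window are our illustrative assumptions, not the article’s prescriptions:

```python
import random

PAYOFFS = (2, -1)                  # heads, tails per unit risked
N = 2                              # number of components (games)
B, TOLERANCE, PLAYS = 0.6, 0.10, 63

def simulate(fs, trials=300):
    """Median terminal multiple and frequency of a 1-B drawdown."""
    finals, dd_hits = [], 0
    for _ in range(trials):
        eq, peak, hit = 1.0, 1.0, False
        for _ in range(PLAYS):
            eq *= 1 + sum(f * random.choice(PAYOFFS) for f in fs)
            peak = max(peak, eq)
            if eq <= B * peak:
                hit = True
        dd_hits += hit
        finals.append(eq)
    finals.sort()
    return finals[trials // 2], dd_hits / trials

def objective(fs):
    mult, dd = simulate(fs)
    return 0.0 if dd > TOLERANCE else mult    # pared-away terrain scores 0

population = [[random.uniform(0, 0.5) for _ in range(N)] for _ in range(20)]
for _ in range(20):                           # generations
    population.sort(key=objective, reverse=True)
    parents = population[:10]
    population = parents + [
        [max(0.0, min(0.5, f + random.gauss(0, 0.02)))
         for f in random.choice(parents)]
        for _ in range(10)]
print(population[0])                          # best surviving f set
```

Note that the objective here is stochastic (it is estimated by simulation), which is one reason a population-based search is a better fit than a gradient method on this pared, non-smooth terrain.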

PUTTING IT ALL TOGETHER

The reasons to opt for the Leverage Space Model of portfolio construction:

1. Risk is defined as drawdown, not variance in returns.

2. The fallacy and danger of correlation are eliminated.

3. It is valid for any distributional form; fat tails are addressed.

4. The Leverage Space Model is about leverage, which is not addressed in the traditional models.

Note, in the graphic “A new angle,” how precious few of our possible allocation combinations fall within what is considered safe.

Also notice the line between the 0,0 point and the 1,1 point. MPT advocates allocating at any point along this line, based on preference. Clearly, when superimposed on the Leverage Space Model, such a prescription is inadequate for the modern leveraged trader.

Finally, in looking at this graphic, you can see how certain money management heuristics in trading have evolved, such as the “2% rule” (never risk more than 2% per trade). Clearly, the closer to 0,0 one is, the more likely one is to be in the safe zone. Yet, absent a framework for seeing the consequences of such actions, they reduce returns geometrically while reducing drawdowns only arithmetically. These rules have evolved because we have been operating without a framework for examining these aspects of trading.

We have seen how the question of what quantity to assume for a given position is every bit as crucial to profit or loss as the timing of the position or the instrument chosen. For the latter, we have many tools to assist us in selecting what to buy or sell. Yet the decision regarding quantity has heretofore existed in dark oblivion. Hopefully, you can see that the Leverage Space Model, aside from being a superior portfolio model, provides a paradigm where one had not existed; absent such a paradigm, decisions about quantity have been inexplicable and arbitrary when it comes to the question of “How much?”

Ralph Vince is a recognized expert in portfolio analysis. He can be reached at rvince99@hotmail.com. For a more technical discussion, see “The Handbook of Portfolio Mathematics” (John Wiley & Sons, 2007).
