From the June 01, 2012 issue of Futures Magazine

# Managing portfolio risk with Leverage Space

Modern portfolio theory defines an investment strategy to meet a given investor’s needs. It is based on using simple statistics to design a mixture of low-correlated assets to minimize risk and maximize returns. However, things don’t always go as planned.

Different investment sectors become more correlated just at the time we need non-correlation the most. For example, utility, oil and technology stocks usually have a correlation between 0.2 and 0.4, but during market shocks and so-called “black swan” events, these correlations can reach 0.7, 0.8 or even higher.

So, just when you need the diversification most, classic portfolio theory can fail you. One reason is that its application methods are derived from theories that assume an infinite number of trials. In reality, we trade a particular size, method or mix of markets for a finite number of trials and move on. These techniques also assume the data follow a normal distribution, which they do not. For example, five-standard-deviation events should happen about once every 100 years under a true normal distribution, not every four to eight years as we have seen in practice.

Traders need an alternative. Fortunately, there is one. It is called the Leverage Space Model. Developed by Ralph Vince, who is known for introducing the trading community to optimal f, Leverage Space is still evolving and is an important area of research. (For an introduction to this concept, see Vince’s “Using Leverage Space to define risk,” March 2008.) When Leverage Space is fully mature, it could become one of the most important underpinnings of investment theory in the next decade and beyond.

Here, we examine the theoretical basis for Leverage Space and walk through some examples of its use. One word of caution: Advancing our understanding into this frontier requires a strong foundation. A basic knowledge of Vince’s concepts is assumed (see link, above).

## Is this a good game?

Assume we have a game in which 49 times out of 50 you win \$1, and one time out of 50 you lose \$50. This is a negative expectation game. If we use classic probability analysis, assuming an infinite number of trials, our calculation is:

1 × 0.98 + (–50) × 0.02 = –0.02

Classic probability theory says do not play this game, because it has a negative expectation. It is a losing game if we play to infinity. But in reality, whether it is a good game depends on how long you play it, and how you play it while you do. Before we look at different ways to play this game, we need to review some basics of probability theory.

First, let’s look at how we calculate the probability of “n” consecutive events occurring. With a fair coin toss, we have a 50/50 chance on heads or tails. In two consecutive tosses, we have four possible cases: HH, HT, TH, TT. Only one offers HH, or one out of four cases, so the chance of two consecutive heads is 25%.

Now, consider the possible combinations for three consecutive tosses: HHH, HTH, THH, TTH, HHT, HTT, THT, TTT. This gives us eight combinations, and HHH has a chance of one out of eight, or 12.5%.

To find the probability of “n” consecutive occurrences of an event, we simply raise the probability of a single occurrence to a power equal to the number of consecutive events. For example, the chance our coin will come up two heads in a row is 0.50 squared, or 0.25 (25%). Three in a row is 0.50 cubed, or 0.125 (12.5%).

Returning to our initial example, our probability of winning is 98%. The chance of winning two in a row is 96.04%. The chance of winning 10 in a row is 81.7%. This means that if we play the game 10 times and stop, we have an 81.7% chance of winning \$10 and an 18.3% chance of suffering at least one \$50 loss.
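These run probabilities are easy to verify with a few lines of Python (an illustrative sketch; the function name `run_prob` is our own):

```python
def run_prob(p: float, n: int) -> float:
    """Probability of n consecutive independent events, each with probability p."""
    return p ** n

print(run_prob(0.50, 2))             # two heads in a row: 0.25
print(run_prob(0.50, 3))             # three heads in a row: 0.125
print(round(run_prob(0.98, 10), 3))  # ten straight wins in our game: 0.817
```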

Now, let’s redefine our game. We will allow reinvestment and stop after our first loss. We start with \$1,000 as our first bet. After 10 consecutive wins, each of which doubles our stake, our initial \$1,000 becomes \$1,024,000. This has an 81.7% chance of occurring.

How much we win — and then lose with the first loss — depends on when the first loss occurs, shutting down our game. But the worst case would be losing \$25,600,000 (based on a \$1,000 starting stake and reinvestment). When we lose, we go bankrupt. However, up to that point, we have a fairly strong chance of living the good life.
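The doubling game can be checked with simple arithmetic. The sketch below assumes, as in the text, that each win doubles the stake and that a loss costs 50 times the amount bet:

```python
stake = 1_000
for win in range(10):
    stake *= 2          # each of ten straight wins doubles our stake
print(stake)            # 1,024,000 after ten consecutive wins

# Worst case: the loss arrives on the tenth play, after nine wins
bet = 1_000 * 2 ** 9    # stake going into play ten: $512,000
print(50 * bet)         # losing 50 times the bet: $25,600,000
```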

So, is this a good game? From Long-Term Capital Management to the companies behind the mortgage crisis, many financial companies have thought so. The chance of their black swan event may have been less than one in 50, and they certainly bet less than their entire stake with each trade, but the rules of their games were similar. (Well, there was one key difference — many of these firms played with the safety net of a government bailout.)

## Real-world analysis

Real trading is based on a stream of returns, continuous numbers, not coin flips. We manage these streams by building a probability matrix. The best way to do this is to bin our data.

First, we calculate the range of the data and create bins. We then calculate our joint probability tables. We will use the equity curves of three different systems. “Data bins” (below) is an example from chapter four of Vince’s book, “The Leverage Space Trading Model.”

The first step in building a joint probability table is to process the equity data into period-to-period differences. Next, we break those differences into bins. The max, min and range are needed to do this:

|       | MarketSysA | MarketSysB | MarketSysC |
|-------|-----------:|-----------:|-----------:|
| Max   | 136        | 448        | 799        |
| Min   | –108       | –735       | –393       |
| Range | 244        | 1183       | 1192       |

We decide to make five bins for each system’s results. The bins do not have to be equally spaced, but we will do so to simplify our example. Here are the bins:

| Bin | MarketSysA | MarketSysB | MarketSysC |
|-----|------------|------------|------------|
| 1 | below –\$108.00 | below –\$735.00 | below –\$393.00 |
| 2 | –\$108.00 to –\$26.67 | –\$735.00 to –\$340.67 | –\$393.00 to \$4.33 |
| 3 | –\$26.67 to \$54.67 | –\$340.67 to \$53.67 | \$4.33 to \$401.67 |
| 4 | \$54.67 to \$136.00 | \$53.67 to \$448.00 | \$401.67 to \$799.00 |
| 5 | above \$136.00 | above \$448.00 | above \$799.00 |

Each bin is represented by its mid-point. We then record the number of actual occurrences for each of the combinations between the three systems. The actual number of records and the number of occurrences are used to calculate the probability of each combination.
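The binning and counting steps can be sketched in Python. This is our own illustration, not Vince's code: it assumes per-period P/L lists for each system, uses three equal-width interior bins between each system's min and max plus two open-ended bins (five total, as above), and counts joint occurrences:

```python
from collections import Counter

def bin_index(x, lo, hi, k=5):
    """Assign x to one of k bins: bin 0 below lo, bin k-1 above hi,
    and k-2 equal-width bins spanning [lo, hi]."""
    if x < lo:
        return 0
    if x > hi:
        return k - 1
    width = (hi - lo) / (k - 2)
    return min(1 + int((x - lo) / width), k - 2)

def joint_probabilities(streams, k=5):
    """Bin each system's per-period differences, count joint bin combinations,
    and keep only combinations that actually occurred."""
    n = len(streams[0])
    ranges = [(min(s), max(s)) for s in streams]
    combos = Counter(
        tuple(bin_index(s[t], lo, hi, k) for s, (lo, hi) in zip(streams, ranges))
        for t in range(n)
    )
    return {combo: count / n for combo, count in combos.items()}
```

Because the `Counter` only ever records combinations that occur, the empty cells of the full 125-row table are dropped automatically, which is the condensing step described below.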

In real data sets, over longer holding periods such as monthly or yearly, we often have many combinations without any occurrences. We can address this by adding data or by replacing some of the lower-performing cases with worst-case, black swan scenarios. We also must decide how many holding periods these black swans should last so we can create multiple records that simulate a real event. If we test over 10 to 20 years of data, this is important because there likely will be a black swan case in the data set.

In our example, we have 13 data records but 125 possible combinations across the five bins, so most individual combinations have very small probabilities. Our next step is to condense the table to only combinations with supporting cases (see “Joint scenarios”).

Because many of our possible cases did not occur in our data set, we remove these cases from the joint scenarios table. So, our table has been pared down to only 12 rows, not our original 125 possible combinations. We then set n = 12 for calculation purposes. At this point, we have all of the information we need to perform the Leverage Space calculations.

## Calculating Leverage Space

So, we have three systems (N = 3) and n = 12. We want to determine our holding period return, or HPR, for a given set of f values — of which there are N, or 3 — so we seek the maximum geometric mean HPR, or GHPR( f1, f2, f3).

We could solve, say, for all values of f1, f2 and f3 and plot out the N + 1 – dimensional surface of leverage space (in this case, a four-dimensional surface), or we could apply an optimization algorithm, such as the genetic algorithm, to seek the maximum “altitude” of the curve. We will focus on the application part of the process that isn’t covered in more generalized texts on mathematical optimization. Although this discussion calls on some equations that may be intimidating for those whose algebra is a bit rusty, “Seeking HPRs” (right) summarizes the calculations.

Notice that to determine GHPR( f1, f2, f3), we must find the n HPR( f1, f2, f3) values and combine them (following Vince’s formulation, with p(k) as the probability of row k):

GHPR( f1, f2, f3) = [ HPR(1) × HPR(2) × … × HPR(n) ] ^ (1 / (p(1) + p(2) + … + p(n)))

In other words, we go through each row in the joint probabilities table, calling each row “k,” and determine an HPR(k, f1, f2, f3) for each row as follows:

HPR(k, f1, f2, f3) = [ 1 + Σ(i = 1 to N) f(i) × (–PL(k, i) / BL(i)) ] ^ p(k)

where PL(k, i) is the outcome of market system i in row k and BL(i) is that system’s biggest loss (a negative number).

Notice that inside the HPR( f1 · · · fN)k formula, there is the iteration through each column — each of the N market systems — of which we discern the sum:

Σ(i = 1 to N) f(i) × (–PL(k, i) / BL(i))

Assume we are solving for the f values of 0.1, 0.4 and 0.25 for MarketSysA, MarketSysB and MarketSysC, respectively. We would figure our HPR(0.1, 0.4, 0.25) at each row in our joint probabilities table, each k, as shown in the f columns of “Seeking HPRs.” Summing the f columns across each row and adding 1.00 gives us that row’s HPR.
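A minimal Python sketch of these calculations, assuming a joint-scenarios table expressed as (probability, P/L-per-system) rows and a biggest-loss figure per system (the function name `ghpr` is ours):

```python
def ghpr(f, rows, biggest_loss):
    """Geometric mean holding-period return for allocation vector f.
    rows: (probability, [P/L per system]) for each joint scenario;
    biggest_loss: each system's largest loss (a negative number)."""
    total_p = sum(p for p, _ in rows)
    twr = 1.0
    for p, pls in rows:
        hpr = 1.0 + sum(fi * (-pl / bl)
                        for fi, pl, bl in zip(f, pls, biggest_loss))
        twr *= hpr ** p          # probabilities enter as exponents
    return twr ** (1.0 / total_p)

# Sanity check on a single two-to-one coin toss (win +2, lose -1 per unit):
rows = [(0.5, [2.0]), (0.5, [-1.0])]
print(ghpr([0.25], rows, [-1.0]))  # geometric mean of 1.5 and 0.75: about 1.0607
```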

This example uses fixed f values for each system. In reality, we would optimize the f value based on given constraints and time horizons. Then, we would select those that meet our constraints.

A common set of constraints is to limit drawdowns. The joint probability tables over a given time horizon are used to calculate this. Say we want to find the optimal f and f dollar values that limit drawdown to 20%, with that level breached less than 20% of the time. We perform multiple-horizon analysis, as we did for the multiple coin-toss game, and calculate returns. We cap our total wealth relative (TWR) — our final stake after compounding — at 1.00, then look at the HPR for the next time horizon. The difference between the TWRs of the previous and current horizons is the run up/down. For example, if TWR is 0.80 at the next horizon, that implies a 20% drawdown.
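One way to check such a drawdown constraint is a Monte Carlo pass over the joint-scenario rows. The sketch below is our own construction, not Vince's published code: it caps TWR at 1.00, as described above, and estimates how often the capped equity dips by more than `max_dd` within the horizon:

```python
import random

def prob_drawdown(f, rows, biggest_loss, horizon, max_dd, trials=20_000):
    """Monte Carlo estimate of the chance that drawdown exceeds max_dd
    within `horizon` periods, sampling rows of the joint-scenario table."""
    probs = [p for p, _ in rows]
    hprs = [1.0 + sum(fi * (-pl / bl)
                      for fi, pl, bl in zip(f, pls, biggest_loss))
            for _, pls in rows]
    hits = 0
    for _ in range(trials):
        twr = 1.0
        for hpr in random.choices(hprs, weights=probs, k=horizon):
            twr = min(twr * hpr, 1.0)   # cap TWR at 1.00, per the method above
            if twr < 1.0 - max_dd:      # equity fell more than max_dd below the cap
                hits += 1
                break
    return hits / trials
```

An f vector that satisfies the constraint is one for which this estimate stays below the allowed breach frequency (20% in the example above).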

The first example is based on 30 time horizons. The first column is the optimal f value. The second column is the f dollar value, or how many dollars of equity we require to trade one unit. The higher the f dollar value, the more conservative the money management.

|         | Optimal f | One unit per |
|---------|----------:|-------------:|
| MktSysA | 0.24      | \$446        |
| MktSysB | 0.024     | \$31,197     |
| MktSysC | 0.217     | \$1,814      |

The f values change if we use a different time horizon. Here are the results for 10:

|         | Optimal f | One unit per |
|---------|----------:|-------------:|
| MktSysA | 0.25      | \$432        |
| MktSysB | 0.003     | \$270,612    |
| MktSysC | 0.228     | \$1,729      |

Using a 10-period time horizon, we can see that we almost eliminate MarketSysB. We go from trading one unit per about \$31,000 in the account to one unit per \$270,000 in the account.

Here are the results for a time horizon of 100:

|         | Optimal f | One unit per |
|---------|----------:|-------------:|
| MktSysA | 0.18      | \$601        |
| MktSysB | 0.007     | \$104,630    |
| MktSysC | 0.125     | \$3,146      |

Now, with a time horizon of 1,000, we get the following:

|         | Optimal f | One unit per |
|---------|----------:|-------------:|
| MktSysA | 0.119     | \$905        |
| MktSysB | 0.031     | \$23,481     |
| MktSysC | 0.111     | \$3,539      |

From the previous results, we can see that the time horizon is significant and can affect each system in different ways. This is important as we advance our understanding of Leverage Space.

## Robust method

One problem with optimal f in application is that its value changes over time, and we can’t know its future value.

However, we can estimate it so that we minimize the downside of missing the mark. Optimal f is bounded between 0 and p, where p is the percentage of winning trades (assuming a system with positive expectation). If we base our estimate of f for Leverage Space on p, we just need to worry about how stable p is as an estimate.

Because f can be between 0 and p, the midpoint is p/2. As our trading time horizon gets longer, the cost of errors increases. Using p/2 minimizes the cost of error at the extremes, when the true optimal f turns out to be 0 or 'p ('p is the future winning percentage). It minimizes the outliers for f, which reduces the statistical variance considerably. This gives us a quick way to estimate optimal f without using the joint probabilities table or genetic algorithms.
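The midpoint argument can be illustrated with a small search. Here we use worst-case distance to the true optimal f as a simplified stand-in for "cost of error" (the value of p is hypothetical):

```python
p = 0.40  # hypothetical winning percentage; the true optimal f lies in [0, p]

def worst_case_miss(guess, p):
    """Worst-case distance from our guess to the true optimal f,
    which could land anywhere between 0 and p."""
    return max(guess, p - guess)

candidates = [i / 100 for i in range(0, 41)]
best_guess = min(candidates, key=lambda g: worst_case_miss(g, p))
print(best_guess)  # p/2 = 0.20 minimizes the worst-case miss
```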

The robust method builds on this concept as another effort to trade on the conservative side of optimal f to handle the worst case with minimum penalty. It is designed to handle black swan events even when none of them might be in the data set. In this approach, we divide each component 'p/2 by N, which is the same as multiplying it by 1/N. Thus, our f best guess is:

f = ('p/2) × W, where W = 1/N

All of the Ws for the components sum to 1.00. (You could use any scheme you wanted on the Ws, as long as they all sum to 1.00.) That’s because we’re assuming the worst case: That the correlations of all of the markets in our portfolio go to 1.00.

We then can calculate the f dollar values by simply dividing the absolute value of the largest losing trade by f. Because we can’t be sure that the current largest losing trade will be the largest loser over our time horizon, we can apply a volatility measure (such as some multiple of average true range) and increase it accordingly.
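The robust f dollar calculation reduces to one line; the sketch below uses hypothetical inputs (three systems, 40% winners, a \$500 largest loss per unit):

```python
def robust_f_dollars(win_pct, largest_loss, n_systems):
    """Dollars of equity per unit traded: |largest loss| / f,
    where f = ('p / 2) * (1 / N) per the robust method."""
    f = (win_pct / 2.0) * (1.0 / n_systems)
    return abs(largest_loss) / f

# Hypothetical numbers: three systems, 40% winners, $500 largest loss per unit
print(robust_f_dollars(0.40, -500.0, 3))
```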

If correlation goes to +1.00, we need to scale by the number of components. For example, if the optimal f on one component — say a two-to-one coin toss — is 0.25, and we are going to play two of these, when r=0, the peak is 0.23, 0.23. However, if r=1, the peak is 0.125, 0.125. So, if we take the optimal for a market system, traded alone, and divide it by the number of market systems, we determine the peak of the landscape when r=1 between all components. “Programming leverage space” (next page) walks through the automation of this process in a programming environment.
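The coin-toss figures in this paragraph can be reproduced by brute force. The sketch below (our own grid search, not Vince's code) builds the joint scenarios for two independent two-to-one coin tosses, where the biggest loss per toss is –1 so –PL/BL reduces to PL itself:

```python
# Joint scenarios for two independent two-to-one coin tosses:
# each toss wins +2 or loses -1 per $1 bet, each with probability 0.5
outcomes = [2.0, -1.0]
rows = [(0.25, (a, b)) for a in outcomes for b in outcomes]

def ghpr2(f1, f2):
    """Geometric mean HPR for the pair (row probabilities as exponents)."""
    g = 1.0
    for p, (a, b) in rows:
        g *= (1.0 + f1 * a + f2 * b) ** p
    return g

grid = [i / 100 for i in range(0, 41)]
best = max(((f1, f2) for f1 in grid for f2 in grid), key=lambda fs: ghpr2(*fs))
print(best)           # peak near (0.23, 0.23) when r = 0

best_single = max(grid, key=lambda f: ((1 + 2 * f) ** 0.5) * ((1 - f) ** 0.5))
print(best_single)    # 0.25 for one toss played alone; 0.25/2 = 0.125 each at r = 1
```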

Leverage Space and its potential in applied money management is just dawning on the trading community. This approach is an exciting area of research, and will remain so for years, providing a rich source of ideas for future advancements in trading technology.

## Programming leverage space

We can program the robust method for Leverage Space. We'll show an example using TradersStudio code on a simple channel breakout trading system. The system code is below.

```
Sub CHANBREAKOUTLS(SLen)
    Dim MinMove
    MinMove = GetActiveMinMove()
    ' Sell a breakout one tick below the SLen-bar low
    Sell("ChanSell", 1, Lowest(Low, SLen, 0) - MinMove, Stop, Day)
    ' Mirror the order as a virtual trade to track hypothetical performance
    VirtualSell("ChanSell", 1, Lowest(Low, SLen, 0) - MinMove, Stop, Day)
End Sub
```

Next, we have our Leverage Space function, which calculates the f dollar value:

```
Function LeverRobustDollars(mkt As TSProcessor.IMarket, Sess As TSProcessor.ISession)
    Dim LLT
    Dim WinPer
    Dim A As Array
    ' Floor the winning percentage at 30% so we can trade before any wins occur
    WinPer = Max(mkt.VirPctProfit, 30)
    A = mkt.DataArray(0, "Range")
    ' Use the larger (more negative) of the largest virtual loss and
    ' three times the 40-bar average range, in dollars
    LLT = Min(mkt.VirLargestLoss, -3 * Average(A, 40, 0) * mkt.BigPointValue)
    If WinPer <> 0 Then
        ' f$ = |largest loss| / (('p/2) / N)
        LeverRobustDollars = (LLT * -1) / (((WinPer / 100) / 2) / Sess.MarketCount)
    End If
End Function
```

We have done a few interesting things in this function, in that it is not a pure robust method. First, we impose a minimum winning percentage of 30%. This way, we can still trade when we start without any winning trades. Next, we increase the largest losing trade if it is not at least three times the average range. This helps us avoid problems caused by curve fitting.

Finally, we have a full trade plan. It takes a start date for trading (SDate) and a multiplier (Mult) that allows us to adjust the dollar value of f uniformly for all markets. This comes in handy if, for example, we have a return of 30% and a drawdown of 10% and we would like to allow a 20% drawdown; we can use the Mult input to adjust that leverage. Mult2 allows us to remove the P/L from open trades if we set it to 0; if we set Mult2 to 1, it includes this profit.

Other adjustments to this trade plan calculate the dollar value of f for long and short trades separately.

```
Sub LS_Test1(SDate, Mult, Mult2)
    Dim M As Integer
    Dim S As Integer
    Dim DollarsPerMarket
    Dim StartAccount
    Dim DynMargin
    Dim LSDollar
    Dim ActiveCount
    Dim Curdate
    ' Loop over every market in the session
    For M = 0 To TradePlan.Session(0).MarketCount - 1
        ' Trade only once equity per market exceeds the Leverage Space
        ' f dollar value and we have passed the start date
        If DollarsPerMarket > LSDollar And Curdate >= MigrateDate(SDate) Then
            If (DollarsPerMarket / LSDollar) >= 1 Then
                ' (position-sizing logic omitted in the original listing)
            End If
        Else
            ' (skip-market logic omitted in the original listing)
        End If
    Next
End Sub
```

The resources below offer a wealth of knowledge regarding Leverage Space and optimal f.

**Leverage Space**

- “Risk-Opportunity Analysis,” by Ralph Vince (available on Amazon.com in 2012)
- “The Leverage Space Trading Model,” by Ralph Vince (John Wiley & Sons, 2009)
- Dow Jones offers indexes based on Leverage Space: http://www.djindexes.com/positionsizing

**R Open Source**

- Theoretical paper: http://cran.r-project.org/doc/contrib/Paradis-rdebuts_en.pdf
- R homepage: http://www.r-project.org
- Comprehensive R Archive Network (CRAN) mirrors: http://cran.r-project.org/mirrors.html
- User-contributed documentation: http://www.r-project.org/other-docs.html

**Leverage Space in R Open Source**

- The R-forge page: http://lspm.r-forge.r-project.org/
- LSPM posts: http://blog.fosstrading.com/search/label/LSPM
- Joshua Ulrich: http://www.joshuaulrich.com

Thanks to Ralph Vince, Joshua Ulrich and Brady Preston for their contributions to this article.
