Modern portfolio theory defines an investment strategy to meet a given investor’s needs. It uses simple statistics to design a mix of low-correlated assets that minimizes risk and maximizes return. However, things don’t always go as planned.

Different investment sectors become more correlated just at the time we need non-correlation the most. For example, utility, oil and technology stocks usually have a correlation between 0.2 and 0.4, but during market shocks and so-called “black swan” events, these correlations can reach 0.7, 0.8 or even higher.

So, just when you need the diversification most, classic portfolio theory can fail you. One reason is that its application methods are derived from theories that assume an infinite number of trials. In reality, we trade a particular size, method or mix of markets for a finite number of trials and move on. These techniques also assume the data have a normal distribution, which is not true. For example, five-standard-deviation events should happen once every 100 years under a true normal distribution, not every four to eight years as we have seen in practice.

Traders need an alternative. Fortunately, there is one. It is called the Leverage Space Model. Developed by Ralph Vince, who is known for introducing the trading community to optimal *f*, Leverage Space is still evolving and is an important area of research. (For an introduction to this concept, see Vince’s “Using Leverage Space to define risk,” March 2008.) When Leverage Space is fully mature, it could become one of the most important underpinnings of investment theory in the next decade and beyond.

Here, we examine the theoretical basis for Leverage Space and walk through some examples of its use. One word of caution: Advancing our understanding into this frontier requires a strong foundation. A basic knowledge of Vince’s concepts is assumed (see link, above).

**Is this a good game?**

Assume we have a game in which 49 times out of 50 you win $1, and one time out of 50 you lose $50. This is a negative expectation game. If we use classic probability analysis, assuming an infinite number of trials, our calculation is:

(1 × 0.98) + (–50 × 0.02) = –0.02

Classic probability theory says do not play this game, because it has a negative expectation. This is a losing game if we play to infinity. But in reality, whether it is a good game depends on how long you play it, and how you play it while you do. Before we look at different ways to play this game, we need to review some basics of probability theory.
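As a quick check, the expectation formula above can be computed directly. A minimal Python sketch:

```python
# Expected value per $1 bet in the example game:
# win $1 with probability 0.98, lose $50 with probability 0.02.
p_win, p_lose = 0.98, 0.02
win_amount, loss_amount = 1, -50

expectation = win_amount * p_win + loss_amount * p_lose
print(round(expectation, 2))  # -0.02: a negative-expectation game
```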

First, let’s look at how we calculate the probability of “n” consecutive events occurring. With a fair coin toss, we have a 50/50 chance of heads or tails. In two consecutive tosses, we have four possible cases: HH, HT, TH, TT. Only one of the four is HH, so the chance of two consecutive heads is 25%.

Now, consider the possible combinations for three consecutive tosses: HHH, HTH, THH, TTH, HHT, HTT, THT, TTT. This gives us eight combinations, and HHH has a chance of one out of eight, or 12.5%.
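These enumerations are easy to reproduce by brute force. A short Python sketch that lists every outcome and counts the all-heads cases:

```python
from itertools import product

# Enumerate every outcome of n fair coin tosses and count the
# all-heads runs to recover the probabilities by counting cases.
for n in (2, 3):
    outcomes = list(product("HT", repeat=n))
    all_heads = sum(1 for o in outcomes if all(c == "H" for c in o))
    print(n, len(outcomes), all_heads / len(outcomes))
# 2 4 0.25   -> P(HH)  = 1 of 4 cases
# 3 8 0.125  -> P(HHH) = 1 of 8 cases
```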

To find the probability of an event occurring a given number of times in a row, we simply raise the probability of a single occurrence to a power equal to the number of consecutive events. For example, the chance our coin will come up two heads in a row is 0.50 squared, or 0.25 (25%). Three in a row is 0.50 cubed, or 0.125 (12.5%).

Returning to our initial example, our probability of winning is 98%. The chance of winning two in a row is 96.04%. The chance of winning 10 in a row is 81.7%. This means that if we play the game for 10 trials and stop, we have an 81.7% chance of winning $10 and an 18.3% chance of suffering at least one $50 loss.
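These consecutive-win figures follow from the same power rule; a few lines of Python confirm them (0.98 is the per-trial win probability from our example game):

```python
# Probability of n consecutive events: the single-event
# probability raised to the nth power.
def consecutive(p, n):
    return p ** n

print(consecutive(0.50, 2))             # 0.25   -- two heads in a row
print(consecutive(0.50, 3))             # 0.125  -- three heads in a row
print(round(consecutive(0.98, 10), 4))  # 0.8171 -- ten straight wins
```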

Now, let’s redefine our game. We will allow reinvestment and stop after our first loss. We start with $1,000 as our first bet. After 10 consecutive wins, each of which doubles our stake, our initial $1,000 becomes $1,024,000. This has an 81.7% chance of occurring.

How much we win, and then lose when the first loss shuts down our game, depends on when that first loss occurs. But the worst case would be losing $25,600,000: nine consecutive wins build our $1,000 starting stake to $512,000, and a loss on the tenth trial costs 50 times that amount. When we lose, we go bankrupt. However, up to that point, we have a fairly strong chance of living the good life.
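The worst-case arithmetic can be sketched in Python, under the game’s rules as described: the stake doubles on each win, and a loss costs 50 times the stake risked on that trial.

```python
# Reinvestment game: start with $1,000, double the stake on each win,
# stop at the first loss. Worst case within a 10-trial game: nine wins
# followed by a loss on trial 10.
stake = 1_000
for _ in range(9):   # nine consecutive wins
    stake *= 2

print(stake)         # 512000   -- stake after nine wins
print(50 * stake)    # 25600000 -- lost if trial 10 is the loss
print(2 * stake)     # 1024000  -- stake if trial 10 is a win instead
```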

So, is this a good game? From Long-Term Capital Management to the companies behind the mortgage crisis, many financial companies have thought so. The chance of their black swan event may have been less than one in 50, and they certainly bet less than their entire stake with each trade, but the rules of their games were similar. (Well, there was one key difference — many of these firms played with the safety net of a government bailout.)