Sortino ratio: A better measure of risk

Many traders and investment managers want to measure and compare commodity trading advisors (CTAs) or trading systems. While there are many ways to measure an investment’s performance, risk-adjusted return is one of the most important measures to consider because, given the inherent free leverage of the futures markets, more return can be earned simply by taking more risk. The most popular measure of risk-adjusted performance is the Sharpe ratio. While the Sharpe ratio definitely is the most widely used, it is not without its issues and limitations. Because of the way it is calculated, the Sharpe ratio tends to punish upside volatility in a trading program. We believe the Sortino ratio improves on the Sharpe ratio in a few areas. The purpose of this article is to review the Sortino ratio’s definition and to show how to calculate it properly, because we often have seen it calculated incorrectly.

Sharpe ratio

The Sharpe ratio is a metric that aims to measure the desirability of an investment by dividing the average period return in excess of the risk-free rate by the standard deviation of the return-generating process. Devised in 1966 by Stanford finance professor William F. Sharpe as a measure of performance for mutual funds, it undoubtedly has some value as a measure of investment “quality,” but it also has a few limitations.

The most glaring flaw is that it does not distinguish between upside and downside volatility (see “Good news, bad news,” below). In fact, high outlier returns can have the effect of increasing the value of the denominator (standard deviation) more than the value of the numerator, thereby lowering the value of the ratio. For some positively skewed return distributions such as that of a typical trend-following CTA strategy, the Sharpe ratio can be increased by removing the largest positive returns. This is nonsensical because investors generally welcome large positive returns. 

Additionally, to the extent that the distribution of returns is non-normal, the Sharpe ratio falls short. It is a particularly poor performance metric when comparing positively skewed strategies like trend-following to negatively skewed strategies like option selling (see “Bigger winners vs. more winners,” below). In fact, for positively skewed return distributions, performance actually is achieved with less risk than the Sharpe ratio suggests. Conversely, standard deviation understates risk for negatively skewed return distributions, i.e., the strategy actually is more risky than the Sharpe ratio suggests. Typical long-term, trend-following CTAs, especially those with longer track records, generally have Sharpe ratios in the 0.50 – 0.90 range. However, negatively skewed programs (convergent strategies) like option writing will produce high Sharpe ratios, 3.0 and above, up until a devastating drawdown. The Sharpe ratio often misses the inherent risk of convergent strategies.
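The outlier effect described above is easy to demonstrate numerically. The sketch below uses hypothetical monthly returns (our own illustrative numbers, not from any actual program), a risk-free rate of zero, and the population standard deviation; deleting the single large winning month makes the Sharpe ratio jump:

```python
import statistics

def sharpe(returns, risk_free=0.0):
    """Per-period Sharpe ratio (not annualized), using population standard deviation."""
    excess = [r - risk_free for r in returns]
    return statistics.fmean(excess) / statistics.pstdev(excess)

# Hypothetical monthly returns: steady ~1% months plus one large winner.
returns = [0.010, 0.012, 0.009, 0.011, 0.010, 0.15]

print(round(sharpe(returns), 2))       # with the big winning month included
print(round(sharpe(returns[:-1]), 2))  # big winner removed -- the ratio rises sharply
```

The big winner raises the numerator (average return) a little but inflates the denominator (standard deviation) a lot, so removing it "improves" the Sharpe ratio even though every investor would prefer to keep that month.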

Sortino ratio

In many ways, the Sortino ratio is a better choice, especially when measuring and comparing the performance of managers whose programs exhibit positive skew in their return distributions. The Sortino ratio is a modification of the Sharpe ratio, using downside deviation rather than standard deviation as the measure of risk — i.e., only those returns falling below a user-specified target (“Desired Target Return”) or required rate of return are considered risky (see “Good news, bad news”).

It is interesting to note that even Nobel laureate Harry Markowitz, when he developed Modern Portfolio Theory (MPT) in 1959, recognized that because only downside deviation is relevant to investors, using it to measure risk would be more appropriate than using standard deviation. However, he used variance (the square of standard deviation) in his MPT work because optimizations using downside deviation were computationally impractical at the time.

In the early 1980s, Dr. Frank Sortino undertook research to develop an improved measure of risk-adjusted returns. According to Sortino, it was Brian Rom’s idea at Investment Technologies to call the new measure the Sortino ratio. The first reference to the ratio was in Financial Executive Magazine (August 1980) and the first calculation was published in a series of articles in the Journal of Risk Management (September 1981).

The Sortino ratio, S, is defined as:

S = (R − T) / TDD

where

  • R is the average period return;
  • T is the target or required rate of return for the investment strategy under consideration (originally, T was known as the minimum acceptable return, or MAR; in Sortino’s more recent work it is referred to as the Desired Target Return);
  • TDD is the target downside deviation.

The target downside deviation is defined as the root-mean-square, or RMS, of the deviations of the realized return’s underperformance from the target return where all returns above the target return are treated as underperformance of 0. Mathematically:

Target Downside Deviation = √[ (1/N) Σ (min(0, Xi − T))² ]   (sum over i = 1, …, N)

where

Xi = ith return
N = total number of returns
T = target return
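Written out in code, the target downside deviation is a short function. This is our own sketch, not code from the article; the function name and the default target of zero are our choices:

```python
import math

def target_downside_deviation(returns, target=0.0):
    """RMS of below-target deviations; returns above the target count as underperformance of 0."""
    deviations = [min(0.0, r - target) for r in returns]
    return math.sqrt(sum(d * d for d in deviations) / len(returns))

# Hypothetical returns: only the two losing periods contribute to the sum,
# but N = 4 still divides it (the zeros stay in the average).
print(round(target_downside_deviation([0.02, -0.03, 0.04, -0.01]), 4))
```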

The equation for TDD is very similar to the definition of standard deviation:

Standard Deviation = √[ (1/N) Σ (Xi − u)² ]   (sum over i = 1, …, N)

where

Xi = ith return
N = total number of returns
u = average of all Xi returns. 

The differences are:

  1. In the target downside deviation calculation, the deviations of Xi from the user-selectable target return are measured, whereas in the standard deviation calculation, the deviations of Xi from the average of all Xi are measured.
  2. In the target downside deviation calculation, all Xi above the target return are set to zero, but these zeros still are included in the summation. The standard deviation calculation has no min() function.

Standard deviation is a measure of dispersion of data around its mean, both above and below. Target downside deviation is a measure of dispersion of data below some user-selectable target return with all above target returns treated as underperformance of zero. Big difference.
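Putting the pieces together, here is a hedged sketch of both ratios side by side. The return series is hypothetical, the ratios are per-period (not annualized), and the target and risk-free rates are assumed to be zero:

```python
import math
import statistics

def sortino(returns, target=0.0):
    """Sortino ratio: average return in excess of target, divided by target downside deviation."""
    tdd = math.sqrt(sum(min(0.0, r - target) ** 2 for r in returns) / len(returns))
    return (statistics.fmean(returns) - target) / tdd

def sharpe(returns, risk_free=0.0):
    """Per-period Sharpe ratio using population standard deviation."""
    excess = [r - risk_free for r in returns]
    return statistics.fmean(excess) / statistics.pstdev(excess)

# Hypothetical positively skewed series: many small losses, a few big winners,
# loosely in the spirit of a trend-following return stream.
trend = [-0.01, -0.02, 0.08, -0.01, 0.12, -0.02, -0.01, 0.09]

print(round(sharpe(trend), 2), round(sortino(trend), 2))
```

Because the big winning periods inflate standard deviation but contribute nothing to target downside deviation, the Sortino ratio comes out well above the Sharpe ratio for this kind of positively skewed series, which is exactly the distinction the article draws.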
