From the August 01, 2009 issue of Futures Magazine

Fifteen years and counting

INTERMARKET ANALYSIS

Intermarket analysis is a powerful and predictive trading methodology. Traditionally, it was done using charts only. In 1995, we developed a method called intermarket divergence that allowed us to test this concept objectively. An example of trading system logic based on this approach is shown in “Coding intermarket.”
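The original “Coding intermarket” sidebar is not reproduced here, but the core idea is simple enough to sketch. The Python fragment below is one illustrative reading of the divergence rules, not the published code; the function names and the exact rule form are assumptions.

```python
# Illustrative intermarket-divergence logic (not the published sidebar code).
def sma(prices, length):
    """Simple moving average of the last `length` prices."""
    return sum(prices[-length:]) / length

def divergence_signal(traded, inter, traded_len, inter_len, positive_corr):
    """Return +1 (buy), -1 (sell) or 0 (no signal).

    A divergence occurs when the intermarket implies one direction for the
    traded market while the traded market itself has not yet moved that way.
    """
    traded_up = traded[-1] > sma(traded, traded_len)
    inter_up = inter[-1] > sma(inter, inter_len)
    if positive_corr:
        if inter_up and not traded_up:
            return 1    # intermarket strong, traded market lagging: buy
        if not inter_up and traded_up:
            return -1   # intermarket weak, traded market lagging: sell
    else:
        if not inter_up and not traded_up:
            return 1    # negative correlation: intermarket weakness is bullish
        if inter_up and traded_up:
            return -1   # intermarket strength is bearish
    return 0
```

For a negatively correlated intermarket such as silver against T-bonds, a call might look like divergence_signal(tbonds, silver, 14, 26, positive_corr=False), matching the moving average lengths discussed below.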

In April 1998, we applied this strategy to Treasury bonds and silver, using a 14-day moving average for T-bonds and a 26-day moving average for silver. Since April 1, 1998, the system is up a little over $15,000. That is not a great number, but it is still profitable. Most of the performance problems have come in the past two years: the system lost almost $27,000 during 2007-08 using silver as an intermarket for T-bonds.

Another market used to predict T-bonds was utility stocks. Originally, we used the New York Stock Exchange (NYSE) utility average (NNA), but it stopped trading a few years ago. The PHLX utility index (UTY) still exists, however. We can update the performance of the positive-correlation intermarket divergence model by keeping the eight-period moving average for T-bonds from the original article and applying the 16-period moving average developed for NNA to UTY. The results are shown in “Utility works.”

One of the most important articles on this topic was published in February 1996. It filled a hole in this field of study regarding relationship de-coupling, such as the bad years between silver and bonds in 2007-08, and it presented several important concepts on how to use correlation to filter out trades. An example would be to switch off the system whenever the 20-day correlation between silver and bonds rises above 0.2, because the relationship is only reliable while the correlation is negative.
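A minimal version of such a filter is easy to express in code. This sketch, which assumes Python 3.10+ for statistics.correlation, simply blocks new trades whenever the trailing correlation rises above the threshold:

```python
# Illustrative correlation filter: allow the silver/T-bond system to trade
# only while the trailing correlation stays at or below a threshold.
from statistics import correlation  # Python 3.10+

def correlation_ok(traded, inter, lookback=20, threshold=0.2):
    """Return True when the system is allowed to take new signals."""
    recent = correlation(traded[-lookback:], inter[-lookback:])
    return recent <= threshold
```

Combined with the divergence logic sketched earlier, a signal would only be acted on when correlation_ok(...) returns True.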

Intermarket analysis remains a ripe area of study and will continue to be core to many strategies that are developed and published.

ADVANCED TECHNOLOGIES

In April 1994, Technology & Trading published its first article using neural networks. It produced a non-lag filter using a simple feed-forward network with, for example, 10 inputs, four hidden nodes and 10 outputs. The output was the signal with the noise filtered out, and there was no lag because the hidden nodes compressed the signal and the output layer reproduced it.
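In modern terms this is a bottleneck autoencoder: the network is trained to reproduce its own input window, and the narrow hidden layer forces it to keep only the dominant structure, discarding the noise. A small NumPy sketch of the idea, assuming the 10-4-10 sizes from the article and an illustrative synthetic series, might look like this:

```python
# Sketch of the bottleneck-filter idea: a 10-4-10 feed-forward network
# trained to reproduce its input window. Sizes follow the article; the
# activation, learning rate and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Noisy series to filter: a slow sine plus noise stands in for prices.
t = np.arange(400)
series = np.sin(t / 25.0) + 0.2 * rng.standard_normal(t.size)

# Build overlapping 10-bar windows as training rows.
win = 10
X = np.array([series[i:i + win] for i in range(series.size - win)])

# 10 -> 4 -> 10 network with a tanh hidden layer and linear output.
hid = 4
W1 = 0.1 * rng.standard_normal((win, hid))
b1 = np.zeros(hid)
W2 = 0.1 * rng.standard_normal((hid, win))
b2 = np.zeros(win)

lr = 0.01
for epoch in range(500):
    H = np.tanh(X @ W1 + b1)    # compress each window to 4 features
    Y = H @ W2 + b2             # reconstruct the window
    err = Y - X
    # Backpropagate the mean squared reconstruction error.
    gW2 = H.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H ** 2)
    gW1 = X.T @ dH / len(X)
    gb1 = dH.mean(axis=0)
    W2 -= lr * gW2
    b2 -= lr * gb2
    W1 -= lr * gW1
    b1 -= lr * gb1

# The filtered value for each bar is the reconstruction of that bar,
# i.e., the last element of its reconstructed window.
filtered = (np.tanh(X @ W1 + b1) @ W2 + b2)[:, -1]
```

Because each bar’s filtered value comes from reconstructing the window that ends on that bar, the filter uses no trailing average and introduces no lag.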

“Nothing Like Net for Intermarket Analysis” (May 1995) was one of the most popular articles on intermarket divergence that we’ve written. It discussed the steps involved in developing a neural-network-based trading system. The article exposed the “how to” of neural networks and made the point that neural networks are not magic; they are at their core just fancy indicators or filters. That doesn’t mean they aren’t useful. A properly used neural network can improve a system’s performance in real time by 20%-60%.

Admittedly, in those days I did think you could improve performance by up to 300%, but I now realize that such an increase is not robust and does not hold up over time. Still, 60% or even 20% makes the effort more than worthwhile.

For many traders, neural networks have fallen off the radar. One reason neural nets may not be the Wall Street darlings they once were is that each training session starts with random connection weights, so you get different results each time you train. This is a tough concept to grasp, and it leaves many without a technical background looking for another way. What many neural network developers do to solve this is average the output across 10 different networks trained on the same inputs. This is a clunky solution, at best.
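In code, that workaround amounts to a committee of networks. Here train_network and predict are hypothetical stand-ins for whatever network library is in use:

```python
# The committee workaround in code: train the same network from ten
# different random starts and average the forecasts. `train_network` and
# `predict` are hypothetical stand-ins, not a real library's API.
import numpy as np

def ensemble_forecast(X_train, y_train, X_new, n_nets=10):
    forecasts = []
    for seed in range(n_nets):
        net = train_network(X_train, y_train, seed=seed)  # fresh random weights
        forecasts.append(predict(net, X_new))
    return np.mean(forecasts, axis=0)  # average across the committee
```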

A better method is to use kernel regression. This is a deterministic approach that will always produce the same results given the same inputs. Performance is about the same: kernel regression can map functions roughly as well as a neural network, and it is better at pattern recognition.
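A Nadaraya-Watson kernel regression (one common form; the Gaussian kernel and bandwidth here are illustrative) makes the determinism point concrete: given the same data, it always produces the same fit, with no random initialization to average away.

```python
# Minimal Nadaraya-Watson kernel regression: a deterministic smoother.
import numpy as np

def kernel_regression(X_train, y_train, X_query, bandwidth=1.0):
    """Gaussian-kernel weighted average of the training targets.

    The same data and bandwidth always give the same answer.
    """
    X_train = np.asarray(X_train, dtype=float).reshape(len(X_train), -1)
    y_train = np.asarray(y_train, dtype=float)
    X_query = np.asarray(X_query, dtype=float).reshape(len(X_query), -1)
    preds = []
    for xq in X_query:
        d2 = np.sum((X_train - xq) ** 2, axis=1)    # squared distances
        w = np.exp(-0.5 * d2 / bandwidth ** 2)      # Gaussian weights
        preds.append(np.dot(w, y_train) / w.sum())  # weighted average
    return np.array(preds)
```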

Genetic algorithms were another area where we made strides. We first covered them in July 1995. Back then, the only general-purpose genetic algorithm software was Evolver, marketed by AXCELIS; there was no trading-specific tool, and the market-timing application of genetic algorithms was confined to academics. That first story attempted to break that mold. It explained the general concepts of genetic algorithms: they are advanced optimizers that can find optimized solutions much faster than brute-force search. This also led into a discussion of evolving trading rules.

The key in genetic algorithms is to encode the rules you are trying to evolve into the genetic string. We used different evolutionary fitness scores to combat “inbreeding,” which causes solutions to be caught in local minima. We also developed various trading-specific measures that ultimately proved more successful for our particular application later in the evolutionary process.
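A stripped-down sketch shows what encoding rules into a genetic string means in practice. The genome layout, the operators and the score_rule fitness function below are illustrative assumptions, not the procedure from the original article:

```python
# Illustrative rule encoding for a genetic algorithm. `score_rule` is a
# hypothetical fitness function (e.g., net profit of the rule on history).
import random

LOOKBACKS = [5, 10, 20, 50]

def random_genome(rng):
    # Genome: [moving-average lookback index, threshold, long/short flag].
    return [rng.randrange(len(LOOKBACKS)), rng.uniform(-2.0, 2.0),
            rng.randrange(2)]

def crossover(a, b, rng):
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(g, rng, rate=0.1):
    if rng.random() < rate:
        g[0] = rng.randrange(len(LOOKBACKS))
    if rng.random() < rate:
        g[1] = rng.uniform(-2.0, 2.0)
    if rng.random() < rate:
        g[2] = rng.randrange(2)
    return g

def tournament(pop, score_rule, rng, k=3):
    # Tournament selection samples widely across the population; breeding
    # only from a small elite produces the "inbreeding" and local minima
    # mentioned above.
    return max(rng.sample(pop, k), key=score_rule)

def evolve(score_rule, pop_size=50, generations=100, seed=0):
    rng = random.Random(seed)
    pop = [random_genome(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop = [mutate(crossover(tournament(pop, score_rule, rng),
                                tournament(pop, score_rule, rng), rng), rng)
               for _ in range(pop_size)]
    return max(pop, key=score_rule)
```

Swapping the fitness function, as the article describes, is a one-line change here: pass a different score_rule and the same machinery evolves toward a different objective.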

Today, genetic optimization is commonplace in many applications. A newer development is genetic programming, and Mike Barna has been a pioneer in this area. Because genetic programming can create rules and combine indicators in ways that are not intuitive, it can produce systems that an expert in the field never would have considered. While domain expertise may still hold an edge in many ways, genetic programming remains at the frontier of this area of research.
