Like most forays into the market, this author’s started unexpectedly. It began with a company called Promised Land Technologies and Braincel, a general-purpose neural network plug-in for Microsoft Excel. The patented technology was successful, and clients ranged from manufacturing firms to the U.S. Trotting Association, but the most common by far were trading related: About 40%-50% of customers bought Braincel to predict the markets.
This venture followed on the heels of working for a jet-engine components company performing failure mode and effects analysis: the study of how components fail, how to prevent those failures and what a single part’s failure could do to the whole.
This experience, and an undergraduate degree in physics, provided a decent head start in a trading career and instilled these lessons:
• Classic signal processing methods do not work without domain expertise.
• Many classic trading rules published in most books do not work. Period.
• It is vital to understand what each element of a trading system does and what happens if it fails.
After spending 1992 through 1994 studying and backtesting the markets, with Excel doing the heavy lifting (a process that inspired a seven-part series in Futures in 1999 on developing trading systems in spreadsheets), efforts turned to integrating neural networks into a complete trading system design that also involved genetic algorithms, classic rule-based systems and domain expertise. The case for this complete approach was bolstered by the failed efforts of many major investment banks of the era, which relied almost exclusively on neural networks to develop trading strategies, treating neural nets like the magic wands they could never be.
Nevertheless, even through the smoke of the billions of dollars investment banks burned trying to compute their way to the Holy Grail, the promise of advanced technologies was clear. Used carefully, these tools could make traders’ lives easier and more profitable. In the last 15 years, more than 120 articles in this space have been dedicated to that goal.
Some of these articles have been more useful than others. Here we review the best of them and see how they have evolved over the years.
SYSTEM TESTING & EVALUATION
If there’s one topic that hasn’t grown stale over the last 15 years, it’s system testing and evaluation. A recurring theme has been how to create a kind of system DNA: how to view a system’s statistical properties as if they were its genetic makeup.
For example, if your system has 40% winning trades over its history and a 3:1 win/loss ratio, and these patterns change drastically for good or ill, that’s a warning sign. Specifically, if a trend-following system wins 10 trades in a row, it might be time to stop trading it for a while, because such a streak does not match the system’s normal pattern. Monitoring and comparing the statistical distribution of live results to historical tests is vital. These concepts were discussed as early as March and April 1995.
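To make the idea concrete, here is a minimal sketch of that kind of monitoring, assuming a hypothetical trade log and the 40%/3:1 profile above; the tolerance band and function names are illustrative, not from the original articles.

```python
# A sketch of "system DNA" monitoring: compare live statistics to the
# system's historical profile and flag drift in either direction.
import numpy as np

def streak_probability(win_rate, streak):
    """Chance that a given run of `streak` trades are all winners."""
    return win_rate ** streak

def dna_alerts(live_trades, hist_win_rate, hist_payoff, tolerance=0.5):
    """Return warnings when live results stop matching the system's DNA."""
    alerts = []
    trades = np.asarray(live_trades, dtype=float)
    wins, losses = trades[trades > 0], trades[trades < 0]
    live_win_rate = len(wins) / len(trades)
    # Too good is as suspicious as too bad -- both mean the system is no
    # longer behaving like its historical self.
    if abs(live_win_rate - hist_win_rate) > tolerance * hist_win_rate:
        alerts.append("win rate %.0f%% vs. historical %.0f%%"
                      % (100 * live_win_rate, 100 * hist_win_rate))
    if len(wins) and len(losses):
        live_payoff = wins.mean() / abs(losses.mean())
        if abs(live_payoff - hist_payoff) > tolerance * hist_payoff:
            alerts.append("payoff %.1f:1 vs. historical %.1f:1"
                          % (live_payoff, hist_payoff))
    return alerts

# Ten straight winners from a 40% win-rate system: 0.4**10, about 0.01%.
print(streak_probability(0.40, 10))
```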
Another lesson we’ve learned is that optimization is an important step. First, we must optimize across a broad range of parameters and average key statistics, such as average trade, net profit and drawdown. These averages must be analyzed with respect to their variability across the parameter range. The standard of requiring net profit to remain positive after subtracting two times its standard deviation remains valid today.
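A minimal sketch of that robustness test, assuming a hypothetical backtest() function that returns net profit for a single parameter value:

```python
# Average net profit across a broad parameter range and require it to
# stay positive after subtracting two standard deviations. backtest()
# is an assumed stand-in for a real system test.
import numpy as np

def robust(parameter_grid, backtest):
    profits = np.array([backtest(p) for p in parameter_grid])
    return profits.mean() - 2.0 * profits.std() > 0.0

# Example: sweep a moving-average length from 10 to 50 bars.
# robust(range(10, 51), my_backtest)
```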
In October 1995, we discussed using statistical analysis to develop better exits. We plotted maximum adverse excursion for a given bar after entry against final trade profit on a scatter chart. We can use this to find the excursion level beyond which a trade rarely recovers, which tells us where a protective stop belongs.
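As an illustration, the sketch below scatters maximum adverse excursion against final profit; the trade records and the $800 stop level are invented for illustration only.

```python
# Scatter maximum adverse excursion (MAE) against final trade profit.
import matplotlib.pyplot as plt

# Each pair: (max adverse excursion in $, final trade profit in $).
trades = [(-300, 850), (-1200, -1400), (-150, 400), (-900, -1100),
          (-450, 600), (-1500, -1800), (-200, 300), (-700, 250)]
mae, profit = zip(*trades)

plt.scatter(mae, profit)
plt.xlabel("Maximum adverse excursion ($)")
plt.ylabel("Final trade profit ($)")
# Trades whose excursion exceeds this level rarely recover, so it is a
# candidate for a protective stop (chosen here by eye).
plt.axvline(-800, linestyle="--")
plt.show()
```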
Intermarket analysis is a powerful trading methodology and a predictive form of analysis. Traditionally, it was done using charts alone. In 1995, we developed a method called intermarket divergence that allowed us to test this concept objectively. A sketch of trading system logic based on this approach is shown below.
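This is a minimal sketch, assuming numpy arrays of daily closes; the function names and exact rule phrasing are illustrative reconstructions, not the published code. The negative-correlation case matches the bond/silver example that follows.

```python
import numpy as np

def sma_last(series, length):
    """Simple moving average of the most recent `length` values."""
    return float(np.mean(series[-length:]))

def divergence_signal(market, intermarket, m_len, i_len,
                      correlation="negative"):
    """Return +1 (buy), -1 (sell) or 0 (no signal) for the latest bar."""
    m_up = market[-1] > sma_last(market, m_len)
    i_up = intermarket[-1] > sma_last(intermarket, i_len)
    if correlation == "negative":
        # A falling, negatively correlated intermarket is bullish, so a
        # traded market also below its average is a buy divergence.
        if not m_up and not i_up:
            return 1
        if m_up and i_up:
            return -1
    else:
        # Positive correlation: a rising intermarket is bullish, so a
        # traded market lagging below its average is a buy divergence.
        if not m_up and i_up:
            return 1
        if m_up and not i_up:
            return -1
    return 0
```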
In April 1998, we used this strategy on Treasury bonds and silver. A 14-day moving average was used for T-bonds, and a 26-day moving average was used for silver. Since April 1, 1998, this system is up a little over $15,000. Although not a great number, it’s still profitable. Most performance problems have come in the past two years. The system lost almost $27,000 during 2007-08 using silver as an intermarket for T-bonds.
Another market used to predict T-bonds was utility stocks. Originally, the New York Stock Exchange (NYSE) utility average (NNA) was used. However, that index stopped trading a few years ago; UTY still exists, though. We can update the performance of the positive-correlation intermarket divergence model by keeping the eight-period moving average for T-bonds from the original article and applying the 16-period moving average developed for NNA to UTY. The results are shown in “Utility works.”
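Using the divergence sketch above, the utility model might be invoked as follows; the eight- and 16-period lengths are those cited, while the price arrays are assumed:

```python
# t_bonds and uty are assumed to be numpy arrays of daily closes.
signal = divergence_signal(t_bonds, uty, m_len=8, i_len=16,
                           correlation="positive")
```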
One of the most important articles on this topic was published in February 1996. It filled a hole in this field of study regarding relationship de-coupling, such as the bad years between silver and bonds in 2007-08. The article presented several important concepts on how to use correlation to filter out trades. One example: switch off the system if the 20-day correlation between silver and bonds rises above 0.2, because the correlation should be negative.
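A minimal sketch of that correlation filter, assuming numpy arrays of daily closes; computing the correlation on log returns is an assumption here, as the original may have used prices:

```python
import numpy as np

def correlation_ok(bonds, silver, window=20, threshold=0.2):
    """Allow trading only while the bond/silver link stays negative enough."""
    b = np.diff(np.log(bonds[-(window + 1):]))   # last `window` log returns
    s = np.diff(np.log(silver[-(window + 1):]))
    corr = np.corrcoef(b, s)[0, 1]
    # The relationship should be negative; a rolling correlation above
    # +0.2 means it has decoupled, so switch the system off.
    return corr <= threshold
```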
Intermarket analysis remains a ripe area of study and will continue to be core to many strategies that are developed and published.
In April 1994, Technology & Trading published its first article using neural networks. It produced a non-lag filter using a simple feed-forward network with, for example, 10 inputs, four hidden nodes and 10 outputs. The output was the signal with the noise filtered out, and with no lag, because the hidden nodes compressed the signal and then reproduced it in the output layer.
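Here is a minimal sketch of that compression idea, using scikit-learn’s MLPRegressor as a stand-in for the original network and a synthetic price series; the 10-4-10 shape matches the example above, and everything else is an assumption.

```python
# Train a 10-4-10 feed-forward network to reproduce 10-bar windows of
# price through a four-node bottleneck, which discards noise without
# the lag of a moving average.
import numpy as np
from sklearn.neural_network import MLPRegressor

prices = np.cumsum(np.random.randn(500)) + 100.0        # synthetic series
windows = np.array([prices[i:i + 10] for i in range(len(prices) - 10)])
mean = windows.mean(axis=1, keepdims=True)
X = windows - mean                                      # center each window

net = MLPRegressor(hidden_layer_sizes=(4,), activation="tanh",
                   max_iter=3000, random_state=0)
net.fit(X, X)                                           # reproduce the input

denoised = net.predict(X) + mean
filtered = denoised[:, -1]    # last value of each window: the no-lag output
```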
“Nothing Like Net for Intermarket Analysis” (May 1995) was one of the most popular articles on intermarket divergence that we’ve written. It discussed the steps involved in developing a neural-network-based trading system. The article exposed the “how to” of neural networks and made the point that neural networks are not magic; they are at their core just fancy indicators or filters. That doesn’t mean they aren’t useful. A properly used neural network can improve a system’s performance in real time by 20%-60%.
Admittedly, in those days I did think you could improve performance by up to 300%, but I now realize that such an increase is not robust nor does it hold up over time. Still, 60% or even 20% makes the effort more than worthwhile.
For many, neural networks have fallen off the radar. One reason neural nets may not be the Wall Street darlings they once were is that when you train them, they start each session with random connection weights, so you get different results each time. This is a tough concept to grasp, and it leaves many without a technical background looking for another way. What many neural network developers do to solve this is average the output across 10 different networks trained on the same inputs. This is a clunky solution, at best.
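For what that workaround looks like in practice, here is a minimal sketch, assuming training data X and y already exist; MLPRegressor again stands in for the original software.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def ensemble_predict(X_train, y_train, X_new, n_nets=10):
    """Average the forecasts of 10 nets that differ only in their seeds."""
    preds = []
    for seed in range(n_nets):
        net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                           random_state=seed)  # different starting weights
        net.fit(X_train, y_train)
        preds.append(net.predict(X_new))
    return np.mean(preds, axis=0)
```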
A better method is kernel regression. This is a deterministic approach that will always produce the same results given the same inputs, with performance about the same as neural networks: Kernel regression can approximate functions roughly as well as neural nets, and it is better at pattern recognition.
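A minimal sketch of one common form, Nadaraya-Watson kernel regression; the Gaussian kernel and the bandwidth are assumptions for illustration.

```python
import numpy as np

def kernel_regression(X_train, y_train, x_new, bandwidth=1.0):
    """Gaussian-kernel weighted average of the training targets."""
    d2 = ((X_train - x_new) ** 2).sum(axis=1)     # squared distances
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))      # nearby points weigh more
    return (w @ y_train) / w.sum()                # same inputs, same output
```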
Genetic algorithm analysis was another area where we made strides. We first covered genetic algorithms in July 1995. Back then, the only general genetic algorithm software was Evolver, marketed by AXCELIS. There was no trading-specific tool, and the market-timing application of genetic algorithms was confined to academics. That first story attempted to break that mold and explained the general concepts of genetic algorithms: They are advanced optimizers that can find near-optimal solutions much faster than brute-force methods. This also led into a discussion of evolving trading rules.
The key in genetic algorithms is to encode the rules you are trying to evolve into the genetic string. We used different evolutionary fitness scores to combat “inbreeding,” which causes solutions to be trapped in local minima. We also developed various trading-specific fitness measures that proved more effective for our particular application later in the evolutionary process.
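A minimal sketch of the mechanics, evolving two moving-average lengths as the genetic string; the fitness function is a toy stand-in for a real backtest, and every detail here is illustrative.

```python
import random

def fitness(genes):                  # assumed stand-in for net profit
    fast, slow = genes
    return -abs(fast - 12) - abs(slow - 40) if fast < slow else -999

def mutate(genes):
    genes = list(genes)
    i = random.randrange(len(genes))
    genes[i] = max(2, genes[i] + random.choice((-2, -1, 1, 2)))
    return tuple(genes)

def crossover(a, b):
    return (a[0], b[1])              # one-point crossover on the string

random.seed(0)
pop = [(random.randint(2, 30), random.randint(20, 100)) for _ in range(30)]
for generation in range(50):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                 # survivors breed the next generation
    pop = elite + [mutate(crossover(random.choice(elite),
                                    random.choice(elite)))
                   for _ in range(20)]
print(max(pop, key=fitness))         # best evolved parameter string
```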
Today, genetic optimization is commonplace in many applications. A new development is genetic programming. Mike Barna has been a pioneer in this area. Because genetic programming can create rules and combine indicators in ways that are not intuitive, you can create systems that an expert in the field never would have considered. While domain expertise may still hold an edge in many ways, genetic programming remains at the frontier of this area of research.
One particularly innovative approach we published was the concept of the adaptive channel breakout, which first appeared in the January 1996 issue. The goal of this approach was to change the channel length for a channel breakout trading strategy over time depending on shifts in key statistics.
One advance over the years used cycle calculations and smoothing. This technology originally relied on maximum entropy spectral analysis to find the cycles, an algorithm similar to the concepts used in John Ehlers’ MESA. In 1999, Ehlers published the Hilbert transform, which calculates the dominant cycle and produces a smooth cycle curve. The Hilbert approach does away with many of the technical parameters involved in the MEM algorithm, which lets non-engineering types use this technology and means more traders can apply the concept.
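As a rough illustration of how a dominant-cycle estimate can drive the channel length, here is a sketch; the autocorrelation cycle finder below is a crude stand-in for MEM or the Hilbert transform, which the published work actually used.

```python
import numpy as np

def dominant_cycle(prices, min_len=6, max_len=50):
    """Pick the lag whose autocorrelation of price changes is highest."""
    x = np.diff(prices)
    best_lag, best_ac = min_len, -2.0
    for lag in range(min_len, min(max_len, len(x) // 2)):
        ac = np.corrcoef(x[:-lag], x[lag:])[0, 1]
        if ac > best_ac:
            best_lag, best_ac = lag, ac
    return best_lag

def adaptive_breakout_signal(highs, lows, closes):
    n = dominant_cycle(closes)        # channel length adapts to the cycle
    if closes[-1] > highs[-n - 1:-1].max():
        return 1                      # breakout above the n-bar channel
    if closes[-1] < lows[-n - 1:-1].min():
        return -1
    return 0
```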
While adaptive components are potentially very powerful, they don’t work cleanly with every trading system. In those cases, we want to use walk-forward analysis. Walk-forward testing optimizes a system’s parameters on a given window (say, 1,000 bars), then runs the system on the next 250 bars, for example. The oldest 250 bars are then dropped, the system is re-optimized on the new 1,000-bar window, and so on.
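A minimal sketch of that loop, assuming hypothetical optimize() and run_system() functions (the first returns the best parameters on a window, the second the profit of running those parameters on unseen bars):

```python
def walk_forward(bars, optimize, run_system,
                 train_len=1000, step=250):
    out_of_sample_profit = 0.0
    start = 0
    while start + train_len + step <= len(bars):
        train = bars[start : start + train_len]           # in-sample window
        test = bars[start + train_len : start + train_len + step]
        params = optimize(train)                          # fit on history
        out_of_sample_profit += run_system(test, params)  # unseen data only
        start += step                                     # slide the window
    return out_of_sample_profit
```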
Walk-forward testing works well, but open issues remain: how you select the best parameter set, and how you deal with two parameter sets that have similar performance but don’t produce similar trades. Judging how robust the parameters are when they are always changing also is a challenge for walk-forward designs.
A new area of research involves optimizing each market in a portfolio independently and analyzing the optimization surface in N+1 dimensions, where N is the number of parameters and the extra dimension is a performance measure. This algorithm also looks at past windows and changes parameters only when a significantly better set has been found, not merely a new set that performs better by an insignificant margin.
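A minimal sketch of that “sticky” update rule; the 10% hurdle is an illustrative assumption, not a published figure, and the sketch assumes positive performance scores.

```python
def update_parameters(incumbent, incumbent_score,
                      challenger, challenger_score, hurdle=0.10):
    """Keep the incumbent parameter set unless a challenger clearly wins."""
    # Switching only on a significant improvement keeps the system from
    # chasing noise on the optimization surface. Assumes scores > 0.
    if challenger_score > incumbent_score * (1.0 + hurdle):
        return challenger, challenger_score
    return incumbent, incumbent_score
```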
For a trading system developer, the best outgrowth of publishing your work is that it always pushes you to expand. Sometimes you force yourself into a new topic, as writers in all fields almost certainly experience, but rarely do you come away from these excursions with regret. For those who heed the call, this can be the most rewarding piece of the analytical puzzle.
Murray A. Ruggiero Jr. is a consultant. His firm, Ruggiero Associates, develops market timing systems. He is the author of "Cybernetic Trading Strategies" (Wiley). E-mail him at email@example.com.