Over the past 16 years, this magazine has covered topics such as neural networks, data mining, adaptive systems, genetic algorithms, equity curve analysis, portfolio-based strategies and more. Still, most traders gravitate toward simple systems. That hasn't necessarily been a bad thing; however, in recent years the markets have migrated to electronic trade matching, and what once worked no longer does.
Electronic markets are different from traditional pit-based markets, and the simple systems are not so effective anymore. Markets that used to open when traders filed in every morning now "open" in the middle of the night, such as the Intercontinental Exchange’s 1 a.m. start. In addition, the information and price flow between the United States and Europe is now fluid and instantaneous. Today’s markets never sleep.
Systems traders have been hit hardest by this evolution. The long-term backtest is nearly invalid: electronic data and older pit-traded data are no longer comparable with today's prices in significant details. Some data vendors attempt a solution that splices old pit data onto the after-hours electronic markets, but this only produces noisy, thinly traded highs and lows.
It’s necessary to take a closer look at how these changes in markets, data and technology affect today’s traders. First, we’ll look at popular strategies that no longer work. Then, we’ll overview some areas of analysis that may be ripe for further study. We also will discuss the need for smarter systems as the markets evolve, and will revisit some tools that now have gained new importance.
Goodbye, old friend
The opens in today's markets are arbitrary. Currently, the only markets with tradable pit sessions are the S&P 500 and Nasdaq. For them, we can use pit data to trade our classic opening-range breakout in the electronic markets. These markets are the exception, however.
One solution to this issue is to create an artificial day session from electronic 24-hour data. The old pit times will have some significance for a while, but that will fade over time. The stock indexes are different because the pits are still active enough to be relevant, and the pit and electronic markets can’t get too far out of sync because of arbitrage.
An alternative is to trade a breakout of the previous session's close. This works in some markets, but not all, and is not as reliable as the opening-range breakout. Another solution is to create a balance point, or artificial open, from recent data and then trade a breakout of that. The problem is that every market is at a different level of maturity in terms of 24-hour volume, so an adaptive model based on some type of statistical analysis is needed for each market. Also, these 24-hour markets have been liquid for only a few years, limiting our history.
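To make the balance-point idea concrete, here is a minimal Python sketch. The article does not prescribe a formula, so the median-of-recent-closes construction and the fixed band width are assumptions for illustration only; a real implementation would use the adaptive, per-market statistical model described above.

```python
import statistics

def balance_point(closes, lookback=20):
    """Artificial 'open' for a 24-hour market, taken here as the median of
    recent closes. Hypothetical construction; lookback is illustrative."""
    return statistics.median(closes[-lookback:])

def breakout_levels(closes, width=0.01, lookback=20):
    """Breakout bands a fixed fraction above and below the balance point."""
    bp = balance_point(closes, lookback)
    return bp * (1 - width), bp, bp * (1 + width)

# Toy series of daily closes drifting upward
closes = [100 + 0.1 * i for i in range(30)]
lower, bp, upper = breakout_levels(closes)
```

A breakout system would then buy a move above `upper` or sell a move below `lower`, just as it once would have traded around the pit open.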
Of course, electronic markets have their benefits. Slippage is far less than it was in pit-traded markets. Commissions are much cheaper. Limit orders get filled more often if we touch the price. In the old days, a system with $100-$125 in per-trade profit before slippage and commission was un-tradable. Today, it is a potentially viable approach.
We also can convert market orders into limit orders at the prior close. In electronic markets, an order that would enter at the next day's open can instead be placed as a limit order at the most recent close. This solves the problem of junk open prices, and a trade will only rarely be missed because price fails to trade back to that previous close. Numerous bias patterns that we filter to create systems now can be traded with more cases and fewer parameters. Many of these patterns should perform well because they have not been over-exploited; most traders found them unusable until recently.
The range in electronic markets is wider and noisier because markets are open for more hours, and many of those hours have low volume. The result is highs and lows that would not have occurred if markets were not open 24 hours. The trading ladder exacerbates this issue: if you see at 2 a.m. that your lone order sits at an expected price with nothing else on the book, you could be one order away from the market, hoping someone else sends a market order. This is what makes 24-hour markets different.
Cycle analysis, which measures the overall flow of the market and is less dependent on traditional definitions of the open, high, low and close, is an area that should work well with modern markets. One cycle-based strategy that this magazine first covered in 1996 was the adaptive channel breakout. This method set the channel lengths in a channel breakout system as a percentage of the dominant cycle. Using multipliers of the dominant cycles produced a much better optimization space than simply optimizing lengths.
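The adaptive channel breakout idea can be sketched in a few lines of Python. The function name, the rounding rule and the toy data are illustrative assumptions; the key point, as described above, is that channel lengths are multiples of the estimated dominant cycle rather than fixed, optimized lengths.

```python
def adaptive_channel_breakout(highs, lows, dominant_cycle,
                              mult_entry=1.0, mult_exit=0.5):
    """Channel lengths set as multiples of the dominant cycle (sketch).
    mult_entry/mult_exit are the multipliers the article describes
    optimizing in place of raw channel lengths."""
    entry_len = max(2, int(round(mult_entry * dominant_cycle)))
    exit_len = max(2, int(round(mult_exit * dominant_cycle)))
    entry_high = max(highs[-entry_len:])   # buy stop: breakout above this
    entry_low = min(lows[-entry_len:])     # sell stop: breakout below this
    return entry_len, exit_len, entry_high, entry_low

# Toy data: a steadily rising market with an assumed 20-bar dominant cycle
highs = list(range(1, 31))
lows = [h - 1 for h in highs]
entry_len, exit_len, entry_high, entry_low = adaptive_channel_breakout(
    highs, lows, dominant_cycle=20)
```

Because the lengths track the cycle estimate bar by bar, the channels widen and narrow as the market's rhythm changes instead of relying on one curve-fit length.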
The best-known method for extracting cycle information from data is Fourier analysis, but it does not work well on financial data because it requires a long stationary period. Maximum Entropy Spectral Analysis, or MEM, is an autoregressive technique that fits the data to a series of sine waves, extracting the spectrum by minimizing the error. Another method, recently popularized by John Ehlers, is the Hilbert transform, which uses a filter with known phase shifts and lags to estimate a single dominant cycle.
Overall, the best method is MEM, although some newer methods have surfaced that demand further study. These will provide interesting subject material for future articles.
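None of these estimators fits in a few lines, but the basic task, pulling a dominant cycle length out of price data, can be illustrated with a simple FFT periodogram. This is a stand-in for illustration only, not MEM or the Hilbert method, and it inherits the stationarity caveat noted above.

```python
import numpy as np

def dominant_cycle_fft(prices):
    """Estimate the dominant cycle length in bars from the periodogram peak
    (illustrative FFT stand-in, not MEM or the Hilbert method)."""
    x = np.asarray(prices, dtype=float)
    x = x - x.mean()                       # remove the zero-frequency trend
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x))        # frequencies in cycles per bar
    peak = np.argmax(spectrum[1:]) + 1     # skip the DC bin
    return 1.0 / freqs[peak]               # convert frequency to period

# Synthetic series with a known 32-bar cycle
bars = np.arange(256)
prices = 100 + 5 * np.sin(2 * np.pi * bars / 32)
```

On real, nonstationary price data the periodogram peak wanders and smears, which is exactly why the adaptive methods above were developed.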
The concept of walk-forward testing is similar to using out-of-sample data to validate strategies developed on in-sample data. As such, fundamental changes in how markets trade over time can reduce its usefulness. In walk-forward testing, however, the review period is more evolutionary. For example, optimization may take place over the last 10 years and out-of-sample testing on the 11th year. At the end of the 11th year, the 10-year window is shifted forward and the process repeats.
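The windowing mechanics just described can be sketched as an index generator. The window sizes below are illustrative defaults, not the article's exact settings.

```python
def walk_forward_windows(n_bars, train=1000, test=250):
    """Index triples (train_start, train_end, test_end) for a rolling
    walk-forward: optimize on [train_start, train_end), trade the chosen
    parameters on [train_end, test_end), then roll forward one test block."""
    windows = []
    start = 0
    while start + train + test <= n_bars:
        windows.append((start, start + train, start + train + test))
        start += test                      # slide the whole window forward
    return windows
```

Stitching together only the test segments produces an equity curve made entirely of out-of-sample trades, which is the point of the technique.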
Classic walk-forward analysis uses simple, traditional parameters such as net profit and drawdown to select input values. We can demonstrate this with a simple triple moving average crossover system that has been modified so that it can be used in electronic markets. The classic triple moving average crossover enters at the next day’s open, but this system enters on a limit price at the prior close, which solves the problem of the overnight open on ICE, for example, not being liquid. "System code" (below) details the rules of the strategy.
We will test this system on cotton, mini natural gas, the euro, copper, the 30-year Treasury and mini crude oil futures, and merge the pit and electronic market by using the pit data until the volume of electronic is greater. Our source is Pinnacle data.
If we optimize our parameters on all the data we selected, the parameter set 10, 60 and 70 makes about $416,000 from Jan. 4, 1991 to today. For our walk-forward test, which returns only out-of-sample results, we will use 1,000 bars for the test window. This technique, which trades 1,000 fewer bars because the first block is used for the initial optimization, makes about $364,000 with an $83,000 drawdown. The table in "Walk-forward results" (below) covers Jan. 4, 1991 through Aug. 31, 2010. We assume $100 for slippage and commissions.
We can see that all parameter sets did well in the training periods. The out-of-sample periods also did well until the Oct. 21, 2004 to Oct. 18, 2005 window. After that, the out-of-sample periods performed badly, losing $44,000, while even the in-sample periods made only $94,000 over the same span. By comparison, the full-sample parameter set, which made $364,000 after the first 1,000 bars, made $208,834 during this later period.
Clearly, the triple moving average system is lacking in today’s markets. One reason is that the traditional technique uses score-based parameter selection. When human experts select system parameters, they also analyze more sophisticated measures, such as optimization surfaces that rely on two variables at a time vs. a single-variable system metric. A better goal is to seek robust parameter sets that generate similar results from neighboring parameters.
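One way to reward robust parameter neighborhoods rather than isolated spikes is to score each parameter set by the average result of its neighbors on the optimization grid. This is a hedged sketch of that idea; the grid values, radius and scoring rule are illustrative, not a specific published method.

```python
def robust_score(results, params, radius=1):
    """Average net profit over a parameter set's grid neighborhood.
    'results' maps (param1, param2) -> net profit from an optimization run;
    averaging over neighbors rewards flat plateaus over lone spikes."""
    p1, p2 = params
    vals = [results.get((p1 + i, p2 + j))
            for i in range(-radius, radius + 1)
            for j in range(-radius, radius + 1)]
    vals = [v for v in vals if v is not None]   # ignore off-grid cells
    return sum(vals) / len(vals)

# Toy grid: a flat plateau of $100 results plus one isolated $1,000 spike
grid = {(i, j): 100 for i in range(1, 5) for j in range(1, 5)}
grid[(5, 5)] = 1000
```

Here the spike at (5, 5) scores well below its raw $1,000 once its poor neighbors are averaged in, while the plateau keeps its score, which is the behavior a human expert reading an optimization surface looks for.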
Combining multiple systems and trading different baskets of markets is the best way to maximize risk-adjusted returns. The problem isn’t the concept, it’s the software. Today, packages that handle portfolio-level testing are Trading Blox, Mechanic, TradersStudio and TradeStation, when combined with independently developed add-ins.
When it comes to implementing trade management, sizing and portfolio rebalancing, strategies must run one bar at a time. Money management rules can alter the trades themselves. For example, a system that uses trade history to implement money management strategies will return defective results if trades are exited based on profit or loss targets, particularly if it uses different entry and exit rules depending on whether a position is open.
In addition, some system developers optimize and develop systems one market at a time. In doing so, trades are limited and results are more likely to be curve-fitted. Walk-forward analysis increases the number of out-of-sample trades, but some strategies, such as trend-following ones, still don't have enough data. Developing systems on a portfolio solves some of these problems if you use the same parameters for each market. Combining walk-forward analysis with portfolio-level development yields more robust parameter selection and increases the likelihood that the system will continue to work in the future.
This approach isn’t without controversy. Some argue systems should trade one market at a time because each market has its own characteristics. However, unless a system is developed using specialized analysis, such as intermarket analysis, it should be developed on multiple markets. For example, a stock index system should be developed and tested on Dow, Nasdaq and S&P 500 futures so that it works on all three markets with the same parameters.
Genetic algorithms have been part of the trading lexicon for some time. They have two uses: first, to find optimal parameters and limit search space and, second, to evolve trading rules. Today, many products have genetic algorithm technology fully integrated or available as add-ins.
Another branch of genetic algorithms, called genetic programming, allows a more unguided search and also writes the code itself. When evolving rules with traditional genetic algorithms, we create a chromosome whose elements can be combined in many ways, but we are limited to those predefined elements in different combinations and parameters. Thus, evolving rules cannot create novel concepts, while genetic programming can. That power comes at a price: because the search is unguided, it can produce nonsensical concepts that test well but are really the result of a statistical side effect in the data. We need to guard against this when using genetic programming.
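The traditional, fixed-chromosome case can be sketched as a tiny genetic algorithm searching over integer parameter tuples. Everything here is a toy: the selection, crossover and mutation operators are textbook defaults, and the fitness function simply prefers one hypothetical parameter set rather than scoring a real backtest.

```python
import random

def evolve(fitness, bounds, pop_size=20, generations=30, seed=42):
    """Minimal genetic algorithm over integer parameter tuples (sketch):
    truncation selection, one-point crossover, random-reset mutation."""
    rng = random.Random(seed)
    def rand_ind():
        return tuple(rng.randint(lo, hi) for lo, hi in bounds)
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half of the population as parents
        parents = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(bounds))
            child = list(a[:cut] + b[cut:])       # one-point crossover
            if rng.random() < 0.2:                # occasional mutation
                k = rng.randrange(len(bounds))
                child[k] = rng.randint(*bounds[k])
            children.append(tuple(child))
        pop = children
    return max(pop, key=fitness)

# Toy fitness: closeness to a hypothetical "good" set of MA lengths
target = (10, 60, 70)
best = evolve(lambda p: -sum((a - b) ** 2 for a, b in zip(p, target)),
              bounds=[(2, 50), (20, 100), (30, 150)])
```

In real use, `fitness` would be a full backtest score, which is where the danger of evolving statistical side effects comes in.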
The idea of whether a data mining search must be guided is another issue where experts disagree. If you do not guide the search, you can develop new trading logic beyond the realm of your personal experience. However, if you develop logic that you don’t understand, you may be ill equipped to judge the system’s performance for robustness. A guided search combines concepts that are valid and reliable, and allows these known concepts to be organized in new ways.
Neural networks have gone through many births and rebirths over the years but have never lived up to the promises their proponents made in the early and mid-1990s. One reason is that practitioners expected too much: they tried to use neural networks to predict price changes directly, and that doesn't work. Neural networks are not magic. Valid uses for them include predicting indicators, pattern recognition and replacing system inputs.
If we combine neural networks with advanced walk-forward testing, we can make neural networks adapt by retraining and testing. The main problem with neural networks is they are expensive to develop and maybe three to 10 times more labor-intensive than developing a classic system. Now that the markets are getting harder to trade, we might finally see neural networks have a rebirth that allows them to mature into effective trading tools.
Other technologies also will get a new look, such as rule induction, which produces rules directly from the underlying data. The same goes for rough sets and other rule-induction algorithms. Case-based reasoning, which, given a set of inputs, returns a forecast based on similar cases in a database, also will see renewed interest.
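The case-based reasoning idea reduces to a nearest-neighbor lookup, sketched below. The feature vectors, outcomes and distance metric are all illustrative assumptions; a production case base would hold indicator readings and subsequent market behavior.

```python
import math

def cbr_forecast(cases, query, k=3):
    """Case-based reasoning sketch: average the outcomes of the k stored
    cases whose feature vectors lie closest (Euclidean) to the query."""
    ranked = sorted(cases, key=lambda c: math.dist(c[0], query))
    nearest = ranked[:k]
    return sum(outcome for _, outcome in nearest) / k

# Toy case base: (feature vector, observed outcome) pairs
cases = [((1.0, 0.5), 2.0),
         ((1.1, 0.4), 2.2),
         ((5.0, 5.0), -1.0),
         ((0.9, 0.6), 1.8)]
```

Querying near the cluster of similar cases returns their average outcome and ignores the dissimilar one, which is exactly the "forecast from similar cases" behavior described above.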
Now that we have looked at many issues in trading today’s markets, as well as technologies that can help us trade these markets more profitably and reliably, we are better equipped to put many of those tools to use. In the next installment, we will discuss how to implement some of them to build real trading systems.
Murray A. Ruggiero Jr. is the author of "Cybernetic Trading Strategies" (Wiley). E-mail him at firstname.lastname@example.org.
Sub ThreeMACrossover(SLen As Integer, MLen As Integer, LLen As Integer)
    Dim ShortAve As BarArray
    Dim MedAve As BarArray
    Dim LongAve As BarArray
    ShortAve = Average(Close, SLen, 0)
    MedAve = Average(Close, MLen, 0)
    LongAve = Average(Close, LLen, 0)
    ' Enter on a limit at the prior close when the averages align
    If ShortAve > MedAve And MedAve > LongAve Then
        Buy("LE", 1, Close, Limit, Day)
    End If
    If ShortAve < MedAve And MedAve < LongAve Then
        Sell("SE", 1, Close, Limit, Day)
    End If
End Sub