ARIMA-GARCH out of the Lab, into Trading

January 8, 2018 01:37 PM
Advanced Technique

ARIMA-GARCH research takes time and money

The good news about ARIMA-GARCH hybrid models is that they do a good job of creating short-term return forecasts. The models are straightforward to build and require little preprocessing; it's nothing like trying to build neural network models. Many of these models show profitable performance over long time frames, even though they can go through years of flat to slightly profitable performance between periods of outperformance. Others outperform in one regime and perform badly in others, which makes post-processing these models an important area of research. These models can also be used as components of a larger trading system, where they offer a predictive edge. The problem is how computer-intensive developing and testing these models is. A simple model for S&P 500-related markets is already in the public domain, because the published examples of this technology are mostly for the S&P 500.
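
For readers who want to see what such a hybrid looks like in code, here is a minimal Python sketch using the statsmodels and arch packages. The article does not prescribe an implementation, so the ARIMA order, the GARCH(1,1) specification and the Student-t error distribution below are illustrative assumptions, not a tuned model.

```python
# A minimal ARIMA-GARCH hybrid: ARIMA models the conditional mean of
# returns, then GARCH models the conditional variance of the ARIMA
# residuals. Orders and distribution are illustrative, not tuned.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

def hybrid_forecast(prices: pd.Series):
    """Return a one-step-ahead (mean, volatility) forecast of daily returns."""
    # Log returns in percent (GARCH estimation is better behaved at this scale)
    returns = 100 * np.log(prices / prices.shift(1)).dropna()

    # Conditional mean: a small ARIMA fit on returns
    arima_fit = ARIMA(returns, order=(2, 0, 2)).fit()
    mean_fc = arima_fit.forecast(steps=1).iloc[0]

    # Conditional variance: GARCH(1,1) with Student-t errors on the residuals
    garch_fit = arch_model(arima_fit.resid, mean="Zero", vol="GARCH",
                           p=1, q=1, dist="t").fit(disp="off")
    vol_fc = np.sqrt(garch_fit.forecast(horizon=1).variance.iloc[-1, 0])

    return mean_fc, vol_fc
```

A trading rule built on this might go long when the mean forecast is positive and stand aside (or size down) when the forecast volatility is high; that post-processing step is exactly the research area mentioned above.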

The problem is the size of the search space for these ARIMA-GARCH hybrid models, and each test takes time. For example, a single test takes 30 to 60 CPU-minutes to run over 10 years of daily data. We can run tests in parallel, so on an eight-core machine we can run six at a time. If we have 1,000 tests and each one takes an average of 45 minutes, that comes to 5.21 days of run time on six cores. If we are trading a few markets end of day, that's not bad, but if we want to test 500 stocks it becomes a big problem. One of the big areas of research is finding good search ranges on large portfolios of stocks, or finding different ARIMA-GARCH hybrids that work in different regimes and switching between them. This need for computing power is why institutional traders have an edge: you either need someone who is already doing this research to help you, or a big budget to do it yourself.
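
As a sanity check on that 5.21-day figure, the arithmetic is simple enough to script; this sketch is purely illustrative:

```python
# Wall-clock estimate for the 1,000-test example in the text.
tests = 1_000
minutes_per_test = 45      # average of the 30-to-60-minute range
usable_cores = 6           # six parallel tests on an eight-core machine

cpu_hours = tests * minutes_per_test / 60        # 750 CPU-hours of work
wall_clock_days = cpu_hours / usable_cores / 24
print(f"{cpu_hours:.0f} CPU-hours -> {wall_clock_days:.2f} days")  # ~5.21 days
```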

One of the most important tools for this type of analysis is cloud-based cluster computing on Amazon Web Services (AWS). It would have been easier to build this on Azure, but the cost of running experiments like this there would have been five to 10 times as much.

If we take that 1,000-test example, which needs 5.21 days on six cores, and instead create a 60-virtual-CPU cluster, we can run it in about 13 hours. The cost adds up over time: approximately $200 to $250 on AWS at 25¢ per CPU-hour. There are advanced AWS tricks that can cut this to about $70, but they require advanced knowledge of AWS.
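
The cluster numbers work out the same way. Here is the same back-of-the-envelope calculation for the 60-vCPU case using the article's 25¢-per-CPU-hour figure; actual AWS pricing varies by instance type and region:

```python
# Cluster wall-clock time and cost for the same 750 CPU-hours of work.
cpu_hours = 750      # from the 1,000-test example above
vcpus = 60
rate = 0.25          # dollars per CPU-hour, per the article

wall_clock_hours = cpu_hours / vcpus   # 12.5 hours, i.e. "about 13 hours"
compute_cost = cpu_hours * rate        # $187.50 raw, so $200-$250 with overhead
print(f"~{wall_clock_hours:.1f} h wall clock, ~${compute_cost:.0f} compute cost")
```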

Now consider a realistic search. Test window sizes from 100 to 1,000 in steps of 100. Test five different distributions and three different types of GARCH. We want to test with no regressor and also with four different intermarket regressors. Finally, let's vary the GARCH order (p, q), with p from 0 to 2 and q from 1 to 2. That multiplies out to 10 × 5 × 3 × 5 × 6 = 4,500 tests per market, which is a reasonable search of the space when we know nothing about it in advance. That is about 4,000 CPU-hours per market, or about 5.5 months on a single CPU and about one month on our six-core example. It's also about $1,000 on AWS. The problem is scale: running the S&P 500 would cost $500,000 for all 500 stocks, and a 7,500-stock universe would cost $7.5 million at the standard rate, still more than $2 million even with the cost-cutting tricks. We save all of the results on each bar so we can try different prediction horizons and also test different probability levels without rerunning.
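
The grid is easy to enumerate to confirm the 4,500-test count. The specific distribution and GARCH-variant names below are placeholders, since the article gives only the counts (five distributions, three GARCH types):

```python
# Enumerating the search grid described above to confirm the test count.
from itertools import product

windows = range(100, 1001, 100)                          # 10 window sizes
distributions = ["norm", "std", "sstd", "ged", "sged"]   # 5 distributions (placeholder names)
garch_types = ["GARCH", "EGARCH", "GJR-GARCH"]           # 3 GARCH variants (placeholder names)
regressors = [None, "reg1", "reg2", "reg3", "reg4"]      # no regressor + 4 intermarket regressors
orders = [(p, q) for p in range(0, 3) for q in (1, 2)]   # 6 (p, q) order pairs

grid = list(product(windows, distributions, garch_types, regressors, orders))
print(len(grid))  # 4,500 tests per market
```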

About the Author

Murray A. Ruggiero Jr. is the author of "Cybernetic Trading Strategies" (Wiley). E-mail him at ruggieroassoc@aol.com.