## Predictive market modeling in R


Predicting returns one or more bars into the future is a dream of both traders and quants. Many of the methods for doing this approach rocket science, and many of them are ensemble methods. One such method is a hybrid that combines the autoregressive integrated moving average (ARIMA) model with the generalized autoregressive conditional heteroscedasticity (GARCH) model.

Adding machine learning and advanced modeling methods to classic backtesting and trading platforms will be a hot technology over the next five years, as will combining the libraries of the open-source languages R and Python with backtesting programs, the most cost-efficient way to do this. R has many machine learning and statistical libraries, while Python is better at text processing (NLP) for creating sentiment indicators from news sources or Twitter.

We will be using R to develop our hybrid ARIMA/GARCH model after we have explained the theory. First, we need to understand some basics of stock market data. It normally has an upward bias that must be removed before most prediction and modeling methods can be used. Next, there are often seasonal effects. Finally, there is heteroscedasticity: small percentage changes can become large absolute changes.

ARIMA models are time series regression models. Regression models have an X variable (the independent variable) and a Y variable (the dependent variable); we regress Y on X and create a linear model. ARIMA models are autoregressive, meaning that we regress today's prices on past prices to predict the future. The assumption when we do this is that the errors are white noise: random, with homoscedastic volatility. In price terms, relative volatility is constant, but absolute changes get larger as prices increase.

On time series data you can regress today's value on yesterday's value, which is called autoregression.

**Time Series Regression Models**

*• White noise: independent, normally distributed, with common variance*

*• White noise is the basic building block of a time series*

**AutoRegression:** In autoregression, what happened today is the dependent variable and what happened yesterday is the independent variable. Let's see how the error is calculated:
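The original equation is not reproduced here, but in the standard textbook AR(1) form, xt = c + φ·xt-1 + wt, the error wt is simply what is left over after yesterday's value is accounted for. A minimal sketch in R, using simulated rather than market data:

```r
# Sketch: fit an AR(1) "today on yesterday" regression to a simulated
# series and recover the errors (residuals).
set.seed(1)
x <- arima.sim(model = list(ar = 0.6), n = 500)  # simulated AR(1), phi = 0.6

fit <- arima(x, order = c(1, 0, 0))  # one autoregressive term
phi <- coef(fit)["ar1"]

# The error is what remains after yesterday explains today:
# w_t = x_t - phi * x_(t-1) (up to the fitted intercept)
w <- residuals(fit)
```

The fitted coefficient `phi` should land close to the true value of 0.6, and `w` should look like white noise.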

If we assume the error is white noise, one way to improve the estimation of the noise is to use a moving average of lagged noise terms: today's noise plus some fraction of each lagged noise term. If we assume white noise, this moving average creates an estimate of future noise. Putting the autoregressive and moving-average pieces together creates the autoregressive moving-average (ARMA) model. ARMA and ARIMA are similar, except that ARIMA can do differencing as part of the calculation. So ARMA is calculated as follows:
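The equation itself is not shown above; in the usual textbook notation, an ARMA(p, q) model combining the two pieces is:

*xt = c + φ1xt-1 + … + φpxt-p + wt + θ1wt-1 + … + θqwt-q*

where wt is white noise, the φ terms are the autoregressive coefficients, and the θ terms are the moving-average coefficients.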

You can think of this as an ARMA model with correlated errors.

In order to use an ARIMA model your data must be what is called "stationary." What does that mean? First, the mean is constant, normally 0; this is often achieved by taking a first difference. Second, the correlation of the current series value with lagged values is constant over time, so the series shows a repeating correlation structure. If the mean is constant, simple averaging lets us estimate correlations around that mean. And if the correlation structure is constant across lags, we can estimate the lag-1 correlation from all pairs of points one step apart: (X1, X2), (X2, X3), (X3, X4), and so on. Similarly, pairs two steps apart, (X1, X3), (X2, X4), etc., estimate the lag-2 correlation.
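A minimal sketch of this "all pairs k steps apart" idea in R, using a simulated AR(1) series rather than market data:

```r
# Sketch: estimating lag-1 and lag-2 correlations "by hand" from pairs
# of points one and two steps apart (simulated AR(1) data).
set.seed(7)
x <- as.numeric(arima.sim(model = list(ar = 0.5), n = 2000))
n <- length(x)

lag1 <- cor(x[1:(n - 1)], x[2:n])  # pairs (X1,X2), (X2,X3), ...
lag2 <- cor(x[1:(n - 2)], x[3:n])  # pairs (X1,X3), (X2,X4), ...

# acf() computes the same quantities with a slightly different normalization
a <- acf(x, lag.max = 2, plot = FALSE)$acf
```

For an AR(1) with coefficient 0.5, the lag-1 correlation should come out near 0.5 and the lag-2 correlation near 0.25.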

If we look at the S&P 500, for example, we have a long-term uptrend and heteroscedastic volatility. In cases like this we want to take the log of the values before we take a difference to make the data stationary. If a series is a random walk without a trend, simple differencing will work; it will also work for a trend-stationary series.
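A minimal sketch of the log-then-difference transform in R. The prices here are simulated (a random walk with drift), not real S&P 500 data:

```r
# Sketch: turning a trending, heteroscedastic price series into
# (approximately) stationary log returns.
set.seed(42)
log_prices <- cumsum(rnorm(1000, mean = 3e-4, sd = 0.01))  # random walk + drift
prices <- 100 * exp(log_prices)

log_returns <- diff(log(prices))  # log first, then first difference

mean(log_returns)  # roughly constant mean near zero
```

The raw `prices` trend upward with growing absolute swings, while `log_returns` have a roughly constant mean and scale.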

**Why are ARIMA models valid?**

Herman Wold's decomposition shows that any stationary time series can be written as a linear combination of white-noise terms, and any ARMA model is such a linear combination. This means ARIMA models are suitable for modeling stationary time series.

ARIMA models are expressed by the number of terms of each type, (p, d, q): p is the number of autoregressive terms; d is the number of differencing steps (often 0, because we can difference the series ourselves before passing it to the algorithm, making sure it is stationary first); and q is the number of moving-average terms.
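A minimal sketch of fitting a (p, d, q) specification with the base `arima()` function, on simulated data:

```r
# Sketch: fitting an ARIMA(p, d, q) with stats::arima (simulated data).
set.seed(3)
x <- arima.sim(model = list(ar = 0.5, ma = 0.3), n = 1000)

fit <- arima(x, order = c(1, 0, 1))  # p = 1 AR term, d = 0, q = 1 MA term
coef(fit)  # estimated ar1, ma1, and intercept
```

Since the series was simulated as an ARMA(1,1) with coefficients 0.5 and 0.3, the fitted `ar1` and `ma1` should land near those values.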

**ACF and PACF plot analysis**

The sample autocorrelation function (ACF) for a series gives the correlations between the series xt and lagged values of the series for lags of 1, 2, 3, and so on. The lagged values can be written as xt-1, xt-2, xt-3, and so on. The ACF gives correlations between xt and xt-1, xt and xt-2, and so on.

The ACF can be used to identify the possible structure of ARIMA time series models.

The partial autocorrelation function (PACF) is the autocorrelation of a signal with itself at different points in time, as a function of lag, with the linear dependence on shorter lags removed. The table below shows how these patterns relate to the ARIMA model types.

For a pure autoregression (AR) the ACF tails off and the PACF cuts off at lag p; a pure moving average (MA) is the opposite; for an ARMA model, both tail off:

| Model | ACF | PACF |
| --- | --- | --- |
| AR(p) | Tails off | Cuts off after lag p |
| MA(q) | Cuts off after lag q | Tails off |
| ARMA(p, q) | Tails off | Tails off |
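These identification rules can be checked numerically with R's `pacf()` function. For a simulated pure AR(2), the PACF should cut off after lag 2 (the data here are simulated for illustration):

```r
# Sketch: for a pure AR(2), the PACF should cut off after lag 2.
set.seed(11)
x <- arima.sim(model = list(ar = c(0.5, 0.3)), n = 5000)

p <- pacf(x, lag.max = 10, plot = FALSE)$acf  # p[k] is the lag-k partial autocorrelation
# p[1] and p[2] are large; p[3] onward should sit inside the ~2/sqrt(n) noise band
```

In an interactive session, `pacf(x)` draws the plot with the significance band so the cutoff is visible at a glance.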

An ARMA model contains parts for an AR and an MA model: ARMA(p, q). An ARIMA model is extended to include an extra part for differencing. If a dataset exhibits long-term variation (i.e., trend-cycle components), the ACF graph will decay slowly, almost linearly, rather than dropping quickly to zero. In this case it is useful to difference the data, which simply takes each data point and calculates the change from the previous data point. The ARIMA model is ARIMA(p, d, q), where p is the order of the AR part, d is the number of times differencing has been carried out, and q is the order of the MA part. The extension allows the model to deal with long-term variation better and so improves the usefulness of this modeling technique.

**Finding the optimal parameters for ARIMA**

The Akaike information criterion (AIC) measures the relative quality of statistical models on a given data set. It is a way of comparing models for a given set of data, estimating the quality of each model relative to each of the others. Hence, AIC provides a means for model selection.

AIC is based on information theory: it estimates the information lost when a given model is used to represent the process that generates the data. In doing so, it deals with the trade-off between the goodness of fit of the model and model complexity.

AIC does not give us an absolute test of a model's quality, for example testing against a null hypothesis. If all the candidate models fit poorly, AIC will give no warning of that.

Akaike’s Information Criterion is usually calculated with software. The basic formula is defined as:

*AIC = -2(log-likelihood) + 2K*

Where:

*• K is the number of model parameters (the number of variables in the model plus the intercept).*

*• Log-likelihood is a measure of model fit. The higher the number, the better the fit. This is usually obtained from statistical software output.*

For small sample sizes (roughly n/K < 40), use the second-order AIC:

*AICc = -2(log-likelihood) + 2K + 2K(K+1)/(n-K-1)*

Where:

*• n = sample size,*

*• K = number of model parameters,*

*• Log-likelihood is a measure of model fit.*

The log-likelihood is produced by many of the analysis functions in R, such as arima(), the GARCH fitting functions in the rugarch library, and other regression or autoregressive modeling methods; logLik() extracts it from a fitted model. How it is calculated is beyond the scope of this article.

We calculate the order of the best ARIMA model by finding the lowest AIC; this takes into account both the fit and complexity of the model.
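A minimal sketch of this search in R: fit a small grid of ARMA orders to a simulated series and pick the order with the lowest AIC (the grid size and data are illustrative):

```r
# Sketch: choosing the ARMA order by lowest AIC (simulated data).
set.seed(5)
x <- arima.sim(model = list(ar = 0.6, ma = 0.4), n = 1000)

orders <- expand.grid(p = 0:2, q = 0:2)
aics <- apply(orders, 1, function(o) {
  # Some orders may fail to converge; treat those as infinitely bad
  tryCatch(AIC(arima(x, order = c(o["p"], 0, o["q"]))),
           error = function(e) Inf)
})

best <- orders[which.min(aics), ]
best  # the (p, q) pair with the lowest AIC
```

The `forecast` package's `auto.arima()` automates a similar (smarter) search, including the choice of d.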

**Mean reversion modeling of price and volatility**

The ARMA model aims to correct for autocorrelation, which is common in financial time series. However, the original ARMA mean model assumes that variances are constant over time, while in fact it has been well documented that the variances of financial time series (e.g., stock returns) are conditional over time. The current level of variance is conditional on the level of previous variances; that is, the time series of variance is itself autocorrelated. This raised questions about the validity of the original ARMA model, which ignores the existence of conditional variance. The autoregressive conditional heteroscedasticity (ARCH) model of Engle (1982) has been seen as a revolution in modeling and forecasting volatility. It was further generalized by Bollerslev (1986) as the GARCH model. GARCH-type models assume that volatility changes over time in an autoregressive manner.

GARCH stands for generalized autoregressive conditional heteroscedasticity. Let's discuss the simple GARCH(1,1) model.

We can model the variance as follows (equation 4):

*σ²t = γVL + αu²t-1 + βσ²t-1*

This equation is simply: gamma times the long-term variance, plus alpha times the lagged squared return, plus beta times the lagged variance; this equals the current variance.

Now, if we rearrange the equation, we can use it to predict volatility: with a little rearranging, equation 4 gives us a forecast of the next period's variance (equation 5).
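A direct sketch of this recursion in R, with illustrative (not fitted) parameter values, where `gamma_`, `alpha`, and `beta` are the weights from equation 4:

```r
# Sketch: GARCH(1,1) variance recursion with illustrative parameters.
gamma_ <- 0.05   # weight on the long-run variance
V_L    <- 1e-4   # long-run variance level
alpha  <- 0.10   # weight on the lagged squared return
beta   <- 0.85   # weight on the lagged variance

garch_step <- function(u_prev, var_prev) {
  # current variance = gamma * long-run variance
  #                  + alpha * lagged squared return
  #                  + beta  * lagged variance
  gamma_ * V_L + alpha * u_prev^2 + beta * var_prev
}

# One-step-ahead variance forecast given yesterday's return and variance
var_next <- garch_step(u_prev = 0.02, var_prev = 2e-4)
```

Because the weights sum to 1, repeatedly applying the recursion with the expected squared return pulls the variance back toward the long-run level V_L, which is the mean-reverting behavior described above.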

**ARMA-GARCH model**

The GARCH model can predict not only volatility but also returns. We do that by going back to the ARIMA equation and making some substitutions.

Let's assume our variance model is the standard GARCH(1,1) (see equation 4). Our ARIMA-GARCH hybrid model will calculate the forecasted returns by using the ARIMA equation and replacing the noise component with the variance from the GARCH model. This is oversimplified, but it does allow you to understand how the predicted returns are calculated.

The ARMA model and the ARMA-GARCH model can both be used to forecast markets, and out-of-sample forecasting performance is evaluated to compare the forecast ability of the two. From a statistical point of view, the ARMA-GARCH model outperforms on all three commonly used statistical measures. Traditional engineering-type models aim to minimize statistical errors; however, the model with the minimum forecasting error in statistical terms does not necessarily guarantee the maximum trading profit, which is often deemed the ultimate objective of a financial application. The best way to evaluate alternative financial models is therefore to evaluate their trading performance by means of trading simulation.

We found that both the ARMA and ARMA-GARCH models were able to forecast the future movements of the market, yielding significant risk-adjusted returns compared to the overall market during the out-of-sample period. In addition, although the ARMA-GARCH model is better than the ARMA model theoretically and statistically, the latter outperformed the former with significantly higher trading measures.

**Developing our Simple Return Prediction Model**

Our model will use the ARIMA-GARCH approach, which takes a mean forecast produced by an ARMA/ARIMA process and combines it with a GARCH variance process. There are several libraries in R that do this, but only one is strong enough to be used as a research tool: the rugarch library written by Alexios Ghalanos. He has taken this analysis from interesting academic research to a tool that can be used for risk and trade analysis as well as forecasting future returns. His research in GARCH has even enabled analysis of how a given model will respond to shocks caused by news events. The library supports most of the newer GARCH models and submodels discussed in the research. It even supports advanced topics such as external regressors for both the variance and the mean model; simpler garch implementations support them only in the variance model.
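A minimal sketch of an ARMA(1,1) mean model combined with a GARCH(1,1) variance model in rugarch. This assumes the rugarch package is installed, and the return series is simulated for illustration, not market data:

```r
# Sketch: ARMA(1,1) + GARCH(1,1) hybrid with rugarch (simulated returns).
library(rugarch)

set.seed(9)
returns <- rnorm(1000, sd = 0.01)  # placeholder return series

spec <- ugarchspec(
  variance.model     = list(model = "sGARCH", garchOrder = c(1, 1)),
  mean.model         = list(armaOrder = c(1, 1), include.mean = TRUE),
  distribution.model = "norm"
)

fit <- ugarchfit(spec = spec, data = returns)
fc  <- ugarchforecast(fit, n.ahead = 1)

fitted(fc)  # one-step-ahead mean (return) forecast
sigma(fc)   # one-step-ahead volatility forecast
```

In a real application, `returns` would be the log returns of the instrument, and the spec (submodel, distribution, external regressors) would be chosen from rugarch's many options.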

Another very useful library is quantmod. The quantmod package for R is designed to assist the quantitative trader in the development, testing, and deployment of statistically based trading models.

"Quantmod is an R package that provides a framework for quantitative financial modeling and trading. It provides a rapid prototyping environment that makes modeling easier by removing the repetitive workflow issues surrounding data management and visualization," say authors Jeffrey A. Ryan and Joshua M. Ulrich. The core of this library is built on top of the xts package, also written by Ryan and Ulrich. The xts package provides several tools for working with time-indexed data, including time-based subsetting, aggregation, merging, and alignment.
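A minimal sketch of xts's time-based subsetting, assuming the xts package is installed. The dates and prices are made up for illustration (quantmod's `getSymbols()` would pull real data, but that requires an internet connection):

```r
# Sketch: time-based subsetting with xts (made-up daily prices).
library(xts)

set.seed(2)
dates  <- seq(as.Date("2020-01-01"), by = "day", length.out = 10)
prices <- xts(100 + cumsum(rnorm(10)), order.by = dates)

jan_first_week <- prices["2020-01-01/2020-01-07"]  # ISO-8601 range subsetting
nrow(jan_first_week)  # 7 daily observations
```

The same `"from/to"` string syntax works on any xts object, including the OHLC series that quantmod's `getSymbols()` returns.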