It’s a common question, and understandably so: “Neural networks and artificial intelligence have been hot before — in the early 1990s, then around 2000, and still now — and each time they have failed to reach their lofty promises, fading back into obscurity. Why is this time different?”
You shouldn’t be surprised that trading isn’t the only field to encounter this question (see “Hedging the robot apocalypse,” Modern Trader, August 2015). Another, which this author happens to know well, is meteorology. Today, meteorologists use advanced computer models, running on supercomputers and fed massive amounts of data, to analyze weather patterns. While weather forecasting models are far better understood than trading systems today, both have followed similar paths of development.
That said, forecasting diverged, successfully, from this shared development path in the late 1980s, while trading systems stagnated. Comparatively speaking, traders still use very simple, one-faceted models. This is where weather models were years ago.
Until the end of the 19th century, weather prediction was subjective and based on observable correlated patterns, with little understanding of the “why” behind the relationships. These patterns included cloud formations, variations in sky color, the visible condition of the moon, wind direction, temperature, humidity and barometric pressure.
In the early 20th century, meteorologists began to explore causal links in a scientific fashion. In 1901, Cleveland Abbe, founder of the U.S. Weather Bureau, proposed that the atmosphere might be governed by thermodynamics and hydrodynamics. In 1904, Vilhelm Bjerknes derived a two-step procedure for model-based forecasting, including the notion that many physical equations, such as the ideal gas law, could be used to estimate the state of the atmosphere through numerical methods.
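To make the idea concrete, here is a minimal sketch of the kind of physical relation Bjerknes had in mind: the ideal gas law, p = ρRT, lets a forecaster compute air density from two observable quantities, pressure and temperature. The function name and constants below are illustrative, not taken from any historical model.

```python
# Density of dry air from the ideal gas law, p = rho * R * T.
R_DRY_AIR = 287.05  # specific gas constant for dry air, J/(kg*K)

def air_density(pressure_pa: float, temp_k: float) -> float:
    """Return air density (kg/m^3) given pressure (Pa) and temperature (K)."""
    return pressure_pa / (R_DRY_AIR * temp_k)

# Standard sea-level conditions: 101,325 Pa and 15 C (288.15 K)
print(round(air_density(101_325, 288.15), 3))  # → 1.225
```

Chaining many such relations together over a grid of observations is, in essence, Bjerknes’ numerical method.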
In the early 1920s, the first attempts at modeling the weather failed terribly, missing surface pressure by two orders of magnitude. The first successful numerical prediction did not arrive until the 1950s, using a digital computer. A team of American and Norwegian researchers computed the geopotential height of the atmosphere’s 500-millibar pressure surface. This simplification greatly reduced demands on computer time and memory. The first calculations took nearly 24 hours to produce and were heralded as an incredible scientific advance.
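What “numerical prediction” means in practice can be illustrated with a toy example. The sketch below is not the actual 1950s model; it is a one-dimensional advection problem, in which a disturbance is carried along by a constant wind and stepped forward in time with a first-order upwind finite-difference scheme. All numbers are illustrative.

```python
import numpy as np

c = 10.0          # wind speed, m/s
dx = 100_000.0    # grid spacing, m (100 km)
dt = 600.0        # time step, s (10 minutes)

x = np.arange(0, 4_000_000.0, dx)                   # 4,000 km domain
u = np.exp(-((x - 1_000_000.0) / 200_000.0) ** 2)   # Gaussian disturbance

for _ in range(100):  # integrate forward 100 steps (~16.7 hours)
    u = u - c * dt / dx * (u - np.roll(u, 1))       # upwind difference

# The disturbance should have drifted downstream by roughly
# c * t = 10 m/s * 60,000 s = 600 km.
print(x[np.argmax(u)])
```

Even this toy problem shows why early forecasters simplified so aggressively: every added variable multiplies the grid points and time steps a computer must churn through.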
During the next 20 years, meteorologists worked on improving the model and simplifying calculations. In 1959, the first successful primitive-equation model was developed. In 1966, West Germany and the United States began producing operational forecasts. The United Kingdom was next in 1972, then Australia in 1977.
Building on this foundation of scientific understanding, more variables were added: solar radiation, moisture, latent heat and feedback from rain on convection (see “More than hot,” below). Efforts to incorporate additional variables, such as sea surface temperature, were hindered by a lack of processing power. Not until the early 1970s did all of the major world powers have a reasonable weather model, and even in the 1980s the models lacked ocean variables.