From the February 2014 issue of Futures Magazine

Building a better profit-catching mouse trap

Moving past megahertz

You can only push clock speeds so far. Desktop processors topped the 1 GHz mark in 2000, 2 GHz in 2001, and 3 GHz in 2002. After that, processor speeds basically flat-lined. Power and heat concerns meant that Intel and AMD couldn’t keep ramping up clock speeds without running into design obstacles.

With clock-speed gains largely exhausted, both companies searched for other ways to increase processing power while improving efficiency. The most significant change was putting multiple processing cores on a single CPU. The multi-core era began in 2005 with Intel’s Pentium D 800 dual-core chips. Soon after, AMD followed suit with its Athlon 64 X2 line of processors and dominated the initial round of head-to-head benchmarks.

In January 2006, Intel released its first dual-core mobile chip, the Core Duo, which advanced laptop performance dramatically. Following its success, the Core 2 Duo line arrived for both desktops and laptops and was arguably the most successful launch in the company’s history. AMD is still reeling.

Now, mainstream processors ship with four cores (Intel Core i5) or six (Intel Core i7 Extreme Edition), and some server chips, such as Intel’s Xeon line, contain 10 or more. How those cores are used, however, has a wide impact on performance, and it’s misleading to say that “more cores means a faster computer.” After all, if an operating system and all its applications run on only one core, it doesn’t matter how many cores you have.

Software needs to be written to run in parallel, spreading its work simultaneously across multiple cores. Amdahl’s law describes the limit: the portion of a program that must run serially caps the overall speedup, no matter how many cores are added. In the best case, so-called “embarrassingly parallel” problems may achieve speedups equal to the number of cores, or even greater if the work is split finely enough to fit within each core’s cache and avoid slower system memory. Most applications, however, are nowhere near this optimized.
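To make this concrete, here is a minimal sketch in Python. The language choice, the moving-average sweep and the dummy backtest function are illustrative assumptions, not something from this article or any particular trading package. It computes the speedup ceiling that Amdahl’s law predicts and then runs an embarrassingly parallel parameter sweep, with each parameter tested on its own core via the standard multiprocessing module.

import math
from multiprocessing import Pool, cpu_count

def amdahl_speedup(parallel_fraction, cores):
    # Amdahl's law: the serial portion caps the speedup regardless of core count.
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

def backtest(ma_length):
    # Stand-in for one independent strategy test, e.g. a single moving-average
    # length evaluated over historical prices. No call shares state with any
    # other, which is what makes the sweep embarrassingly parallel.
    total = 0.0
    for i in range(1, 200_000):
        total += math.sin(i / ma_length)  # dummy number-crunching
    return ma_length, total

if __name__ == "__main__":
    cores = cpu_count()
    # Even if 95% of the work parallelizes, the remaining serial 5% limits the gain.
    print(f"Amdahl ceiling, 95% parallel on {cores} cores: "
          f"{amdahl_speedup(0.95, cores):.1f}x")

    # The sweep itself: each moving-average length is tested on its own core.
    with Pool(processes=cores) as pool:
        results = pool.map(backtest, range(10, 201, 10))
    print(f"Tested {len(results)} parameter settings in parallel.")

On an eight-core machine, the formula puts the ceiling for that 95%-parallel workload at roughly 5.9x rather than 8x, which is why the serial portion of trading software matters as much as the core count.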

Developers must invest an incredible amount of time, effort and thought to refactor their software to take advantage of this parallelism. That is what must happen for trading analysis to truly benefit from today’s new technology.

The good news is the process has begun. Major software developers are currently addressing these, and many other, challenges. The next generation of trading tools will have to be built with these technologies in mind from the ground up. Further, as cloud computing gives us access to even more horsepower, trading software firms will have to work even harder to keep trading on the cutting edge. In the next installment, we’ll examine exactly how that is happening and demonstrate some of the exciting analysis advancements it will make possible.

About the Author
Murray A. Ruggiero Jr.

Murray A. Ruggiero Jr. is the author of "Cybernetic Trading Strategies" (Wiley). E-mail him at ruggieroassoc@aol.com.
