Too good to be true?
Many traditional technicians cringe when they hear of such multi-indicator, optimized approaches; curve-fitting concerns arise immediately. Admittedly, a major drawback to this approach is that each additional variable requires more data to find good fits. Any student of statistical theory knows that you lose a degree of freedom with each variable you add.
This calls into question the level of confidence you can place in the results. If you are modeling a time period that contains five different market cycles, somewhere there exists a set of five variables that, fit to that period, produces near-perfect performance. This is the basis for the KISS (Keep It Simple, Stupid) approach to modeling. In theory, you would much rather have a one-variable model with an average 10% annual return than a five-variable model returning 12%. The odds that the former behaves in the future as it did in the past are much higher than for the latter.
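The curve-fitting danger can be made concrete with a minimal sketch. Below, the "returns" are pure random noise, so any rule that scores well on them is fitting noise by definition; yet the more candidate rules (free choices) we search over, the better the best in-sample result looks. All names and numbers here are hypothetical, chosen only to illustrate the effect.

```python
import random

random.seed(42)
# Hypothetical daily "returns" that are pure noise: there is no real
# edge to find, so any in-sample edge is curve-fitting, not skill.
returns = [random.gauss(0, 0.01) for _ in range(1000)]
# 50 random long/short signal series, standing in for 50 parameter sets.
signals = [[random.choice([1, -1]) for _ in returns] for _ in range(50)]

def best_in_sample(num_rules):
    # Search the first num_rules random signals and keep the one whose
    # in-sample "performance" (sum of signal * return) is highest.
    return max(sum(s * r for s, r in zip(sig, returns))
               for sig in signals[:num_rules])

# Widening the search can only raise the best in-sample score --
# on data that contains no exploitable pattern at all.
for k in (1, 5, 25, 50):
    print(k, round(best_in_sample(k), 4))
```

Because the maximum over a larger set of candidates can never be smaller, the apparent performance grows with the number of variables searched even though nothing real is being found.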
One way to deal with curve-fitting concerns is to test the system on an out-of-sample data set: optimize the indicator on one set of data, say 1970-2000, then test it on another, say 2000-2010. Even this can be inherently faulty. The market averaged 9% a year over the 1970-2000 period but was flat during the first decade of the 21st century, giving bear market signals a better opportunity for success over the out-of-sample period.
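The in-sample/out-of-sample split can be sketched as follows. This is a minimal illustration, not a production backtester: the price series is simulated, the strategy is a hypothetical moving-average filter, and in practice you would substitute actual index data for the two periods named above.

```python
import random

random.seed(0)
# Hypothetical price series standing in for real index closes.
prices = [100.0]
for _ in range(2000):
    prices.append(prices[-1] * (1 + random.gauss(0.0003, 0.01)))

def strategy_return(prices, window):
    # Simple illustrative rule: long when price is above its trailing
    # moving average, flat otherwise; sum the next-day returns captured.
    total = 0.0
    for i in range(window, len(prices) - 1):
        ma = sum(prices[i - window:i]) / window
        if prices[i] > ma:
            total += prices[i + 1] / prices[i] - 1
    return total

# Split the history: optimize only on the first half...
split = len(prices) // 2
in_sample, out_of_sample = prices[:split], prices[split:]
best_window = max(range(5, 100, 5),
                  key=lambda w: strategy_return(in_sample, w))

# ...then judge the system on data it never saw during optimization.
print(best_window, strategy_return(out_of_sample, best_window))
```

The key discipline is that `best_window` is chosen without ever touching the out-of-sample half; a large gap between in-sample and out-of-sample results is itself a curve-fitting warning.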
At minimum, even without out-of-sample studies, the system should be tested over several different market cycles, and common sense should guide your modeling. For example, several systems were readjusted after the 1987 crash so that their trading thresholds sat 0.001 above or below exactly where they needed to be to land on the right side of a one-day 22% move. This is one reason to focus on median results rather than averages, so that a single crash data point does not drive the system.
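The median-versus-average point is easy to demonstrate. The annual returns below are hypothetical, with one crash-year outlier of the 1987 variety mixed in: the average is pulled up sharply by that single data point, while the median barely notices it.

```python
from statistics import mean, median

# Hypothetical annual returns (%) for a system across several market
# cycles; the 55.0 is a single outlier year driven by one crash trade.
annual_returns = [8.0, 11.0, 9.5, 10.0, -1.0, 12.0, 55.0]

avg = mean(annual_returns)  # inflated by the lone outlier
med = median(annual_returns)  # robust measure of typical performance
print(round(avg, 2), round(med, 2))
```

A system whose average return depends heavily on one such year is exactly the kind the text warns about; the median tells you what a typical year actually looked like.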