At the turn of the last century a scandal occurred that rocked the investment world. Investment bank analysts were discovered giving strong recommendations for certain initial public offerings and stocks, while at the same time providing negative reports on these same stocks to preferred customers.
The banks gained listing business by promoting stocks, but many of these were stocks they knew were losers and they shared this only with select customers. The scandal led to new regulations regarding investment analysts and Regulation Fair Disclosure (Reg. FD).
On April 28, 2003, National Association of Securities Dealers (NASD), the U.S. Securities and Exchange Commission (SEC), the New York Stock Exchange (NYSE), the North American Securities Administrators Association (NASAA) and the New York State Attorney General announced the final terms of the Global Settlement of Conflicts of Interest Between Research and Investment Banking. Known as “The Global Settlement,” it dealt with conflicts of interest between investment banking and securities research at brokerage firms.
As a result of the investigation, 10 of the nation’s top investment firms agreed to pay a $1.4 billion settlement: $387.5 million of it in investor restitution and $487.5 million in penalties.
In addition to the fines, according to the SEC, “The firms were required to undertake dramatic reforms to their future practices, including separating their research and investment banking departments.”
“The scandals were all about the companies going public getting a massive buy rating, and behind the scenes they were telling institutional clients, ‘sell, sell, sell’ because the companies really weren’t worth it,” says Estimize CEO Leigh Drogen. “But they wanted to do the investment banking work and get paid all the fees. And that caused problems, and that is when the global settlement happened.”
Regulations followed that defined how firms could provide analysis in a way meant to remove the conflicts. But what actually resulted was a weak form of analysis consisting of “buy/sell/hold” ratings. Of course, the greatest conflict, the long-only bias, remained: Since the sell side still earned money through listings, there was still a bias toward buy recommendations. Sell-side ratings thus offered little value on their own, which led professional investors to seek a better way to gauge the thoughts of sell-side analysts, who by most measures did have the ability to add value.
“Historically, the sell side has produced the buy/sell/hold data set, and that data set unfortunately was random,” Drogen says. “While the participants do have skill — or persistence of accuracy or inaccuracy over time — the problem is that the system itself was not a good system; the question was not a good question.”
It was not a good question because the buy recommendation offers little guidance. Buy when? How much? Up to what level? What does “hold” mean? Drogen notes that some investors assume that buy means “overweight” but asks, “How would that work given that 80% of total recommendations are buys? How do you overweight 80% of a portfolio?” And, of course, there are precious few sell recommendations.
“They don’t get paid for being accurate as analysts; they get paid for the corporate access from the investment banking,” Drogen says.
An idea formed
Into this morass came a simple idea that, as best one can tell, emerged simultaneously at numerous quantitative trading shops and provided an edge to the 16 or so buy-side firms using it.
The idea was to scrap the usual buy/sell/hold ratings and ask the sell-side analysts to simply rate a basket of stocks within a sector.
“These guys know that both the sell side and the broader financial community have good information and skill regarding the ability to understand which stock will outperform others,” says Drogen. “At the least these funds want to ascertain what the sentiment of the market is saying at any time regarding what people believe. When you have that kind of structured data you can look for patterns in it. These guys are very good at looking for patterns in data.”
One of the firms that was doing this was Two Sigma. “There are a lot of firms that were running various forms of surveys of the sell side. It was a common practice of the industry,” says Omer Cedar, co-founder and CEO of Omega Point Research and a former VP at Two Sigma. “From a program perspective, we had over 120 banks and 25 countries doing it, so it was clearly known by a lot of people. The mechanism was a systematic survey that was asking the sell-side analysts a number of questions and one of the questions involved a Forcerank mechanism that was applied in order to better understand the relative conviction.”
“These quantitative funds said, ‘We know that the analysts are good at picking winners and losers when they are allowed to be honest, so why don’t we ask them a different set of questions?’” Drogen says. “These quant funds looked at all this and said, ‘How do we take advantage of the fact that these guys are pretty good?’” They wanted to isolate the ability of these analysts, who normally operate within a specific sector or industry group, to give them not an absolute rating, but a relative reading of one stock against the next.
“If I ask you, ‘How is Groupon going to perform next week?’ You would [say], ‘I don’t know.’ But if I ask you, ‘Who will perform better next week: Groupon or Amazon?’ I’ll bet you have an opinion on that and more than 50% of the time you will be right. When I ask you, ‘Groupon or Amazon?’ it immediately takes out the need to be correct regarding direction. It takes out the need to market time, position size; it takes out the need to do all those things except which will perform better next week. People are pretty good at that.”
Cedar explains that people tend to have a hard time rating things on a 1-to-10 scale, but when you force them to rank items against one another, you end up with a much more granular understanding.
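The mechanics of aggregating forced rankings are straightforward. Here is a minimal sketch, with invented contributor rankings (1 = expected best performer in the basket; tickers and numbers are illustrative, not real Forcerank data), of how individual forced ranks might be combined into a consensus ordering:

```python
# Hypothetical sketch: combining forced rankings from several
# contributors into a consensus ordering (lower average rank = better).
# All rankings below are invented for illustration.

def consensus_rank(rankings):
    """Average each stock's rank across contributors; lower is better."""
    tickers = rankings[0].keys()
    avg = {t: sum(r[t] for r in rankings) / len(rankings) for t in tickers}
    # Sort tickers by their average rank to produce the consensus order.
    return sorted(avg, key=avg.get)

rankings = [
    {"NVDA": 1, "INTC": 3, "AMD": 2},
    {"NVDA": 2, "INTC": 3, "AMD": 1},
    {"NVDA": 1, "INTC": 2, "AMD": 3},
]
print(consensus_rank(rankings))  # NVDA first (avg 1.33), then AMD (2.0), INTC (2.67)
```

Because every contributor must produce a full ordering, the aggregate captures relative conviction without asking anyone to call direction or timing.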
“You have to know how to structure the survey in a way that properly emulates reading a research report, otherwise it won’t produce a very good match for what you are trying to find,” Cedar says. “Ultimately the purpose of a lot of these systems was to do that — essentially to say, ‘We are not going to read a 130-page research report, please help us understand all the nuances in your research report in a much more structured survey.’”
He says the program was designed around the kind of information you would get if you called up your analyst on the phone. Traders — retail or professional — don’t simply look at a recommendation; they will call their broker and say, “Hey, I heard that XYZ stock is a buy, what can you tell me about it?”
Cedar adds: “Most people will call the analysts and ask different questions to understand more dimensions. That is essentially what the survey was designed to do: to understand more dimensions than simply boiling it down to one number, which is the underlying rating. It is intended to systematically mimic what fundamental buy-side analysts do: get on the phone, call a sell-side analyst and ask a bunch of questions. Because we are a quant and others are quants, they don’t necessarily have the buy-side analysts to do this work, so it was essentially an automated process.”
In fact, Cedar points out that this was an argument BlackRock made to regulators after the practice of sell-side analyst rankings came under scrutiny. Many hedge funds had voluntarily stopped collecting the data when a question arose over whether it violated Reg. FD; but BlackRock persisted.
Hammer comes down
By 2012, the practice of surveying sell-side analysts was common and being reported on in the financial press.
The New York Times reported on how many hedge funds and institutional trading desks (including Barclays Global Investors, which would later be bought out by BlackRock) were having success plumbing for sell-side information and expanding into rankings. Though the story highlighted internal documents indicating that some firms were receiving non-public information, Barclays included the following disclaimer in its outreach to analysts: “We would like to highlight the fact that we are only interested in public information. Please only share with us information that you publish through your research notes, investor calls and/or disclose in client meetings.”
This seemed like a simple workaround, but the New York Times story appeared to pique the interest of New York Attorney General Eric Schneiderman, whose office initiated an investigation into the practices, which Schneiderman dubbed “insider trading 2.0,” around the same time the story appeared. Roughly 18 months later the New York AG’s office announced an agreement under which BlackRock agreed to suspend its global analyst survey program and pay $400,000 to cover the cost of the investigation. The agreement includes no acknowledgment of guilt, and the payment is not a fine or settlement.
Schneiderman alleged that BlackRock’s activity violated the Martin Act, a relatively obscure state law that gives the AG greater power in investigating financial fraud within the securities industry. It covers “all deceitful practices contrary to the plain rules of common honesty,” according to a 1926 appellate court ruling.
“Our agreement with BlackRock to end its global analyst survey program and cooperate with my office’s Wall Street-wide investigation into the early release of analyst sentiment is a major step forward in ensuring fairness to our financial markets and ensuring a level playing field for all investors,” noted Schneiderman in the announcement. “The concept that there should be one set of rules for everyone is critical to protecting the integrity of our markets, which is why my office will continue to take action against those who provide unfair advantages to elite traders at the expense of the rest of us.”
Months earlier, Schneiderman’s office had reached an agreement with Thomson Reuters, which agreed to end its practice of selling early access to financial survey information to high-frequency traders. In announcing the BlackRock agreement, Schneiderman said the analyst survey program could “allow them to front run analyst recommendations” and raised the chance of exposure of non-public analyst sentiment.
Shortly after the agreement between the New York AG’s office and BlackRock, Two Sigma announced that it was suspending its survey program. “The BlackRock settlement may represent a change in the law regarding equity analysts’ communication with their clients,” a spokesman for Two Sigma said in a statement at the time, according to the Wall Street Journal. “A compliance-minded firm carefully studies what its regulators say and we are doing exactly that.”
Can’t kill a good idea
Regulatory nuances aside, the notion of collecting the opinions of traders and market experts proved valuable, so the industry looked for ways to continue gathering this type of data that would pass regulatory muster. And fortunately for them, there was a firm already doing this. Fintech firm Estimize was building a reputation for offering solid market analysis through a crowdsourcing methodology that was more accurate than the top analysts on the street (see “The view from the crowd,” page 28).
Drogen says that a couple of weeks after many of the hedge funds ended their survey programs, he met with one fund’s manager and began to talk about what he was doing with Estimize. The manager explained what was going on with the sell-side surveys, and Drogen saw that they were looking for the type of crowdsourced data that Estimize provides.
The manager told Drogen, “We would like you to build a public-facing forcerank system, not to collect data from the sell-side analysts but from a crowd-sourced community of buy-side, independent and non-professional people,” Drogen says. “The reason that that is legal is because sell-side analysts are bound by Reg FD, but everybody else is not, so we can collect that data from anybody except for the sell side and we can distribute it however we want to whomever we want.”
It was a perfect match because the hedge funds already knew the value of collecting this type of information, and it melded right into the Estimize value proposition.
At least 16 groups were doing this independently of each other and paying the sell side between $10 million and $12 million annually, according to Drogen. “It is so simple and simple things end up being the best thing when it comes to quantitative trading,” Drogen says. “The more simple you can make something, the more grounded in an objective reality you can make something, the fewer weird things can happen to the strategy.”
Estimize branded the project Forcerank and began collecting data. They promoted it as a contest in order to gain greater participation, but that ran afoul of the SEC (see “Forcerank 2.0” page 27).
Forcerank has only six months of data, but the early results are very positive and the buy side wants the data. “It is in their opinion showing the same signals as what the sell-side data showed,” Drogen says.
Forcerank releases the weekly data at 9:30 a.m. ET on Monday to customers, who pay hundreds of thousands of dollars a year for it. The people who contribute to each contest receive the data for that set of stocks for free at 11:30 a.m. Monday. Everybody else receives the data for free at 4 p.m. on Monday. “Clients get all of the data. They get to touch it, they get it in a file that they can use and put it into a systematic trading strategy; everybody else just gets to see it visually,” he says.
Just as with all of the Estimize crowdsourced data, anybody can participate.
Forcerank selects the contests by market sector. Each contest has 10 stocks in it. The contests are organized roughly by industry group and market capitalization. “We have the large cap semiconductor contest that has Nvidia (NVDA), Intel (INTC) and a bunch of other names. We also have four ETF contests: We have a U.S. sector ETF contest, a global indices ETF contest, a commodity ETF contest and a forex ETF contest,” Drogen says (see “Trading Forcerank’s data,” left).
There are currently 16 contests and they are adding new ones each week.
Cedar understands the value of this type of data and is using it for customers of Omega Point. “The overarching thing here is the wisdom of crowds and crowdsourcing, and what Estimize does is a very powerful mathematical truism,” Cedar says. “You actually can, by being able to aggregate uncorrelated opinions, find some average that is better than the best opinion you could get on any given day. That is the basic premise of most of these crowdsourcing systems.
“The real purpose of the system is twofold: The first one is you want to get as many uncorrelated opinions as possible so that you can extract a broad survey of the population, and the other thing you can do is try and make sure that you are getting a consistent frequent approach,” he adds.
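Cedar’s point about aggregating uncorrelated opinions has an exact algebraic backing, sometimes called the diversity prediction theorem: the squared error of the crowd’s average equals the average individual squared error minus the variance (diversity) of the opinions. A minimal check, with made-up earnings forecasts (the numbers are purely illustrative):

```python
# Why averaging uncorrelated opinions helps:
#   crowd_error = average individual error - diversity  (an identity).
# Forecast values below are invented for illustration.

truth = 2.00                       # realized figure, say EPS
forecasts = [1.80, 2.00, 2.30, 2.50, 1.90]

crowd = sum(forecasts) / len(forecasts)
crowd_err = (crowd - truth) ** 2
avg_indiv_err = sum((f - truth) ** 2 for f in forecasts) / len(forecasts)
diversity = sum((f - crowd) ** 2 for f in forecasts) / len(forecasts)

# The identity holds exactly, so the crowd average can never do worse
# than the average individual, and improves as opinions disagree more.
print(round(crowd_err, 6), round(avg_indiv_err - diversity, 6))
```

This is why both breadth (many opinions) and independence (uncorrelated errors) matter: correlated forecasters contribute little diversity, so the crowd’s edge shrinks.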
Using the data
“What you could also do when you have structured expectations, is to look for the people who are consistently good and consistently bad and find those super forecasters, super analysts who are very good at this and take advantage of that information,” Drogen says.
Estimize initially set up the contests to reward the most successful rankings. And while users of the data may choose to follow the most accurate participants or the crowd’s consensus, quantitative traders will find many more ways to break down the data.
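One hedged way to act on Drogen’s super-forecaster observation is to score each contributor by how well their past rankings matched realized performance, for example via the correlation between their submitted ranks and the realized ranks (since both are rank permutations, plain correlation is Spearman’s rho). The contributors and numbers below are hypothetical:

```python
# Hypothetical sketch: scoring contributors by trailing accuracy so that
# consistently good (or consistently bad) rankers can be identified.

def corr(xs, ys):
    """Pearson correlation; equals Spearman's rho when inputs are ranks."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Past contest: each contributor's submitted ranks vs. the realized ranking.
realized = [1, 2, 3, 4, 5]
history = {"alice": [1, 2, 3, 5, 4],   # nearly right
           "bob":   [5, 4, 3, 2, 1]}   # consistently inverted

skill = {user: corr(ranks, realized) for user, ranks in history.items()}
# A persistently wrong contributor gets a negative score; as the article
# notes, persistence of inaccuracy is itself usable information.
print(skill)
```

Skill scores like these could then weight (or invert) each contributor’s vote in the consensus.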
“One of the key insights you discover is crowding effect. You need to understand that there is crowding going on between the opinions,” Cedar says. “If all the opinions are pointing in the same direction and there is no variability then you actually end up with a poor signal. That is one of the key nuances; it is more important to have disagreement in what you are looking at to couch the fact that there is uncertainty. If everybody believes that Apple is totally going to beat earnings in a particular quarter then you don’t have that variability. If Apple misses, it is a major crowded trade and you don’t want to be in that position.”
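Cedar’s crowding caveat translates naturally into a dispersion filter: when contributors barely disagree about a stock, treat the consensus as a crowded trade and down-weight or skip it. A minimal sketch, with invented ranks and a hypothetical threshold:

```python
# Hedged sketch of a crowding filter: low dispersion among contributor
# ranks means everyone agrees, which Cedar warns is a poor signal.
# Ranks and the threshold are illustrative only.

def dispersion(ranks):
    """Population standard deviation of the ranks given to one stock."""
    m = sum(ranks) / len(ranks)
    return (sum((r - m) ** 2 for r in ranks) / len(ranks)) ** 0.5

# Ranks assigned to one stock by six contributors in a 10-stock contest.
crowded   = [1, 1, 1, 2, 1, 1]     # everyone agrees: beware
contested = [1, 4, 8, 2, 9, 5]     # genuine disagreement

MIN_DISPERSION = 1.0               # hypothetical cut-off
for name, ranks in [("crowded", crowded), ("contested", contested)]:
    usable = dispersion(ranks) >= MIN_DISPERSION
    print(name, "use signal" if usable else "crowded, skip")
```

In the Apple example from the quote, a unanimous “beats earnings” consensus would fail this filter: the variability needed to express uncertainty simply is not there.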
He also notes that while some may choose to follow the most accurate predictors, others may see that trend as creating a reversion trade. “That is one of the other key things; the people who have the hot hand today are less likely to have the hot hand tomorrow. You may be better off looking at people who have had the poor hand.”
Crowd source investment firm Quantopian has already designed a market neutral strategy based on Forcerank data (see “Trading Forcerank’s data,” page 24). “After a bit of research and backtesting, what I found was that using the crowdsourced stock rankings as an inverse predictor of future performance provided alpha potential within the context of a market neutral strategy,” says Quantopian product manager Seong Lee. “I found that going short on the highest ranked stocks and going long on the lowest ranked stocks provided the best performance in a market neutral strategy.”
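Lee’s inverse finding can be sketched as a simple dollar-neutral weighting scheme: short the most-favored names, long the least-favored. The function and tickers below are hypothetical and ignore costs, borrow, and risk controls; they only illustrate the construction, not Quantopian’s actual strategy:

```python
# Hypothetical sketch of an inverse, dollar-neutral book built from
# consensus Forcerank-style ranks (1 = most favored by the crowd).

def inverse_neutral_weights(consensus_ranks, n_side=2):
    """Short the n_side most-favored names, long the n_side least-favored."""
    ordered = sorted(consensus_ranks, key=consensus_ranks.get)
    shorts, longs = ordered[:n_side], ordered[-n_side:]
    w = {t: -1.0 / n_side for t in shorts}      # short the crowd favorites
    w.update({t: 1.0 / n_side for t in longs})  # long the least favored
    return w                                    # weights sum to zero

ranks = {"NVDA": 1, "INTC": 2, "AMD": 3, "MU": 4, "TXN": 5, "QCOM": 6}
w = inverse_neutral_weights(ranks)
print(w)  # {'NVDA': -0.5, 'INTC': -0.5, 'TXN': 0.5, 'QCOM': 0.5}
```

Because the weights net to zero, the book carries no outright market exposure; its return comes entirely from the relative spread between the two sides.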
Have it your way
That is the beauty of the Estimize model. It produces data that can be analyzed on a quantitative basis just as a trader would look at market data.
“If you have a short-term time horizon then you are going to look for signals and patterns that will help you identify changes in the stock price in the next few weeks,” Cedar says. “If you got a more long-term look at the company then you are going to look for fundamentally oriented shifts in the stock and holder base. Different people with different horizons will use it that way, and different strategies will look at different things. If you have a credit strategy and you are looking at companies with high risk of default, you might want to see changes off the bottom [because they] are a lot more dramatic than variations in the middle.”
While the Forcerank data is a treasure trove for quantitative traders, it also can be helpful to fundamental traders who find themselves being outperformed by the quants. “We saw a gap in the market with the usage of quantitative tools to analyze data sets like the one Estimize provides to the fundamentally focused buy side,” Cedar says. “There is a set of quant traders in the market but the majority of global portfolios are still being managed on the basis of fundamental discretionary oriented techniques, [which have underperformed recently].”
The idea is not to turn fundamental discretionary traders into quants. But just as discretionary commodity traders need to know where the long-only systematic funds are so that they don’t get run over, the same holds true on the equity side. Omega Point sells tools that make Forcerank data digestible to fundamental analysts, helping them understand the nuanced signal coming out of the Forcerank system and how that data could represent risk or opportunity in their portfolios, says Cedar.
“One of the automated benefits here is that you are able to capture that kind of information that somebody else has to read and go through at painstaking length, in a much more systematic fashion,” Cedar says.
The crowdsourced data is not some panacea that will create profitable traders, but the number of systems and strategies that can be built on Forcerank data is limited only by the imagination of the funds accessing it.
It may be a game changer, again.