Monday, November 19, 2012

Human After All: Why Nate Silver’s Math Revolution Won’t Kill The Pundits

Editor's note: Matt Baker is an engineer at Wealthfront, a software-based financial advisor that is democratizing access to high-quality financial advice. Follow him on Twitter.

In elections and finance, mathematical models take emotion out of the equation, but does the human element really disappear? Some people are using Nate Silver's "triumph" in the presidential election to conclude that we're witnessing the death of political pundits. People sometimes make the same kind of argument about investing, as Nick Shalek did in Thankfully, Software Is Eating the Personal Investing World.

I believe they are drawing a false dichotomy. Silver's use of mathematical models allowed him to call the election with an extraordinary degree of accuracy, and Shalek is right that algorithms can do some things better than human investors. What doesn't necessarily follow is the idea that the human element is dead.

In something more akin to evolution than revolution, Nate Silver represents the next generation of pundits, just as software represents the evolution of financial advice.

Algorithms are impressive tools that can offer information and insight in ways never before possible. But we cannot forget that these tools are still authored by humans. Our own assumptions (and, if we're not careful, biases) inform the models we create. The human element hasn't disappeared; we are just using better tools.

What Does The Research Say?

There's an impressive amount of evidence demonstrating how poorly experts make forecasts. In his book Expert Political Judgment: How Good Is It? How Can We Know?, researcher Philip Tetlock analyzed 82,361 forecasts made by 284 pundits.

What was the conclusion of his study?

"When we pit experts against minimalist performance benchmarks — dilettantes, dart-throwing chimps, and assorted extrapolation algorithms — we find few signs that expertise translates into greater ability to make either "well-calibrated" or "discriminating" forecasts."

In short, he found the predictions of experts to be little better than random. What's more, Tetlock found a distinct inverse relationship between a forecaster's calibration and the media attention he or she received: the experts the media consulted most often were also the least accurate.
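
Tetlock's terms have concrete meanings: a forecaster is "well-calibrated" if events she assigns, say, a 70 percent probability actually happen about 70 percent of the time. Here is a minimal sketch of how that can be measured, using invented forecasts rather than Tetlock's data (the Brier score and the binning approach are standard tools, not his exact methodology):

```python
# A minimal sketch of how forecast calibration can be measured.
# The forecasts and outcomes below are hypothetical, not Tetlock's data.

def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and what happened.
    0.0 is perfect; 0.25 is what always guessing 50/50 earns."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

def calibration_table(forecasts, outcomes, bins=5):
    """Group predictions into probability bins and compare the average
    prediction in each bin to the observed frequency of the event."""
    table = {}
    for p, o in zip(forecasts, outcomes):
        b = min(int(p * bins), bins - 1)
        table.setdefault(b, []).append((p, o))
    rows = []
    for b, pairs in sorted(table.items()):
        preds = [p for p, _ in pairs]
        obs = [o for _, o in pairs]
        rows.append((sum(preds) / len(preds), sum(obs) / len(obs), len(pairs)))
    return rows  # (mean prediction, observed frequency, count) per bin

# Hypothetical pundit: confident 90 percent calls on events that, in reality,
# happen only about half the time.
forecasts = [0.9, 0.9, 0.9, 0.9, 0.1, 0.1]
outcomes  = [1,   0,   1,   0,   0,   1]

print(brier_score(forecasts, outcomes))  # ~0.41, worse than the 0.25 coin-flip baseline
for mean_pred, freq, n in calibration_table(forecasts, outcomes):
    print(f"predicted ~{mean_pred:.2f}, happened {freq:.2f} of the time (n={n})")
```

A well-calibrated forecaster's rows would line up: the average prediction in each bin would roughly match the observed frequency.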

In the 1980s and 90s, The Wall Street Journal, inspired by Burton Malkiel's book A Random Walk Down Wall Street, held a series of contests that pitted stock analysts against a dartboard (Dr. Malkiel just joined Wealthfront as our chief investment officer). Professor Bing Liang at the University of Massachusetts analyzed years of data from the contests and eventually concluded "the performance of the pros' stocks is indistinguishable from that of the dartboard stocks for 90 percent of the contests." Once again, the value of experts proved dubious.

The truth is, the experts we rely on are just as vulnerable to bias and emotional influence as you or I. On his popular investment show, Mad Money, Jim Cramer offers viewers definitive "buy" and "sell" advice as he attempts to forecast the stock market the way political pundits forecast elections. Despite his claims to the contrary, Barron's found that, if anything, Cramer was underperforming the S&P 500.

Cramer's picks, the magazine found, were often stocks that were already trending upward or downward before the show. Cramer was committing a cardinal sin by jumping on the bandwagon of rising stars and bailing on poor performers, ultimately leading viewers to buy high and sell low.

Why Do Experts Screw Up?

Research that the SEC commissioned from the Library of Congress found that nearly all the harmful investing decisions people make boil down to emotional bias. We buy when stocks do well, we sell in a panic when they fall, we invest in what we know instead of diversifying — the list goes on. Everyone, from "experts" like Cramer to do-it-yourselfers in the general public, can fall victim to these biases.

Proponents of algorithmic tools in investing, including Nick Shalek, argue that financial models grounded in rigorous mathematics aren't tempted by the exciting upward trend of a stock, or fearful of its drop in value. Math can help eliminate the emotions and bias that make us flawed investors and experts. Leave your panic behind, and let the algorithms do it for you.
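
To make that concrete, here is a toy sketch of the kind of mechanical rule such proponents describe. The target weights, drift tolerance, and prices are invented for illustration; this is not Wealthfront's or anyone else's actual strategy. The point is simply that the rule trims whatever has run up and adds to whatever has fallen, with no regard for how the investor feels about either.

```python
# Toy threshold rebalancing: hypothetical targets and prices, not a real strategy.
# The rule sells whatever has drifted above its target weight and buys whatever
# has drifted below it -- the opposite of chasing winners and dumping losers.

TARGETS = {"stocks": 0.60, "bonds": 0.40}   # assumed target allocation
DRIFT_LIMIT = 0.05                          # rebalance if any weight drifts 5 points

def weights(holdings, prices):
    values = {a: holdings[a] * prices[a] for a in holdings}
    total = sum(values.values())
    return {a: v / total for a, v in values.items()}, total

def rebalance_if_needed(holdings, prices):
    w, total = weights(holdings, prices)
    if all(abs(w[a] - TARGETS[a]) <= DRIFT_LIMIT for a in TARGETS):
        return holdings  # within tolerance: do nothing, no matter the headlines
    # Otherwise reset every position to its target dollar amount.
    return {a: (TARGETS[a] * total) / prices[a] for a in TARGETS}

holdings = {"stocks": 60.0, "bonds": 40.0}
prices   = {"stocks": 1.00, "bonds": 1.00}   # portfolio starts exactly on target

# Stocks rally 30%: the rule trims them and adds to bonds (selling high, buying low).
prices["stocks"] = 1.30
holdings = rebalance_if_needed(holdings, prices)
print({a: round(q, 1) for a, q in holdings.items()})
```

The output shows the stock position shrinking and the bond position growing after the rally, which is exactly what a fearful or euphoric human investor tends not to do.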

But the assumptions behind the algorithms need to be right – and those, too, can be affected by emotion.

The Good, The Bad, And The Biased

Nate Silver's 538 model wasn't the only electoral forecasting model this election season. Votamatic.org and the Princeton Election Consortium both correctly predicted the outcome of the race. Arguably, the Princeton model fared even better than Silver's, nailing all 50 states and the popular vote.

But there were duds, too. The right heaped praise on the "Unskewed Polls" model created by Dean Chambers. Chambers' approach was to re-weight polling data to correct for what he believed was an oversampling of Democrats, citing the GOP-favoring party breakdown from Rasmussen. Chambers believed the participating electorate would favor the GOP, not the Democratic Party. In the end, Democrats' participation did outstrip Republicans', just as the original polls indicated. Romney lost, and Chambers was off by 74 electoral votes.
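
The figures below are invented, not Chambers' actual numbers, but they show the mechanics of that kind of re-weighting and why the turnout assumption carries all the weight: feed the same raw poll a different assumed party mix and the topline flips.

```python
# Hypothetical illustration of re-weighting a poll by an assumed party-ID turnout mix.
# These figures are invented for the example; they are not Chambers' actual numbers.

# Candidate support within each party group (fraction voting for the Democrat).
dem_share_by_party = {"Democrats": 0.92, "Republicans": 0.07, "Independents": 0.48}

def topline(turnout_mix):
    """Weighted Democratic share of the vote under an assumed turnout mix."""
    return sum(turnout_mix[g] * dem_share_by_party[g] for g in turnout_mix)

# Turnout mix the pollster actually sampled (Democrats outnumber Republicans).
as_polled = {"Democrats": 0.38, "Republicans": 0.32, "Independents": 0.30}
# "Unskewed" assumption: the electorate will lean Republican instead.
unskewed  = {"Democrats": 0.32, "Republicans": 0.37, "Independents": 0.31}

print(f"as polled:   Dem {topline(as_polled):.1%}")  # the Democrat is ahead
print(f"re-weighted: Dem {topline(unskewed):.1%}")   # same respondents, Democrat now trails
```

Nothing in the arithmetic is wrong; the output is driven entirely by the assumption the author feeds in.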

In an interview with Business Insider, Chambers put it simply: "I think it was much more in the Democratic direction than most people predicted, but those assumptions — my assumptions — were wrong."

Like any pundit, Chambers was influenced by his own bias. His model was founded on a fundamentally flawed assumption, and all the math in the world wouldn't have fixed it. (Chambers wasn't the only one. A model from the University of Colorado Boulder forecast a Romney win with 330 electoral votes.) Though his story has been lost in the flood of articles fawning over Silver, it tells a cautionary tale. Math might not be fallible, but the people who do it are.

Experts Are Dead, Long Live Experts

Today, Tetlock is working on a more sophisticated version of the research he covered in Expert Political Judgment: How Good Is It? How Can We Know? This time he will test experts working alongside models, as Silver does, as well as models on their own. Tetlock says the future is "Kasparov plus Deep Blue, not Kasparov vs. Deep Blue."

While mathematical models may be informed by data, they are authored by humans. A model must be vetted and periodically reconsidered if it is to withstand the buffeting of unexpected events and faulty assumptions. Nate Silver's secret sauce remains undisclosed: while he shared many features of his model, it was ultimately inaccessible to peer review. Without transparency, there is an even greater danger that the creators of statistical models will fall prey to the same faults Tetlock found in pundits.

Models are tools. They far exceed our own ability to condense and process the multitude of data available in areas like politics and finance. They can inform us, and even forecast for us, but they are only as strong as the rigor employed by their authors. Nov. 6, 2012, was not the triumph of data over pundits; it was a watershed event in the evolution of our predictions. We're witnessing a revolution in the tools and accuracy of experts, but our forecasts will always be human.

