In 2001, the Defense Advanced Research Projects Agency (DARPA) started experimenting with methods for applying market-based concepts to intelligence. One such project, DARPA's Future Markets Applied to Prediction (FutureMAP) program, tested whether prediction markets, markets in which people bet on the likelihood of future events, could be used to improve upon existing approaches to preparing strategic intelligence. The program was cancelled in the summer of 2003 under a barrage of congressional criticism. Senators Ron Wyden and Byron Dorgan accused the Pentagon of wasting taxpayer dollars on “terrorism betting parlors,” declaring that “Spending millions of dollars on some kind of fantasy league terror game is absurd and, frankly, ought to make every American angry.”
Americans need not have been angry about FutureMAP. It was neither a terrorism betting parlor nor a fantasy league. Rather, it was an experiment to see whether market-generated predictions could improve upon conventional approaches to forecasting. Since 1988, traders in the Iowa Electronic Markets have been betting with remarkable accuracy on the likely winners of US presidential elections. Eli Lilly, a major pharmaceutical company, found that prediction markets outdid conventional methods in forecasting the outcomes of drug research and development efforts. Google recently announced that it was using prediction markets to “forecast product launch dates, new office openings, and many other things of strategic importance.”
There is an irony here: the crowd whose collective judgment prediction markets are designed to harness badly misjudged prediction markets themselves, a reaction FutureMAP's sponsors might have anticipated.
The decision to cancel FutureMAP was at the very least premature, if not wrong-headed. The bulk of the evidence on prediction markets demonstrates that they are reliable aggregators of disparate and dispersed information and can produce forecasts that are more accurate than those of experts. If that evidence holds, prediction markets could contribute substantially to the US Intelligence Community's strategic and tactical intelligence work.