Wednesday 10 September 2014

The science of the future

No-one can predict the future. Every sensible person knows that. Throughout history some very clever people have let their cleverness get the better of them and decided that they could discern the future by looking closely at the past and the present -- but they've always been wrong. At least in the case of massive messes of human activity, like economics and geopolitics. Any good scientist knows what's going on here: you have an extremely complex system -- no, many, many extremely complex systems -- played out by millions or billions of players (at the very least; don't forget all of the neurons in each of those billions of people's brains), and they are interacting in unknown and almost certainly highly nonlinear ways. Of course no-one can predict what's going to happen next.

Right?

Maybe not.

On Friday Tim Harford published an article in the Financial Times on a "groundbreaking study" that suggests that it is in fact possible to predict the future.

Before you get too carried away, here are the qualifying statements. The jumping-off point is a 2005 book by Philip Tetlock, "Expert Political Judgment", which collected 18 years of statistics on the accuracy of expert predictions in economics and geopolitics, and concluded that they were, across the board, appalling. If you're spiritually aligned with the opening paragraph above, this will come as no surprise -- but there's a big difference between knowing that something is "obvious" and collecting the data to demonstrate it.

Harford made much of Tetlock's study in his book "Adapt", and it was this I had in mind when I made sure to distinguish between scientific predictions and "expert predictions", or forecasts, in my series of posts on scientific expertise and climate change [1].

Now for the new stuff. Since then Tetlock and others have been running a study called the Good Judgment Project. The idea is to see whether accurate forecasts really are possible, and whether people can be trained to make them. And the claim, as reported by Harford, is: yes, they can. If forecasters focus on specific questions (e.g., not "Will Russia take over Ukraine?" but "Will Russia take over Ukraine before November 1, 2014?"), if they collect continuous feedback on the success of their predictions, and if the most successful forecasters are then organized into teams and their predictions aggregated -- then, they claim, much more accurate forecasts are possible. These teams of "superforecasters", we are told, "can predict geopolitical events with an accuracy far outstripping chance."

The article doesn't say what "far outstripping chance" means. No examples are given. I can't find any published results. (Although I have hardly performed a rigorous literature search.) A 45-minute video of Tetlock at edge.org provides a little more detail, but again without examples of just what it means to be a superforecaster. On what sorts of questions do they do better than chance, or better than regular "experts"? And what does better mean, statistically?
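For what it's worth, the standard way to score probabilistic forecasts of this kind is something like the Brier score -- though whether that is precisely the metric behind "far outstripping chance" is my assumption, not something the article says. A minimal sketch, with invented forecasts:

```python
# A minimal sketch of one standard way to score probabilistic forecasts:
# the Brier score, the mean squared difference between the forecast
# probability and what actually happened. The forecasts below are made up.

def brier_score(forecasts, outcomes):
    """Forecasts are probabilities in [0, 1]; outcomes are 0 or 1.
    Lower is better; always answering 0.5 scores exactly 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Three hypothetical yes/no questions with deadlines, e.g.
# "Will X happen before 1 November 2014?"
superforecaster = [0.9, 0.2, 0.7]   # confident and mostly right
coin_flipper    = [0.5, 0.5, 0.5]   # "no idea" on every question
outcomes        = [1, 0, 1]         # what actually happened

print(brier_score(superforecaster, outcomes))  # ~0.047
print(brier_score(coin_flipper, outcomes))     # 0.25
```

On a measure like this, "better than chance" just means beating the 0.25 you get by answering 0.5 to every question.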

But let's assume for now that this all holds up and that we really can train people to make political and economic predictions at a level of reliability that was hitherto impossible. Let's ignore silly philosophical objections, like, "Isn't your prediction going to change the outcome?" Let's consider simply the implications of a discovery that the outcome of complex interactions of human populations could be forecast with an accuracy better than random chance.

If this were true, what could be going on? Can this make sense in the complex physical universe we currently understand ourselves to live in?

Here's one way to think that it could.

Human beings with a lamentable ignorance of Newtonian physics can perform extremely well at ball games -- they can calculate trajectories with incredible speed and accuracy. No matter what they may claim, they cannot do this naturally: they learn it through practice. After all that practice they have no more understanding of Newton's laws of motion than when they started; their brains have simply extracted a pattern from repeated experience. But even though they don't know the rules behind what they have learnt, the fact that there was a pattern to extract means that some rules must exist.

People had been catching balls for thousands of years before Newton was born. If people can extract meaningful patterns from political and economic events, then those events must also follow rules that we could one day learn.

Or another example, which may be better. You can record the sound of a crowd of people talking at a party. If you analyze the sound levels in the room, there is no clear voice: all of the talking together adds up to noise. But the human ear, aided of course by the human brain, can pick out a familiar voice in that crowd and listen to what it says. We have been able to do this for thousands of years, but it was only last century, with the appearance of digital recording and clever signal-processing techniques, that we could do the same thing artificially. And now the technology can do it better than we can: your phone can recognize a song playing in a noisy bar before you do.
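As a toy illustration of the kind of trick involved -- emphatically not the algorithm in anyone's phone, just the simplest version of a matched filter -- here is a sketch of picking a known signal out of noise by cross-correlation:

```python
# Toy illustration: find a known signal buried in noise by sliding it
# along the recording and looking for the best match (cross-correlation).
import numpy as np

rng = np.random.default_rng(0)

fs = 8000                                 # sample rate, Hz
t = np.arange(0, 0.1, 1 / fs)
voice = np.sin(2 * np.pi * 440 * t)       # the "familiar voice": a 440 Hz burst

party = rng.normal(scale=2.0, size=fs)    # one second of loud crowd noise
hidden_at = 3000
party[hidden_at:hidden_at + voice.size] += voice   # bury the voice in the noise

# Slide the known signal along the recording; the peak marks the best match.
match = np.correlate(party, voice, mode="valid")
print(f"voice found near sample {np.argmax(match)} (it was hidden at {hidden_at})")
```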

Just because the laws that govern politics and economics may be largely unknown to us, that doesn't mean they don't exist. On the contrary: if people can be trained to make accurate predictions of these events, then there must be some simple laws behind them. The "superforecasters" don't know what those laws are, but they've picked up the patterns that those laws leave in their wake.

Think of chaos theory. Somewhere between simple linear physical effects, like a collision between two rubber balls, and the effective randomness of billions of those balls rattling around in a box (i.e., the statistical physics of gases), we have complex phenomena that can be described by relatively simple rules: a simple algorithm can generate the shape of a fern leaf.
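The fern is presumably something like Barnsley's fern -- my guess, but it is the textbook example: four affine maps, applied over and over in random order, and the points visited trace out a fern leaf. A sketch:

```python
# A sketch of the classic "simple algorithm draws a fern" example:
# Barnsley's fern, an iterated function system of four affine maps.
import random

def barnsley_fern(n=50_000):
    """Repeatedly apply one of four affine maps, chosen at random with
    fixed probabilities; the visited points trace out a fern leaf."""
    x, y = 0.0, 0.0
    points = []
    for _ in range(n):
        r = random.random()
        if r < 0.01:                       # stem
            x, y = 0.0, 0.16 * y
        elif r < 0.86:                     # successively smaller leaflets
            x, y = 0.85 * x + 0.04 * y, -0.04 * x + 0.85 * y + 1.6
        elif r < 0.93:                     # largest left-hand leaflet
            x, y = 0.20 * x - 0.26 * y, 0.23 * x + 0.22 * y + 1.6
        else:                              # largest right-hand leaflet
            x, y = -0.15 * x + 0.28 * y, 0.26 * x + 0.24 * y + 0.44
        points.append((x, y))
    return points

# Scatter-plot the points (e.g. with matplotlib) and a fern appears.
```

The point is not the fern itself, but the gap between the four-line rule and the intricate shape it produces.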

Chaos theory ultimately did not prove very useful in economics: uncovering the beautiful patterns wasn't the same as predicting what the market would do next week. So do these new results suggest that there is another idea out there? A whole new branch of statistical analysis, or even of physical science, that would allow us to predict what has, until now, been unpredictable?

Or is it all going to turn out to be a lot of hot air?

With no published results, no data to consider, and no useful expertise in this field, I am not qualified to make a prediction. All I can do is offer the most conservative guess: the probability that superforecasters are a real phenomenon is 0.5. Meaning: I have no idea if they are real.

But it is certainly a fascinating and worthwhile enterprise to try and find out.

Note
1. My climate-change meanderings follow a circuitous route through eight posts. They start here, but the summing up of my main argument is in the last one, here.


6 comments:

  1. The late, great Iain M Banks had one such superforecaster in "Consider Phlebas", with some hints as to what the implications of one might be.

     Reply: I assume the implications were that high drama would ensue.
  2. Good post, Mark (as always). It's an interesting distinction between predicting and forecasting. You might be interested in the approach taken by Douglas Hubbard in "How to Measure Anything", which looks at estimation -- which sounds like a close cousin of forecasting -- and how to make it more robust. http://www.howtomeasureanything.com/how-to-measure-anything-third-edition/ I'm not on commission -- it's a really well-thought-through approach!

     Reply: Looks like some proselytising for the Rev. Bayes -- and I speak with no prior bias.
  3. How did the Foundation series's Hari Seldon not feature in this post?

     Reply: Sorry, this post was concerned with real human beings.

     But it's a good point. With the advent of chaos theory the whole premise of the Foundation series became fallacious and naive. Maybe now the Good Judgment Project will strike a blow back for Asimov.

[Note: comments do not seem to work from Facebook.]