From Rob Henderson, The Happiness-Accuracy Tradeoff and the Limits of Rationality.
I am going to have to agree with Taleb that forecasting, and even super-forecasting, fails to rise to the level of a science. It’s interesting, but can it be made rigorous in a scientific or quantitative sense? Maybe not. Super-forecasting is supposed to be better than ordinary forecasting, but I’m skeptical of that too. Some people may well be better at forecasting than others, but I see two major problems:
1. Goalpost moving
2. The inherent difficulty of ascertaining skill
He writes:
Additionally, Tetlock and his colleagues found that for strong forecasting ability, a commitment to belief-updating and self-improvement is three times more powerful than intelligence.
This whole ‘belief-updating’ and ‘self-improvement’ business seems like goalpost moving. Either the forecast is right or it is wrong. If you are allowed to keep adjusting, doesn’t that implicitly assume that supposedly independent events are actually correlated? Say Google stock is at $100. You predict it will go to $110 next week, but it falls to $90. Do you update your belief and revise your prediction to $80, extrapolating the trend into the future? Do you simply admit you were wrong? Or do you admit you were wrong for this specific week but still believe Google will get to $110 eventually (maybe a month from now)? Any one of these is defensible. The fact that it is so hard to agree on the terms of a prediction or forecast is why it’s hard to make this rigorous in any scientific sense.
I think at best we can come up with heuristics that allow one to make more accurate forecasts, but it’s also trivial to inflate one’s overall accuracy by making lots of easy predictions, which creates a false sense of epistemic certainty. Financial trends can persist for a very long time and then change abruptly, as when inflation surged in 2021-2022 after running very low from 2008 to 2021. Predicting low inflation every year would have given you a 13/13 (100%) record from early 2008 to early 2021, a superb track record, only to go kaboom in 2022. So how does one reconcile this from a super-forecasting standpoint? I dunno.
Is it better to get lots of small predictions right but miss the big ones, or to get the big ones right at the cost of overall accuracy? Someone who predicts nuclear war every year will look like a genius if nuclear war does actually happen, but otherwise will have a terrible track record. Someone who predicts the opposite has a great track record but is wrong where it matters most. Who is the better or more skilled forecaster? Likely neither.
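For what it’s worth, Tetlock’s tournaments score forecasters with the Brier score (the mean squared error of stated probabilities) rather than a raw hit rate. Here is a minimal sketch, with made-up probabilities and a hypothetical 30-year window, of how it would adjudicate the two forecasters above:

```python
YEARS = 30  # hypothetical window; the war occurs only in the final year
outcomes = [0] * (YEARS - 1) + [1]

# Forecaster A cries wolf with 90% confidence every year;
# Forecaster B assigns the war a 5% chance every year.
forecasts_a = [0.90] * YEARS
forecasts_b = [0.05] * YEARS

def brier(ps, ys):
    """Mean squared error of probability forecasts (0 is perfect, 1 is worst)."""
    return sum((p - y) ** 2 for p, y in zip(ps, ys)) / len(ys)

print(f"A (perma-alarmist): {brier(forecasts_a, outcomes):.3f}")  # ~0.783
print(f"B (perma-calm):     {brier(forecasts_b, outcomes):.3f}")  # ~0.033
```

The Brier score declares B the far better forecaster, but notice that it resolves the tie only by deciding that many cheap hits outweigh the one catastrophic miss, which is exactly the judgment call in dispute here.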
Or consider the pundit who every year predicts that ‘X’ will happen: a financial crisis, a war, inflation. If after two or so years X fails to happen, that pundit stops making the prediction, but a new pundit comes along and makes the exact same call. Eventually the surviving pundit, whose prediction happens to coincide with the event, will look like a genius even though it was just luck.
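This survivorship dynamic is easy to simulate. A minimal sketch, assuming a made-up 5% annual chance of the crisis and pundits who retire after two failed calls:

```python
import random

random.seed(0)  # reproducible run

# Hypothetical setup: each year the crisis occurs with 5% probability.
# A pundit predicts "crisis this year" every year; after two straight
# misses they quit and a fresh pundit takes over with the same call.
P_CRISIS = 0.05

def pundits_until_genius():
    """Count how many pundits cycle through before one 'calls' the crisis."""
    pundits, misses = 1, 0
    while True:
        if random.random() < P_CRISIS:
            return pundits          # the current pundit looks like a genius
        misses += 1
        if misses == 2:             # retire after two failed years
            pundits += 1
            misses = 0

trials = [pundits_until_genius() for _ in range(100_000)]
print(f"pundits burned per eventual 'genius': {sum(trials) / len(trials):.1f}")
```

On average roughly ten pundits fail quietly for every one who ends up looking prescient, so the visible track record of the survivor tells you almost nothing about skill.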