The extent to which AI predictions are likely to be inaccurate relative to other kinds of predictions is unclear. The reasons we have investigated for suspecting they would be less accurate seem weak, but we have not investigated all of the relevant considerations.
Predictions of AI timelines are likely to be biased toward optimism by roughly decades, especially if they are voluntary statements rather than surveys, and especially if they are from populations selected for optimism.
A number of reasons have been suggested for particularly distrusting AI predictions:
- Disparate predictions: One sign that AI predictions are not very accurate is that they span a range of a century or so. This is not as straightforward or as damning as it may appear: very similar probability distributions can yield very different 'predictions', depending on exactly what is meant by 'prediction'.
- Similarity of old and new predictions: Old predictions form a fairly similar distribution to more recent predictions, except for very old predictions. This is weak evidence that new predictions are inaccurate.
- Similarity of expert and lay opinions: There is somewhat conflicting evidence on whether expert and lay opinions on AI timelines differ substantially. Probably they do. This is weak evidence in favor of AI experts having relevant expertise.
- Model of areas where people predict well: Research has characterized the situations in which experts predict well and those in which they do not. AI is purportedly a central case of the latter. See table 1 here. We have not investigated this in depth.
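The point about disparate 'predictions' can be illustrated with a toy example: a single right-skewed distribution over arrival times yields point 'predictions' decades apart depending on whether one reports its mode, median, or mean. A minimal sketch, using an illustrative lognormal whose parameters are made up rather than fitted to any survey:

```python
import math

# Hypothetical lognormal over "years until human-level AI":
# log(years) ~ Normal(mu, sigma). Parameters are illustrative only.
mu, sigma = math.log(40), 0.8

mode = math.exp(mu - sigma**2)      # most probable number of years
median = math.exp(mu)               # 50th percentile
mean = math.exp(mu + sigma**2 / 2)  # expected number of years

print(f"mode   ~ {mode:.0f} years")
print(f"median ~ {median:.0f} years")
print(f"mean   ~ {mean:.0f} years")
```

Here a single forecaster's distribution produces summary 'predictions' more than three decades apart, so a century-wide spread across forecasters need not mean their underlying beliefs differ by a century.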
A number of biases have been posited to affect predictions of human-level AI:
- Selection bias from an optimistic prediction population: Some of the factors that lead people to make predictions about AI are likely to correlate with expecting human-level AI to arrive sooner. Evidence from surveys suggests that people working in AGI are likely to be unrealistically optimistic about AI timelines, by roughly decades.
- Bias from short-term predictions being recorded: There are a few reasons to expect recorded public predictions to be biased toward shorter timescales. Overall these probably make public statements more optimistic, by on the order of a decade.
- Maes-Garreau law: The Maes-Garreau law is a posited tendency for people to predict important technologies arriving not long before their own likely deaths. It probably does not substantially afflict predictions of human-level AI.
- Fixed-period bias: There may be a stereotype that people tend to predict AI in 20-30 years. There is weak evidence of such a tendency around 20 years, though little evidence that it is due to a bias.
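The selection effect described above can be sketched with a toy simulation. The assumption that willingness to volunteer a prediction falls linearly with one's timeline estimate is entirely hypothetical, chosen only to show the direction of the effect:

```python
import random

random.seed(0)

# Toy population of timeline estimates, in years (parameters made up).
population = [random.gauss(60, 20) for _ in range(100_000)]

def publishes(estimate):
    # Hypothetical selection rule: the longer someone's estimate,
    # the less likely they are to volunteer a public prediction.
    return random.random() < max(0.05, 1 - estimate / 100)

recorded = [e for e in population if publishes(e)]

def median(xs):
    s = sorted(xs)
    return s[len(s) // 2]

print(f"population median: {median(population):.0f} years")
print(f"recorded median:   {median(recorded):.0f} years")
```

Even though no individual misreports their belief, the recorded predictions have a noticeably earlier median than the population they were drawn from; surveys that sample a whole population avoid this particular distortion.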
These biases would complicate the interpretation of AI predictions. In our opinion, however, they are unlikely to substantially worsen the predictions themselves.
AI appears to exhibit several qualities characteristic of areas that people are not good at predicting. Individual AI predictions appear to be inaccurate by many decades, by virtue of their disagreement. Other grounds for particularly distrusting AI predictions seem to offer weak evidence against them, if any. Overall, this weakly suggests AI predictions are likely to be less accurate than many other things people predict, though quantifying this with current data seems very hard.
Biases toward early estimates appear to exist, as a result of optimistic people becoming experts, and of optimistic predictions being more likely to be published for various reasons. This suggests that predictions from researchers working very closely on human-level AI are likely to be particularly biased toward optimism, as are voluntary statements.