MIRI AI predictions dataset

The MIRI AI predictions dataset is a collection of public predictions about human-level AI timelines. We edited the original dataset, as described below. Our dataset is available here, and the original here.


People say many slightly different things about when AI will arrive. We interpreted predictions into a common format: a claim about when AI would be less likely than not, and a claim about when AI would be more likely than not. For many predictions, these met at a single date. Some did not, because the person was not talking about the point at which AI would change from being less likely to more likely, but instead, for instance, about some date at which they felt it would still be unlikely. Some predictions were interpreted only as a claim about times when AI would be likely, or only as a claim about times when AI would still be unlikely.

This is different from the 'median' interpretation that came with the original dataset. Either interpretation could produce misleading 'predictions' in some circumstances. People who say 'AI will come in about 100 years' and 'AI will come within about 100 years' probably don't mean to point to estimates 50 years apart. On the other hand, our measure overestimates the real 50th percentile time estimate, since people often do not appear to be stating the earliest time at which they thought AI was more likely than not. However, it seems unlikely to overestimate it by more than about a year in general, since the no-AI predictions (the latest dates at which people said AI was less likely than not) have the same mean and median as the AI predictions, to within two years (see the 'Basic statistics' sheet).

Throughout this page, we use 'AI prediction' to refer to the earliest time at which a person is interpreted as stating that AI is more likely than not. We use 'no-AI prediction' to refer to the latest time at which a person is interpreted as stating that AI is less likely than not. These are not necessarily the earliest and latest times that the person holds the requisite belief, just the earliest and latest times implied by their statement. For instance, if a person says 'I disagree that we will have human-level AI in 2050', we interpret this as a no-AI prediction of 2050, though they may well believe AI is less likely than not in 2060 too.
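This interpretation scheme can be sketched as a small record type. This is a hypothetical illustration; the field names and example statements are ours, not from the dataset:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Prediction:
    """One interpreted public statement about human-level AI timelines."""
    predictor: str
    year_made: int
    ai_prediction: Optional[int] = None     # earliest year AI is stated to be more likely than not
    no_ai_prediction: Optional[int] = None  # latest year AI is stated to be less likely than not

# 'I disagree that we will have human-level AI in 2050' implies only a
# no-AI prediction: nothing in the statement fixes an earliest likely year.
sceptic = Prediction("example sceptic", year_made=2010, no_ai_prediction=2050)

# 'AI will come in about 100 years', said in 1970, is read as the point where
# AI switches from less likely than not to more likely than not, so both
# interpreted dates coincide.
optimist = Prediction("example optimist", year_made=1970,
                      ai_prediction=2070, no_ai_prediction=2070)
```

When a statement pins down the crossover date, the two interpreted dates coincide; otherwise only one of them is filled in.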

'Early' throughout refers to before 2000. 'Late' refers to 2000 onwards. We split the predictions in this way because often we are interested in recent predictions, and 2000 is a relatively natural recent cutoff. We chose this date without conscious attention to the data beyond the fact that there have been plenty of predictions since 2000.

We categorized people as 'AGI', 'AI', 'futurist' and 'other' as best we could, according to their apparent research areas and activities. These are ambiguous categories, but the ends to which we put such categorization do not require that they be very precise.


We got the MIRI dataset from here. According to the accompanying post, the Machine Intelligence Research Institute (MIRI) commissioned Jonathan Wang and Brian Potter to gather the data. Kaj Sotala and Stuart Armstrong analyzed and categorized it (their categories are available in both versions of the dataset).

Changes to dataset

These are changes we made to the dataset:
  • There were a few instances of summary results from large surveys included as single predictions - we separated these from the main dataset, as survey medians and public predictions are different enough that they are probably better interpreted separately.
  • We removed entries which appeared to be duplications of the same data, from different sources.
  • We removed many predictions from the same people, made within a decade of each other (we have probably missed some still).
  • We added some predictions we knew of which were not in the data.
  • We fixed some small errors.
  • We removed some data which appeared to have been collected in a biased fashion, where we could not correct the bias (as discussed here).
Datapoints that we deleted can be seen in the last sheets of our version of the dataset, along with the reason for their deletion.

We continue to change the dataset as we find predictions it is missing, or errors in it. The current dataset may not exactly match the descriptions on this page.

How did our changes matter?

Some notable implications of the above changes:
  • The dataset originally had 95 predictions; our version has 66 at last count.
  • In the original dataset, the mean "median" prediction was 2046. In our dataset, the mean AI prediction (i.e. the earliest time a person states that AI is more likely than not) is 2065.
  • The median "median" was 2030 in the original dataset, and the median AI prediction in our dataset is 2035.


Basic findings

The median AI prediction is 2035 and the median no-AI prediction is 2033 (see 'Basic statistics' sheet).

The mean AI prediction is 2066 (see 'Basic statistics' sheet). There are quite a few extreme outliers influencing this figure. The mean no-AI prediction is the same to within a year: 2065.
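The gap between the median and the much later mean reflects how strongly a few very distant predictions pull the mean. A toy illustration with invented prediction years:

```python
from statistics import mean, median

# Invented AI-prediction years: most cluster mid-century,
# two are extreme outliers.
years = [2030, 2032, 2035, 2040, 2045, 2050, 2200, 2500]

print(median(years))  # 2042.5 -- barely moved by the outliers
print(mean(years))    # 2116.5 -- dragged decades later by the two outliers
```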

The following figure shows the AI and no-AI predictions.

(from 'Basic statistics' sheet)

The following figures show the fraction of predictors over time who believe(d) that human-level AI is more likely to have arrived by that time than not. The first is for all predictions, and the second for predictions since 2000. The first graph is somewhat hard to interpret, because the predictions were made at very different times.

CDF of AI predictions from all groups at all times
(From 'Cumulative distributions' sheet)

(From 'Cumulative distributions' sheet)
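Cumulative curves like these can be computed from the AI-prediction years alone. A minimal sketch; the example years are invented, not taken from the dataset:

```python
def fraction_predicting_ai_by(ai_predictions, year):
    """Fraction of predictors whose AI prediction (earliest year they
    state AI is more likely than not) falls at or before `year`."""
    return sum(1 for p in ai_predictions if p <= year) / len(ai_predictions)

# Hypothetical AI predictions:
preds = [2030, 2035, 2040, 2050, 2100]

# Evaluate the cumulative curve at decade intervals.
curve = {y: fraction_predicting_ai_by(preds, y) for y in range(2030, 2101, 10)}
```

Plotting `curve` for each subgroup (all predictors, post-2000 predictors, and so on) reproduces the kind of figures shown above.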

Similarity of predictions over time

Earlier predictions appear to be slightly more optimistic than later ones. The correlation between the date a prediction was made and the number of years until AI predicted from that time is 0.13 (see 'Basic statistics' sheet). The six very early predictions all fell below the overall median of 30 years to AI. The largest difference between the fraction of early and of late predictors who predict AI within any given distance in the future is about 15% (see 'Predictions over time 2' sheet). A difference this large is fairly likely to arise by chance.

Time left until AI predictions, by when they were made.
(From 'Basic statistics' sheet)

Cumulative probability of AI being predicted over time
(From 'Predictions over time 2' sheet)

Distance to predictions, for early and late predictors
(From 'Predictions over time' sheet)

Cumulative probability of AI being predicted by a given date, for early and late predictors.
(From 'Cumulative distributions' sheet)

Groups of participants

Associations with expertise and enthusiasm


AGI people in the data are generally substantially more optimistic than AI people. Futurists are generally a little more optimistic than AGI people. Other people are generally substantially less optimistic than any of these groups.


We classified the predictors as AGI researchers, (other) AI researchers, Futurists and Other, and calculated CDFs of their AI predictions, both for early and late predictors. The figures below show a selection of these. 

As we can see, late AGI predictors are substantially more optimistic than late AI predictors: for almost any date this century, at least 20% more AGI people predict AI by then. The median late AI researcher's AI prediction is 18 years later than the median late AGI researcher's. There were only 6 late futurists and 6 late 'other' predictors (compared to 13 late AGI and 16 late AI respectively), so the data for these groups is fairly noisy. Roughly, late futurists in the sample were more optimistic than anyone, while late 'other' predictors were more pessimistic than anyone.

There were no early AGI people, and only three early 'other'. Among seven early AI people and eight early futurists, the AI people predicted AI much earlier (70% of early AI people predict AI before any early futurists do), but this is explained by the early AI people's predictions being concentrated very early in the period.

Cumulative probability of AI being predicted over time, for late AI and late AGI predictors.
(See 'Cumulative distributions' sheet)

Cumulative probability of AI being predicted over time, for all late groups.
(See 'Cumulative distributions' sheet)

 Median AI predictions     AGI    AI     Futurist  Other  All
 Early (warning: noisy)    -      1988   2031      2036   2024
 Late                      2033   2051   2030      2062   2041
Median AI predictions for all groups, late and early. Note that there were no early AGI predictors.

Statement makers and survey takers


Surveys seem to produce later median estimates than similar groups of predictors in the MIRI data, which is largely statements. The difference appears to be on the order of a decade.


Surveys and voluntary statements are likely to be subject to different selection biases. To learn about the extent of these, we below compare median predictions made in surveys to median predictions made by people from similar groups in voluntary statements. 

Note that this is very rough: categorizing people is hard, and we have not investigated the participants in these surveys more than cursorily. The MIRI data on Other predictors is from a very small sample size. The results in this section are intended to provide a ballpark estimate only.

 Survey                   Primary participants  Median in comparable MIRI statements  Median in survey  Difference
 Kruel (AI researchers)   AI                    2051                                  2062              +11
 Kruel (AGI researchers)  AGI                   2033                                  2031              -2
 AGI-09                   AGI                   2033                                  2040              +7
 FHI                      AGI/other             2033-2062                             2050              in range
 Klein                    Other/futurist        2030-2062                             2050              in range
 AI@50                    AI/Other              2051-2062                             2056              in range
 Bainbridge               Other                 2062                                  2085              +23

Note that the Kruel interviews are somewhere between statements and surveys, and so appear on both sides of this comparison.

It appears that the surveys give somewhat later dates than similar groups of people making statements voluntarily. Around half of the surveys give later answers than expected, and the other half are roughly as expected. The difference seems to be on the order of a decade. This is what one might naively expect in the presence of a bias from people advertising their more surprising views.

Relation of predictions and lifespan

Age and predicted time to AI are very weakly anti-correlated: -0.017 (see 'Basic statistics' sheet, 'correlation of age and time to prediction'). This is evidence against a posited bias towards predicting AI within one's own lifetime, known as the Maes-Garreau Law.