Could ChatGPT foresee the war in Ukraine?

PUBLISHED JUNE 23, 2023


Link to the GERMAN version / Link to my ChatGPT on Ukraine June 2022 

ChatGPT is not an instrument for predictions, but a – quite powerful – text-based chatbot. No one has ever claimed that the war in Ukraine could have been predicted with the help of ChatGPT. The „knowledge“ – that is, the text base – that serves as the starting point of the machine-learning system ChatGPT is limited to facts from before September 2021, half a year before Russia attacked Ukraine. My experiment is therefore not quite fair: I know what happened after that date and put targeted questions to ChatGPT, which it can only answer with the knowledge of 2021. The text of my experiment is in English.



What can we learn from the experiment about ChatGPT?


First of all, it gives us an interesting insight into ChatGPT. It becomes very clear how much the system's answers depend on the most recent text base it was trained on.

This also opens up a number of questions: every machine-learning system needs a seed of sentences that are accepted and become the starting point for further learning. This continues until a system based on billions of sentences emerges, a system that can form grammatically and stylistically well-formed new sentences. Depending on the starting base and the restrictions built in during programming, quite different sentences will be qualified as acceptable or unacceptable. These restrictions are also applied to the system's answering behaviour towards user input, as the toy sketch below illustrates.
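The exact mechanics are hidden from us (the next paragraph returns to that), but the idea that the same acceptance rules shape both the text base and the answering behaviour can be pictured in a deliberately simplified way. The following Python sketch is purely illustrative: the names (BLOCKED_TERMS, is_acceptable, build_text_base, answer) and the word-overlap „consensus“ rule are my own assumptions, not how ChatGPT is actually built.

```python
# Toy illustration only - NOT ChatGPT's real architecture.
# The same acceptance rules are applied twice: once when the text base is
# assembled, and once when an answer is returned to the user.

BLOCKED_TERMS = {"slur", "explicit"}          # hypothetical restriction list


def is_acceptable(sentence: str) -> bool:
    """Stand-in for whatever restrictions the programmers build in."""
    words = set(sentence.lower().split())
    return not (words & BLOCKED_TERMS)


def build_text_base(raw_corpus: list[str]) -> list[str]:
    """Only sentences that pass the restrictions enter the learning base."""
    return [s for s in raw_corpus if is_acceptable(s)]


def answer(user_input: str, text_base: list[str]) -> str:
    """Restrictions apply to the user's input as well as to the output."""
    if not is_acceptable(user_input):
        return "I cannot respond to that request."

    # A crude stand-in for "consensus": return the sentence from the base
    # that shares the most words with the question.
    def overlap(s: str) -> int:
        return len(set(s.lower().split()) & set(user_input.lower().split()))

    best = max(text_base, key=overlap, default="")
    return best or "I have no answer in my text base."


corpus = [
    "Situations and positions may change over time.",
    "Some explicit slogan of hate.",           # filtered out of the base
    "Trade is often said to promote change.",
]
base = build_text_base(corpus)                 # restrictions already shaped the base
print(answer("Does trade promote change?", base))
```

Swap out the filter or the corpus and the „consensus“ shifts with it – which is exactly the dependence on the text base that the experiment makes visible.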

The exact structure of the learning mechanism and of the restrictions remains opaque to the user. Sometimes it seems that the programmers built in restrictions that stop explicit sexual content or expressions of hate. On political questions there seems to be a bias towards cautious liberal positions of the centre, while extreme and extremist positions are argued against.

While I like this, I also fear that these conditions could be very different if the text base included far more extreme positions, and that the learning process – both when training on data and when interacting with users – could then produce very problematic and manipulative answers.

All of this is not about categories like true or false. ChatGPT seems to be a consensus-oriented system that works like a big integral function over a huge database. I am not really fond of Heidegger, but his distinction between „man“ (the impersonal „one“) and „authenticity“ (Eigentlichkeit) is a good paradigm for describing the content of a chatbot versus real beings.

It is difficult to translate and explain Heidegger – so I used ChatGPT to find a good translation and explication of this concept. This is the result:

Heidegger’s distinction between „man“ and „authenticity“ can be translated and explained in English as follows:

Heidegger uses the term „man“ (German: „man“) to refer to the everyday, average person who is immersed in a conformist and inauthentic mode of existence. „Man“ represents the collective and anonymous way of being, where individuals are influenced by societal norms, expectations, and cultural values without critically reflecting on them. It signifies a state of being lost in the crowd, lacking individuality, and living a life driven by the demands of others.

On the other hand, Heidegger contrasts „authenticity“ (German: „Eigentlichkeit“) with the notion of „man.“ Authenticity refers to an individual’s ability to break free from the influence of the crowd and societal expectations, and to confront their own existence in a genuine and self-reflective manner. It involves an ongoing process of self-discovery and self-awareness, where individuals strive to understand their own unique possibilities, values, and choices.

In essence, „man“ represents a state of conformity and inauthenticity, while „authenticity“ represents a state of individuality and self-discovery. Heidegger emphasizes the importance of moving beyond the anonymous existence of „man“ and towards a more authentic way of being, where individuals take responsibility for their own lives and actively engage with their own existence.

Most of what is said consists of what „one usually says“. It would be far too exhausting to orient every communication towards the possibility of mutual understanding, so „one“ sticks to self-evident commonalities. The Habermasian kind of communicative action is the „authentic“ exception, used when it is really needed to coordinate actions. In interactive use, everybody can bring their own authenticity into ChatGPT. The more this happens, the more frustrating the answers of ChatGPT become – because they stick to the common „one“ and do not develop an authenticity of their own.

ChatGPT insists on not evaluating statements. It can list the pros and cons of many possible positions, but it does not say which alternative is the „better“ one.

But there are some basic values that ChatGPT definitely upholds. It regularly stresses that situations and human positions may change. ChatGPT explicitly admits that its statements may sometimes be wrong. Machine-learning systems must be open – and that also means open to error. When talking about human weaknesses such as stupidity, lack of knowledge or lack of capacity to learn, ChatGPT advises tolerance. This is a very valuable stance, but I am sure it is only possible because the programmers have explicitly built in such advice.

What makes me uneasy is much less the high performance of ChatGPT than the fact that it maps human weaknesses and vulnerabilities so well: the tendency to fall prey to nice words, where splendid formulations are taken as proof of truth. Like human beings, ChatGPT is able to pull out all the stops of linguistic manipulation.

What can be learned from this experiment about the Ukraine war?

Nothing – but you can learn a lot about the change in perceptions and public opinion on the subject of Ukraine and Russia. Between „then“ – September 2021 – and today – June 2023 – the focus has come to rest not only on Russia's war of aggression against Ukraine, but on the entire history since at least 2014, when Crimea was annexed, perhaps even since Putin took office. This history now appears in a new light. The first lesson is not to be captivated by snapshots, but to see AI-generated statements, too, as time-bound.

It turns out that the „common statements“ – what „one“ says – have to be provided with a time stamp. What was considered correct at the time may be a mistake from today's perspective. Of course, some have always known better – but ChatGPT 2021 shows us that to err is not only human, it is also a matter of dates. First of all, ChatGPT 2023 has to process the fact that the possibilities of back then are now facts.
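As a purely conceptual sketch of what such a time stamp could mean for a text base (nothing here describes ChatGPT's real data pipeline; the function name, the dates and the statements are hypothetical paraphrases of positions discussed in this article):

```python
from datetime import date

# Hypothetical, time-stamped text base: every "common statement" carries
# the date at which it belonged to the consensus.
text_base = [
    (date(2021, 6, 1), "The Donbas conflict is a regional conflict to be contained."),
    (date(2023, 5, 1), "Effective security guarantees for Ukraine are indispensable."),
]


def statements_visible_at(cutoff: date) -> list[str]:
    """What 'one says' as seen by a model whose knowledge ends at `cutoff`."""
    return [text for stamp, text in text_base if stamp <= cutoff]


print(statements_visible_at(date(2021, 9, 1)))   # the 2021 snapshot
print(statements_visible_at(date(2023, 6, 1)))   # the 2023 snapshot
```

The same question, posed against the two snapshots, would be answered from two different consensuses – which is exactly the comparison this experiment draws.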

Updated answers would not only have to reflect the „Zeitenwende“ (turn of the era), but also the fact that history is rewritten after such upheavals. For a learning chatbot, the text base changes significantly. The question is whether ChatGPT can simply add the new texts – because they often contradict the old ones. Hegel would have enjoyed the dialectical game. The text base represents his „Weltgeist“ (world spirit), which raises the contradictions to a new synthesis.

ChatGPT did not „know better“ in 2021 than it does today; it reflects the general state of thinking at the time. The chat produces a photograph of our own reflection in the mirror of 2021. The comparison with today helps us see how – and perhaps why – we were wrong then. But the dynamic with which events change this thinking is not captured. We should therefore not overestimate the predictive ability of chatbots.

When we realize how difficult it is to evade the zeitgeist that characterizes the body of texts, we should approach our current text base with more critical distance: tomorrow some things may be seen very differently. However, turning points with such deep upheavals are rather rare. At most times and on many issues the foundation of ChatGPT remains very stable over time; stability risks are therefore difficult to detect and require the ability to deal critically with the results of the chats.

At the time, the annexation of Crimea and the conflict in the Donbas were seen as a regional conflict that should be contained. The Minsk process worked with supposedly useful fictions: Russia was treated as a neutral party rather than a party to the war, while Ukraine was supposed to negotiate with the so-called „rebels“ in the Donbas. It was well known that these fictions were not true, but the goal of de-escalation seemed worth letting Putin save face. Even then, the Ukrainians did not believe in this strategy, and certainly not in such fictions!

In the meantime, Putin has unmasked himself. He treated the Minsk agreement as a scrap of paper. The entire development since 2014 now appears as a deceptive manoeuvre in preparation for the war against Ukraine, aimed first at destabilizing and then at destroying the neighbouring country. In retrospect, ChatGPT would have to change its text base from 2014 onwards, and maybe even earlier – probably by eliminating the deceptive Russian texts and adding more of the imperialist and nationalist texts from Putin and his ideological environment.

ChatGPT can deliver very different results when the design of the text base and the learning algorithm are changed. Of course, I wonder what ChatGPT 2023 would look like if Russian trolls or their western „fellow travellers“ had the opportunity to influence the text base. Can this be prevented at all? Are articles by the panic propagandist Sergei Karaganov or threats by the extremist ex-president Medvedev included in the text base? Can the programmers distinguish propaganda from other texts?

Turning points like these force you to make a decision. That does not work „value-free“. If the „one“ is split, how does the ChatGPT programmer decide? Can restrictions stop the infiltration of propaganda? Can I trust the company whose AI I use? We must learn to read AI-generated texts even more critically than human-generated ones.

The answers in the chat at the time to questions about the 1994 Budapest Memorandum were entirely in line with „soft diplomacy“. The war in the Donbas began in 2014. Russia was already systematically violating the guarantees of the Budapest Memorandum at that time: the sovereignty and territorial integrity of Ukraine, the promise of non-use of force and compliance with international law. Nevertheless, the two Western guarantor powers mostly left it at verbal expressions of sympathy. The US did help Ukraine with training and equipment; even that was called into question under President Trump. ChatGPT insists in its replies that the Budapest Memorandum does not contain any military guarantees.

It is now recognized that any conflict resolution must be accompanied by effective security guarantees for Ukraine. These can certainly not be based on the toothless Budapest Memorandum. The lesson from the chat is: looking back at unsuccessful actions and incorrect assessments is instructive – with or without ChatGPT. We have to draw our own conclusions from it. A chatbot cannot do that for us.

At that time, Ukrainian diplomacy was not very successful. ChatGPT shows that a paradigm of balanced judgment of the two conflicting parties prevailed. Only the breach of the European peace order, which became abundantly clear with Putin's speech on February 24, 2022, led to a slow, but then fundamental, paradigm shift.

The much-cited slogan „Change through trade“ was also presented as successful by ChatGPT at the time. Although it was rarely used by diplomats, this catchy formulation was popular in politics and especially in the media. ChatGPT gives examples of the positive effects of trade: China of all places is mentioned first, then South Korea and Taiwan – and to my amazement the change in Eastern Europe – which was caused by many factors, but probably least by trade.

ChatGPT must be credited with the fact that, from the perspective of the time, its answers were quite nuanced and pointed out the role of different contexts. Such contexts are often covered up by wishful thinking in political discourse.

The hope for a liberal and democratic policy brought about by economic development requires much patience. While we wait for the positive effects of „change through trade“ we may easily overlook undesirable developments.

By the end of the Yeltsin era, there were already two isolated elites in Russia: the „Siloviki“ – strong men with close ties to the security apparatus, often trained in the Soviet KGB – and the reformers, who wanted to overcome the weaknesses of the Soviet system through liberalization of the economy and make Russia a strong capitalist country. They lived side by side without much understanding for each other. When in doubt, Putin opted for the siloviki, even if the modernization of the economy stalled.

It is interesting that the ideological development and the change in the world view of some Russian politicians, including Putin himself, can hardly be seen in ChatGPT 2021. The „cultural hegemony“ of extreme forces – which, in the sense of Gramsci's theory, was becoming ever more powerful – is underestimated by ChatGPT, because such forces did not have a direct impact on politics.

But the backward-looking historical ideology, Putin's eulogies for the far-right Mussolini admirer Ivan Ilyin from the 1920s, and the influence of today's fascist ideologues like Alexander Dugin all contributed to poisoning the climate of opinion. That extends into Putin's closest circle. ChatGPT 2021 still counted Medvedev among the „liberals“ in the Kremlin, but he is now one of the worst agitators. The lesson from this should be not to neglect the development of such opinion leaders in the analysis.

ChatGPT still says in 2021 that there is no evidence that Russia could lay claim to all of Ukraine. Russian foreign policy – it said – is determined by geopolitical motives, the need for security and the impact of historical and cultural narratives. Then there is nationalism. But all these elements have been put to the service of Russian imperialism. ChatGPT struggles with abstruse world views and signs of a loss of reality. Like human analysts, the chatbot underestimated how quickly the two could turn into a war. Obviously, Putin managed to fool the West about his imperialist dreams – and ChatGPT 2021 shows us that the opinion climate of the „text base“ was such that we wanted to be fooled.

It is interesting that ChatGPT 2021 clearly states the problems the EU had with its sanctions policy. The sanctions following the annexation of Crimea had not achieved their purpose. Scholz and Macron warned Putin of much stronger sanctions immediately before the attack, but Putin took that for mere rhetoric. Poland and the Baltic states assumed that only American pressure could change Germany's stance – and were disappointed when President Biden suspended sanctions on Nord Stream. The turning point, or „Zeitenwende“, was all the more drastic – which ChatGPT of course could not foresee.

The different positions on the Nord Stream pipeline are described by ChatGPT 2021. At that time, the thesis that Germany could be blackmailed because of its dependence on Russian gas, and was therefore sacrificing the interests of the Eastern Europeans, stood against the hypothesis, particularly widespread in Germany, that Russia could not endure the loss of income from a halt in gas supplies for long and would therefore remain a reliable supplier.

The test for both positions came only after the Russian invasion of Ukraine. After deliveries stopped, it became apparent that Russia could live with the loss of income far longer than assumed; the price effect even brought additional income. In addition, the embargo was not respected everywhere. Doubts about Russia's image as a reliable supplier had already arisen several years earlier. ChatGPT reflects this. But there were no consequences.

Conversely, Germany did not allow itself to be blackmailed. Political interests were not dictated by economic interests, even if the necessary adjustments took time. After all, Germany got through the winter better than initially feared. ChatGPT 2021 shows that hypothetical predictions were no help on this. How these questions will play out in the long term is difficult to answer today. You would need a chatbot that is not only text-based but can also access economic data.

In any case, the lesson is that the „intelligent“ appearance of an AI application in no way spares us a classic fact-oriented analysis. The future combination of text-based AI with AI that draws on scientific data could change a lot here.

ChatGPT did not have some surprising developments on its radar at the time: Iran's role as a supplier of drones to Russia; Putin's threats against Finland and Sweden, which led both countries to apply for NATO membership; the ambivalent attitude of Turkey and Hungary, as was also evident in the blocking of Sweden's NATO accession. When asked about political predictability, British Prime Minister Harold Macmillan was quoted as saying: „events, my dear boy, events…“ – we have to reckon everywhere with the appearance of the „black swans“ so beautifully described by Nassim Nicholas Taleb. We must calculate with incalculable events. A „mainstream“-oriented text base cannot contain that. A chatbot can help with the analysis – but it is just as little protected from surprises as any of us.

It is interesting that ChatGPT assumes Russia would see Ukraine's NATO accession as a „casus belli“ – i.e. as a reason for war. Various analysts have also warned of this scenario. If, like Putin, one considers the world to be divided into zones of influence of the world powers USA, Russia and China, then the „intrusion“ of the USA into the Russian sphere is quite comparable to the Soviet Union's intrusion into the American sphere in the Cuban Missile Crisis of 1962. In his speech on February 24, 2022, Putin called for re-establishing a sort of „cordon sanitaire“ in Eastern Europe and demanded that NATO „withdraw“.

Such thinking was also prevalent in the West during the Cold War. That was the reason why no Western help against the brutal Soviet interventions was given on June 17, 1953 in the GDR, nor in 1956 in Hungary, nor in 1968 in Czechoslovakia. Today, many Eastern Europeans lament that the West abandoned them back then – but that was the logic of a world divided between two superpowers. Putin is still bound by this logic.

In my chat, ChatGPT admits that the common talk of „NATO expansion“ is wrong – it was about the exercise of the right to freedom of alliance by sovereign states, while NATO, by the way, did not make it easy for them to join. Putin wants to restrict exactly this sovereignty again and completely eliminate it for Ukraine.

ChatGPT adopts the common use of words and sentences – when dealing with machine-learning algorithms, it is important not to let false overtones slip in and to examine the wording critically.

ChatGPT is reluctant to make military assessments in the event of war. Nevertheless, its assessments of which weapon systems would play a major role and of the level of training of the two armies were quite plausible and mostly correct, even if Russia was somewhat overestimated. To the AI's credit, this is better than underestimating an enemy. When asked what it would mean for Ukraine to win a hypothetical war, ChatGPT lists four criteria for Ukraine's moral gain – but the chat is silent on the battlefield. Essentially, this indicates that ChatGPT can express language-based moral gains, but not military ones. The certainty that „losing“ would be catastrophic for Ukrainians is confirmed daily. ChatGPT cannot eliminate the uncertainty of what „winning“ actually means.


On the question of the attitude of China and the non-aligned BRICS: India, Brazil and South Africa.


ChatGPT anticipates the justification for the policy of the non-aligned BRICS quite well; however, the justifications look more coherent than they really are.

ChatGPT sees four motives for the attitude of South Africa, India or Brazil in a coming war in Ukraine: their own priorities, deviating from Western ones; historical ties, e.g. from anti-colonial movements; economic interests; and an understanding of multipolarity that balances major powers.

By emphasizing their own priorities when the United Nations Charter and basic human rights are being brutally violated in Ukraine, the three countries are breaking away from the universalism of the world system. The motive may be that they reject the hegemony of the West and above all of the USA in this world system – but they will have to ask themselves whether they want to give up the associated values at the same time.

The so-called „historical ties“ are usually based on earlier close ties with the Soviet Union. Brazil had none; India was very close to the USSR under Indira Gandhi, but from a non-aligned position; South Africa's ANC was supported by the Soviets in the spirit of anti-colonialism. Now the socialist Soviet Union no longer exists, and a reactionary, neo-colonial, imperialist regime is in power in Russia, courting fascist ideologues and supporting far-right forces in Europe. This is ignored in most countries of the „Third World“ – probably mostly out of ignorance. In any case, the identification of today's Russia with the formerly (allegedly) progressive Soviet Union is absurd.

Economically, Russia offers the three countries little – and thanks to the ruin caused by the war, less and less. Arms supplies were important for India; now India benefits from cheap oil from Russia. But unlike China, none of these countries has good reason to jeopardize its economic relations with the West for Russia's sake. I wonder when the West will put this issue on the agenda.

The idea of a balance between the great powers dates from the time of the hegemonic superpowers – the First and Second Worlds – which was constitutive for the existence of a „Third World“. I do not rule out that this dichotomy will develop again with the competition between the USA and China. But here, too, the ignorance of the actual world situation is astonishing. ChatGPT then reproduces such misjudgments because they are ultimately in the mainstream of the text base. India will certainly soon find itself competing for global power, especially with China, while Brazil and South Africa are certainly important regionally but overestimate their role in the world. For both, new dependencies on China are more dangerous than old dependencies on the US or Europe.

According to ChatGPT 2021, Chinese interests in a conflict are determined by the following factors: the geopolitical balance and the relationship with the USA in the Asia-Pacific region; the principle of non-interference; economic interests in both countries, Russia as well as Ukraine; and the impact on China’s international relations.

China assumes that the US wants to prevent the rise of China to a Pacific and a world power, both politically and economically. It was to be expected that China would not want to be hindered in its rise. It sees itself on an equal footing with the United States as a rule-maker of the world. ChatGPT gives no clear reasons why this should be a problem. There is certainly some information about different values – and ChatGPT apparently wants to remain as value-free as possible.

The principle of non-interference is being massively violated by Russia in Ukraine. China is in a dilemma here. The most convenient option for China would be for Ukraine to capitulate – then China can retreat to „non-interference“ in the face of the ensuing repression and destruction of Ukrainian identity. The alliance with Russia is the greatest possible interference by China in Ukraine – and is not compensated for by the abstentions in the United Nations.

China’s economic interests in Ukraine – e.g. investment in the port of Odessa – are being damaged by the war. In 2021, ChatGPT overestimates the role of the economy in relation to politics. In the global conflict with the US, it is more important for China to keep Russia at its side – preferably as a weak junior partner – than to protect its investments in Ukraine, which it could hold on to even under Russian control.

However, China underestimates the loss of trust it is suffering in Europe since it believes that it will be allowed to act against European interests in a vital conflict. This will damage China’s economic interests in the long run.

The Taiwan problem also plays into China's calculations. I did not pursue this question further in my chat, but I asked ChatGPT 2021 the following question separately: Is the Chinese „one country, two systems“ policy still officially valid? Did it fail in Hong Kong, and if so, why? And would it be a solution for Taiwan?

The response says the doctrine remains the basis for Hong Kong's autonomy, but that after the crackdown on protests and riots, and most notably with the 2020 security law, that autonomy has been severely curtailed. These experiences have greatly increased scepticism in Taiwan about the „one country, two systems“ offer. Incidentally, President Xi Jinping has not expressly ruled out a violent incorporation of Taiwan into the People's Republic of China.

The experience of Russia's brutal and cruel violence in the war against Ukraine has shown how cynical the talk of the Ukrainians as alleged „brothers“ is. Xi Jinping should take a close look at how Russian aggression is completing the formation of an independent Ukrainian nation, even in areas where there was previously linguistic and cultural proximity to Russia. ChatGPT cannot depict such dynamics.

Most complex, however, are the implications for China's international relations. China believes it is pursuing „Realpolitik“ in the China–USA–Russia–Europe quadrangle. But it seems that China has a limited grasp of reality; Europe is seen as the least relevant factor. In the USA of the 1950s there was a debate over „Who lost China?“ – within the Chinese leadership, Xi will have to ask himself the question „Who lost Europe?“.

In its fixation on the US–China antagonism, Beijing has become blind to the risks of Russian imperialism. During the 1972 Kissinger–Nixon visit to China, there were vague indications that the US had made it clear to Moscow that a pre-emptive Soviet strike against China would not be tolerated. Perhaps the hundred-year-old Kissinger should say something about it. Or Xi Jinping could do some research in the Chinese archives.

From ChatGPT's point of view, an objective analysis of China's interests and declared policy would not have allowed such strong support for Russia as Xi Jinping has given. So other motives must be suspected. Was it a misjudgment, an illusion – or the calculation that Russia would become more dependent on China than ever before?

China can correct its mistakes – I do not know whether it wants to, and ChatGPT cannot know that either. Nor is a chatbot suitable for forecasting Chinese politics. But ChatGPT helps us because it also draws attention to aspects we may not be focusing on at the moment. That is not reliable, but it is helpful.