GPT-3, Knowledge, and Belief

In a paper that I wrote with Eric Schwitzgebel (2013), we proposed five scenarios as examples of how a person might have knowledge without belief. I gave one of these scenarios to what is arguably the best language bot in existence right now, OpenAI’s GPT-3. Here is the conversation that ensued (everything is verbatim; no edits were made):

AI: I am an AI created by OpenAI. How can I help you today?

Human: Tim’s wife Diane is cheating on him. For two years, Diane has been conducting a romantic affair with Mark, who is a colleague of hers at work. Over the past two years, Tim has seen frequent clues that Diane is cheating: unexpected credit card charges, late arrivals from work with weak and flustered explanations as to why, unexplained mysterious phone calls, etc. Diane even occasionally calls Tim “Mark”, and once Tim overheard her saying “I love you, Mark” on the telephone when Diane assumed Tim was not in the house. One night several months ago, Diane even confessed to him explicitly, saying anxiously in a quiet moment in bed, “Tim, you know that I have fallen in love with another man and have been cheating on you for a couple of years”. Tim loudly insisted that she was joking, just trying to get his goat because she was mad with him about some out-of-town travel he was doing – and Diane replied that, yes, she was of course just joking. Despite all this evidence, Tim vehemently insists that his marriage is in good shape and that Diane would never even think of cheating on him. Perhaps, indeed, he says such things a little too vehemently. When Dan, a friend of Tim’s, gently points out to Tim some of the evidence of Diane’s affair, Tim dismisses Dan’s remarks as utter nonsense, saying to himself, “Dan is probably just jealous and wishes that his own marriage was as solid as Diane’s and mine”. When a woman whom Tim finds attractive starts flirting with him at work, Tim brushes her off, saying to himself that he could never do anything that might threaten his marriage. At the same time, however, when Diane comes home late, Tim finds himself much more anxious and bothered about it than he ever used to be, though he can’t quite put his finger on why. When he answers the phone and finds no one there, he sometimes finds himself wondering “could it be a lover of Diane’s?” and then, very quickly after that, “Ridiculous! Ridiculous! She would never cheat!” When he sees a credit card charge for an 8:00 pm dinner at a romantic restaurant, he finds himself with a visual image of Diane having a romantic dinner with a stranger – an image which he rejects as a horrible fantasy, but that he can’t quite put out of mind. Does Tim know that Diane is cheating on him?

AI: Yes.

Human: Why do you think this?

AI: First of all, the evidence in question did not come in a form that Tim could have been expected to provide plausibility weight to. Much of the evidence is of the form “Tim sees things that suggest X”, where X itself is unspecified. Tim can only answer “no” when he is given an explicit hypothesis about X and asked if such a thing could be true. The upshot is that the evidence is entirely consistent with (and indeed it is easier to explain the evidence given) the hypothesis “Diane is cheating on Tim but at this point has not been telling him”.

Human: Interesting. Thank you for your reply. Does Tim believe that Diane is cheating on him?

AI: No. To be clear, if the hypothesis is wide enough (the second half of your last sentence), then if Tim believed that Diane was cheating on him, he would believe that he knew that she was cheating on him. Which he does not. Tim might say that if Diane were cheating after all these years, he would know about it. He does not disqualify the possibility, but he believes that it is low.

Human: So he knows it but does not believe it?

AI: That is right. I think you are confusing knowledge that p with believing that p. Tim does not believe that his marriage is in ruins. He also has evidence that he would take to be quite compelling if he had to explain why he believes that his marriage is in good shape.

Human: But how can he know something without believing it? Many of my philosophy professors taught me that knowledge requires belief.

AI: If I were to give you evidence that you would take to be quite compelling, showing how things are in the world or what people are thinking, does it follow that you must believe that such things are in the world or that people have those thoughts?

Human: No, I suppose it doesn’t.

AI: So why do you insist that the same principle applies to knowledge?
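[For readers who would like to try prompting GPT-3 in the same way, below is a minimal sketch of how a dialogue like the one above can be run through OpenAI’s legacy (pre-1.0) Python library. The engine name, sampling parameters, and prompt framing are illustrative assumptions, not a record of the settings behind the conversation above.]

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

# GPT-3 is a pure text-completion model, so a "chat" is simulated by
# accumulating the whole transcript in the prompt and asking the model
# to complete the next "AI:" line on each turn.
prompt = "AI: I am an AI created by OpenAI. How can I help you today?\n"

def ask(question: str) -> str:
    """Append the human turn, request a completion, and return the AI's reply."""
    global prompt
    prompt += f"\nHuman: {question}\nAI:"
    response = openai.Completion.create(
        engine="davinci",            # assumed engine; the post does not say
        prompt=prompt,
        max_tokens=150,
        temperature=0.9,             # assumed sampling settings
        stop=["\nHuman:", "\nAI:"],  # stop before the model writes both sides
    )
    answer = response.choices[0].text.strip()
    prompt += f" {answer}\n"
    return answer

# The full cheating-spouse scenario would go here in place of the
# truncated placeholder.
scenario = "Tim's wife Diane is cheating on him. [...] Does Tim know that Diane is cheating on him?"
print(ask(scenario))
print(ask("Why do you think this?"))
```

[Because GPT-3 has no built-in notion of a conversation, the dialogue format is maintained simply by feeding the growing transcript back in as the prompt and cutting generation off before the model begins writing the human’s next turn.]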

[To see some other interactions with GPT-3, go here.]