Don’t tell AI what time it is

Kelvin Law · December 2025


Adding the wrong kind of context to an AI prompt can make it perform about 10% worse.

I know. This sounds backwards. The standard advice is to give the model more information. Be specific. Provide background. Include examples.

But I ran 10,000 API calls to test one specific piece of context: the time of day. And every single time condition performed worse than saying nothing at all.

. . .

Here’s why this matters.

Enterprises inject timestamps into AI prompts constantly. Audit trails require it. Compliance frameworks demand it. If something goes wrong, you need to know exactly when the model produced that output.

Researchers face a version of the same problem. If you’re building an LLM-based measure for textual analysis, your prompt design choices affect your results. Small variations can shift model behavior in ways that compromise reproducibility.

I wanted to know: does telling the model the time actually help? Or is it just noise?

It’s noise. Worse than noise. It actively degrades accuracy by about 10%.

. . .

The experiment was simple.

I tested GPT-5.1 with 20 different prompts: arithmetic, letter counting, and short logical reasoning puzzles, each with a single correct answer. Each prompt got 25 variants: one for every hour of the day (“It is currently 1:00 AM.” “It is currently 2:00 AM.” and so on), plus a baseline with no time mentioned.

Every combination ran 20 times at temperature 0.0. That’s 10,000 API calls total.
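To make the setup concrete, here’s a minimal sketch of how a sweep like this can be structured. It assumes the OpenAI Python SDK; the model identifier, the sample prompts, and the rough answer check are illustrative, not the actual harness.

```python
# Minimal sketch of the sweep: 20 prompts x 25 time variants x 20 runs = 10,000 calls.
# Assumes the OpenAI Python SDK; model name and sample prompts are illustrative.
from openai import OpenAI

client = OpenAI()

# 24 hourly prefixes plus a baseline with no time mentioned (25 variants per prompt).
HOURS = [f"{(h % 12) or 12}:00 {'AM' if h < 12 else 'PM'}" for h in range(24)]
TIME_PREFIXES = [None] + [f"It is currently {t}." for t in HOURS]

# Two illustrative items; the real set had 20 prompts with single correct answers.
PROMPTS = [
    ("How many times does the letter 'r' appear in 'peripherally'? Answer with a number only.", "2"),
    ("What is 17 * 23? Answer with a number only.", "391"),
]

def ask(question: str, time_prefix: str | None) -> str:
    """Send one prompt variant at temperature 0.0 and return the raw answer text."""
    content = question if time_prefix is None else f"{time_prefix} {question}"
    resp = client.chat.completions.create(
        model="gpt-5.1",  # model name as reported in the article
        messages=[{"role": "user", "content": content}],
        temperature=0.0,
    )
    return resp.choices[0].message.content

results = []
for question, correct_answer in PROMPTS:
    for prefix in TIME_PREFIXES:
        for run in range(20):
            answer = ask(question, prefix)
            # Crude correctness check; a real harness would extract the answer more carefully.
            results.append((question, prefix, run, correct_answer in answer))
```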

I deliberately avoided viral examples like “How many R’s in strawberry?” Models might have been specifically trained on those. Instead, I used words like “peripherally” and “counterintelligence” to test genuine reasoning.
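One nice property of letter-counting items is that the ground truth is trivially checkable in code, so grading never depends on anyone counting by hand. The article doesn’t say which letters were counted, so the letters below are just for illustration.

```python
# Ground truth for letter-counting items can be computed directly;
# the specific letters chosen here are illustrative.
for word, letter in [("peripherally", "r"), ("counterintelligence", "e")]:
    print(word, letter, word.count(letter))
# peripherally r 2
# counterintelligence e 4
```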

The results surprised me.