"Did you use AI for this?" In my case, the answer is almost always yes. There's an awkwardness to it, like I need to preemptively defend how long something actually took.
Here's what "I used AI" typically means for non-trivial work:
A few days ago I wrote a report (with AI) that I thought would take thirty minutes. It took four hours. I debated structure, rewrote paragraphs several times, challenged sentences that felt off, and cut text that diluted the argument.
When writing code, the first prompt gets you 70% of the way there in five minutes. The next hour of tweaking gets you to 85%. If it's common code, you move fast. If it's unconventional, you finish the rest yourself. Either way, the code ships under my name. The bugs are mine to fix.
"Did you use AI for this?" So why does the question feel off? Because it's focused on the wrong thing.
Here's the asymmetry that makes AI valuable: generation is fast, verification is hard. It can take hours to write something from scratch, but only minutes to read it and know whether it's right. I can review ten versions of an argument faster than I can produce one.
This is why expertise matters more than ever. The better I understand a domain, the faster I can verify. I immediately spot when AI hallucinates, misunderstands a technical concept, or makes a logical leap that doesn't hold. I check whether the sources and logic are sound.
This is the difference between content and slop. Slop is output without verification. Content has accountability behind every word. It's also why coding agents are so successful: they verify their output against tools, compilers, and tests. Anywhere AI can design its own checks, results improve.
The bottleneck used to be "can you produce this?" Now it's "can you evaluate this?" The person who can verify fastest can iterate fastest.
Seen this way, the rising-baseline problem comes into focus. When everyone can generate competent first drafts, "good enough" becomes the new mediocre. What matters isn't whether you used AI, it's whether you pushed beyond what anyone with AI could produce.
This is why learning deeply still matters, more than ever. Depth matters less for generation and more for verification and connecting the dots.
We've been here before. Nobody asks "did you use Google for this?" anymore. Every tool in history (pen and paper, books, computers, the internet) once felt like a shortcut. Each one eventually faded into the baseline.
I'm waiting for AI to reach that same place.
Here's what AI currently does in my workflow: it connects fragments in my head faster, surfaces issues I might miss, and fills gaps I can quickly cross-check. It helps me bounce ideas against the rest of the internet. It has the knowledge of the world and the conviction of a four-year-old.
If AI helps me get there faster, good. If it fails, I start from scratch. The accountability sits with me.
Did I use AI to write this? Yes. But it couldn't have written this without me. And I wouldn't want to write it without the tools that help me think.
#AI
Generated with Google Nano Banana.
2025-12-02