There is growing pressure in headlines, institutions, and tech culture to deploy AI. It is often framed as a race to build faster, adopt earlier, and automate more. But in public systems, where the stakes are high and the margin for error is low, the most important question is not how quickly we can implement AI. It is whether we have taken the time to design for intelligence in the first place.
This focus on purpose and design is guiding how I am preparing for AI Camp this August. The camp is hosted by Cal Poly and AWS and brings together students from California community colleges to solve real-world problems using generative AI.
I am honored to attend, and the opportunity is energizing. But my preparation is not centered on technical tools or model selection. It is centered on something more foundational: clarity.
"Before the code" doesn’t mean before someone types in Python, it means before we leap to technical solutions at all. It means stepping back to design systems where intelligence, human or artificial, actually fits.
Not long ago, I came across a headline from MIT Sloan that read, “Stop Deploying AI. Start Designing Intelligence.” I only saw a portion of the article before hitting a paywall, but the headline said enough. It captured something I had already been thinking about.
AI is a design decision, not an answer in itself.
This is not a rejection of new technology. It is a reminder that systems need to make sense before new tools are added. Designing intelligence means creating environments where intelligence, whether human or artificial, can function responsibly and effectively.
There is a quote I have kept close for a long time:
“If you believe technology can solve your problems, then you don’t understand technology, and you don’t understand your problems.”
— Bruce Schneier and Laurie Anderson
That quote shapes how I approach this work. AI is promising, but it is not a shortcut. It is a tool that only performs well when placed inside a system designed to use it wisely.
Years ago, someone told me that government should never be run like a private company. I agreed then and still do. Government is not built for profit. It is built for public service. That means the values, timelines, and acceptable risks are different.
Still, I believe government can learn from private-sector methods. Not by copying goals, but by adapting approaches.
We can use design thinking to define problems clearly before solving them.
We can borrow agile practices to experiment responsibly within legal limits.
We can gather user feedback to improve access, transparency, and trust.
While private organizations often focus on speed, public systems focus on stability, fairness, and alignment. That does not mean we have to be slow. It means we have to move with intention and integrity.
This concept applies to my daily work in the court.
A chatbot is not useful just because it answers questions. It is useful only if it follows correct procedures, protects confidentiality, draws from approved sources, and reflects legal nuance.
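To make that concrete, here is a minimal sketch in Python of the "approved sources" idea. The source names, refusal message, and function shape are all hypothetical placeholders, not an actual court system; a real chatbot would sit behind retrieval, review, and policy layers.

```python
# A toy guardrail: answer only when the response can be traced to an
# approved source; otherwise refuse. Source names are hypothetical.
APPROVED_SOURCES = {
    "local_rules": "the court's published local rules",
    "self_help_guides": "approved self-help materials",
}

def answer(question: str, matched_source: str | None, draft: str) -> str:
    """Return a cited answer, or a designed refusal when no approved
    source backs the draft response."""
    if matched_source not in APPROVED_SOURCES:
        # Refusing is the intended behavior here, not a failure mode.
        return ("I can't answer that from an approved source. "
                "Please contact the clerk's office for help.")
    return f"{draft} (Source: {APPROVED_SOURCES[matched_source]})"
```

The point is not the Python. It is that the refusal path and the citation are designed in from the start, instead of bolted on after something goes wrong.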
A workflow tool is not smart just because it automates steps. It is smart only if it supports sound decisions, identifies edge cases, and preserves accountability.
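In the same spirit, here is a small sketch of what that could look like: automate the routine path, route edge cases to a person, and log every decision. The field names and rules are illustrative assumptions, not a real court workflow.

```python
# A toy triage step: auto-accept routine filings, flag edge cases for
# human review, and keep an audit trail. Fields and rules are made up.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Filing:
    case_id: str
    fee_paid: float
    has_signature: bool

AUDIT_LOG: list[dict] = []

def route(filing: Filing) -> str:
    """Auto-accept routine filings; flag edge cases for human review."""
    reasons = []
    if not filing.has_signature:
        reasons.append("missing signature")
    if filing.fee_paid <= 0:
        reasons.append("fee not recorded")
    decision = "human_review" if reasons else "auto_accept"
    # Every decision, automated or not, leaves an audit trail.
    AUDIT_LOG.append({
        "case_id": filing.case_id,
        "decision": decision,
        "reasons": reasons,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    return decision
```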
Designing intelligence means shaping how knowledge is applied, how decisions are supported, and how risk is managed. That work happens before any code is written.
Even if a system is outdated or near the end of its life cycle, the design layer must still make sense within that environment. Introducing AI without understanding the context leads to confusion instead of clarity.
My role is not to write code. It is to ask better questions and help shape where the code belongs.
What I bring is a systems lens. I understand how operations flow in public service, where automation helps and where it can backfire, and how to ask the right questions before building anything new.
At camp, I will focus on the deeper layers:
What is the problem we are trying to solve?
Why does this problem matter?
How can AI support better thinking rather than just faster execution?
Public systems do not need hype. They need alignment, clarity, and thoughtful design. They need people who take the time to understand how a solution fits into the system it aims to improve.
The value of AI is not in how advanced it is. It lies in how well it fits the purpose and people it is meant to serve.
When we overlook design, we create noise instead of progress.
Intelligence does not appear on demand. It grows where the design supports it.
That is where I am starting.
AI-assisted, but human-approved, just like any good front office move. ChatGPT was the sixth person off the bench, editing this post. Every take is mine.