Event Information:
Event Title: Maryland AI Community of Practice (MDAI)
Date: Friday, October 10, 2025
Time: 10:00–11:00 AM (Eastern Time)
Location: Virtual (Google Meet)
Host: Maryland Department of Information Technology (DoIT) – AI Enablement Team
Organizer: Lauren Maffeo, Senior AI/ML Program Manager
Registration Confirmation: Attended via calendar registration on my work account (event PDF on file)
Summary of the Event:
The session showed how Maryland is turning responsible AI from an abstract policy into something people across government can actually use and understand. More than a hundred state employees from different agencies joined—people from technology, health, education, transportation, and local offices all in the same virtual room. The atmosphere felt practical and collaborative, not theoretical.
The AI Enablement Team explained that Maryland’s Responsible AI Policy and Implementation Guidance are not just paperwork; they’re step-by-step playbooks. Agencies are expected to define their use cases clearly, classify the level of risk, go through the statewide intake process, and monitor the system once it’s running. The idea is to make sure every AI project serves the public without overstepping boundaries around privacy or fairness.
One of the highlights was the new Governance Card on AI Transcription. It laid out how to use tools like Gemini for Google Meet and Copilot for Microsoft Teams responsibly. The card makes it clear that recording and transcription are low-risk uses, but that doesn’t mean they should be treated casually. Everyone in the meeting has to give consent first (Maryland is a two-party consent state), and private topics like HR issues, legal matters, or anything involving personal data must stay off the record.
The presenters also reminded everyone that AI-generated transcripts count as public records. That means agencies have to treat them like any other official document: secure storage, proper labeling, and compliance with retention schedules. Altogether, the meeting captured a real shift in culture. Maryland isn’t rushing into AI adoption. It’s building a deliberate, accountable system where agencies learn together and keep human judgment firmly in charge.
Biggest Takeaway:
My biggest takeaway from the Maryland AI Community of Practice event was understanding how large-scale coordination makes AI ethics and regulation work in practice. Because my topic for this challengemaker project focuses on AI ethics and regulation, this event showed what that actually looks like when implemented by government agencies. Hearing how Maryland uses policies, intake reviews, and risk classifications to keep AI systems accountable helped me see the importance of structure and oversight. It’s not just about using AI responsibly; it’s about having clear procedures, transparency, and shared standards across departments. Watching more than a hundred professionals from different agencies follow the same ethical framework gave me a better sense of how real governance can turn abstract ideas about “responsible AI” into enforceable, measurable action.
Reflection Prompts:
I learned how the Responsible AI Policy translates into everyday government operations. My issue (ensuring ethical AI adoption) is deeply connected to these implementation steps: intake, classification, monitoring, and transparency. Maryland DoIT’s structured approach provides a real-world model for balancing innovation with accountability.
The session was successful in showing practical governance tools like the AI Transcription card, which made policy concepts tangible. The least successful part was limited interactivity due to the one-hour format; more Q&A time could have helped attendees exchange ideas.
Next time, I would join the AI Day event in Crownsville to network in person and see live demos of AI projects, which would deepen understanding of how other agencies are implementing these frameworks.
This event will help me apply state-level AI governance concepts in future research and policy projects. It showed how principles like equity, transparency, and oversight can be embedded in AI adoption plans, which are skills directly relevant to my studies.
How does Maryland evaluate bias mitigation in deployed AI systems over time?
What process exists for auditing vendor AI tools beyond initial intake?
Could similar AI governance models be adapted for higher education or non-state institutions?