Who I contacted:
Nishant Shah – Senior Advisor for Responsible AI, State of Maryland
Email: nishant.shah@maryland.gov
How I contacted: Messaged him through our internal DoIT AI Enablement Team chat since I currently work there.
Solomon Abiola – Director, AI/ML Policy & Governance, State of Maryland
Email: solomon.abiola@maryland.gov
How I contacted: Reached out to him through the DoIT AI Enablement Team chat for an interview.
Raymond Bell – Director, AI/ML Product, State of Maryland
Email: raymond.bell@maryland.gov
How I contacted: Sent him a message in our DoIT AI Enablement Team chat to request an interview.
Lauren Maffeo – Senior AI/ML Program Manager, State of Maryland
Contact: Internal DoIT chat
How I contacted: Messaged her through our internal AI Enablement Team workspace.
Dr. Chirag Shah – Professor, University of Washington Information School
Email: chirags@uw.edu
How I contacted: Sent him an email asking to interview him about AI accessibility and equity in higher education.
Interview Questions / Responses (Raymond Bell):
1. How would you describe the issue of digital accessibility and equity in AI?
From my view, digital accessibility and equity in AI means making sure the tools, data, and interfaces we develop are usable and fair for people no matter their background, technical skill, or resources. It’s not just about “can they click it,” but “do they understand what it’s doing,” “can they use it comfortably,” and “is it built with them in mind.”
2. Why do you think this issue matters for organizations like government agencies and universities?
Agencies and universities serve diverse communities with different levels of access and capability, so if we deploy AI systems without thinking about who’s left behind, we widen the gap instead of closing it. Plus, when public institutions lead by example, they can set the standard for inclusive AI rather than reinforcing an uneven playing field.
3. What’s some background or context that helps explain how this issue developed?
Most AI tools were originally built for enterprise or research users, not for general staff in public service or education. Over time, this left gaps in digital literacy and confidence. In government and higher education, where budgets, priorities, and skill levels vary widely, that difference in readiness has created a clear divide in who can make effective use of AI tools.
4. What kinds of impacts do you think this issue has, and on which groups of people?
The impact shows up among employees and students who don’t have consistent training or support, as well as in smaller agencies and schools without dedicated AI resources. When tools are confusing or poorly introduced, some people benefit while others get left behind. That limits innovation and makes digital transformation uneven.
5. What do you see as the main causes of unequal access to AI tools or knowledge?
I’d point to three major causes: variance in technical literacy (both individual and institutional), uneven resource distribution (hardware, connectivity, training), and tool design that assumes a “standard user.” If you assume a high-tech background, you exclude a lot of people.
6. From your experience leading the AI Enablement Team, what steps could make AI tools easier and more equitable for people without strong technical backgrounds?
We’ve found that plain-language documentation, role-based training tied to real job scenarios, and modular learning paths make the biggest difference. In government, we’re not building AI systems from scratch—we’re helping agencies adopt existing tools like Gemini or Copilot responsibly. The goal isn’t to teach everyone advanced AI concepts, but to give clear, practical guidance so employees know what these tools can do safely and effectively in their daily work. Meeting users where they are makes adoption smoother and more inclusive.
Summary of major takeaways:
In my conversation with Ray Bell, he emphasized that making AI accessible and equitable in government is primarily a human challenge, not just a technical one. His focus was on how state agencies promote and support existing AI tools, like Gemini and Copilot, in ways that meet employees where they are. He identified three main causes of unequal access: differences in technical literacy and institutional capacity, unequal resource distribution across departments, and AI tool design that assumes advanced users.

Ray explained that the state’s approach centers on simplifying training and communication rather than reinventing technology. He described how plain-language guides, modular training, and role-based examples help non-technical employees use AI effectively. What stood out to me was how he framed equity as part of deployment, not as an afterthought. Even the best AI tools, he noted, can fail if they’re introduced without support or context.

For my challengemaker focusing on AI accessibility in higher education, Ray’s points remind me that success depends on bridging knowledge gaps through thoughtful training and design. I plan to apply his insights by developing clear, step-by-step materials that help students and educators use AI confidently and responsibly. One question I still have is how universities can measure progress in closing their own “AI literacy” gap over time.
Interview Questions / Responses (Solomon Abiola):
1. How would you describe the issue of digital accessibility and equity in AI?
It’s really about who gets to use these tools and who doesn’t. AI has a lot of promise, but if only people with resources, training, or time can use it well, then we’re not being fair. Accessibility isn’t just about the interface—it’s also about giving people the understanding and confidence to use the tools responsibly.
2. Why do you think this issue matters for organizations like government agencies and universities?
Because we serve everyone. If AI is rolled out in a way that only helps the people already comfortable with technology, we fail our mission. It’s also an ethics question: institutions should think about fairness, privacy, and transparency when they decide how to use AI tools.
3. What’s some background or context that helps explain how this issue developed?
A lot of AI came out of the private sector where the focus was on speed and innovation. When those tools reached public institutions, they met users who needed more support and clearer rules. That’s when the gaps started showing up—people wanted to use AI but weren’t sure how or if they even should.
4. What kinds of impacts do you think this issue has, and on which groups of people?
It hits people who have fewer opportunities to learn new tech, like smaller schools, departments, or staff without training budgets. If we don’t fix that, AI ends up helping the people who already have an advantage. That’s the opposite of what we want technology to do.
5. What do you see as the main causes of unequal access to AI tools or knowledge?
Training and awareness are big ones. There’s also a lack of clear guidance; people don’t always know what’s safe, ethical, or allowed. And sometimes the tools themselves are just built for experts, not everyday users.
6. From your experience shaping policy and governance for AI, what advice would you give educational institutions looking to integrate AI in an inclusive and ethical way?
Start with the “why.” Don’t add AI just because it’s new; add it because it solves a real problem and fits your community. Have clear policies, bring students and teachers into the process early, and talk openly about the ethics: bias, fairness, and data use. If people understand the purpose and boundaries, they’re more likely to use AI the right way.
Summary of major takeaways:
Talking with Solomon helped me see how ethics connects directly to accessibility. He said Maryland’s focus isn’t on building new AI tools but on helping agencies use existing ones in fair and responsible ways. What stood out to me most was his point that public institutions have a duty to serve everyone, not just the people who already know how to use these tools. He also said the biggest challenge isn’t technology itself: it’s awareness and training. If people don’t know what AI can or can’t do, or if they’re unsure what’s ethical, they avoid it or misuse it.

Solomon’s advice to “start with the why” made me think about higher education in the same way. Universities often push out new AI tools without explaining how they fit into learning or fairness. For my project, I want to focus on that piece: helping students and faculty understand AI before expecting them to use it. I liked his idea of building policies and open discussions around ethics instead of treating them as afterthoughts. It made me realize that accessibility isn’t only about design; it’s about communication and trust.

After this interview, I’m curious how colleges can measure whether their approach to AI is not just efficient but also fair for all groups of students.