Support S.2164 and our proposed legislation, The Responsible AI & Robotics No-Harm Act.
The moment we've been waiting for has arrived. Senator Mark Kelly has unveiled his comprehensive "AI for America" roadmap: a bold federal initiative that could fundamentally reshape how artificial intelligence serves the American people. This isn't another tech industry wishlist. This is a people-first approach that demands accountability, transparency, and shared prosperity from the companies profiting from AI's explosive growth.
We're analyzing every detail, and here's what it means for the future of safe, accountable artificial intelligence.
Senator Kelly's roadmap delivers on a core Safe AI Coalition demand: making AI companies pay their fair share. The proposed AI Horizon Fund represents a seismic shift from the current model where tech giants externalize costs to society while capturing massive profits. Under this plan, leading AI companies would contribute substantial resources specifically for worker retraining, community infrastructure, and responsible AI development.
This is accountability in action. For too long, we've watched Silicon Valley build trillion-dollar empires while leaving displaced workers, strained communities, and overwhelmed public services in their wake. Kelly's framework forces these companies to invest directly in the solutions society desperately needs.
The principle driving this approach couldn't be clearer: "AI should be a benefit to all, not a detriment to most while creating record wealth for a select few," as former U.S. Secretary of Labor Julie Su declared in supporting the roadmap. This is exactly the kind of moral clarity we've been championing.
The roadmap's "people before machines" philosophy directly aligns with our mission to ensure AI empowers rather than replaces human potential. Kelly's plan recognizes that the AI revolution is already transforming the job market, but it refuses to accept that workers should bear the costs alone.
We're seeing real solutions for real people. The proposal specifically addresses preparing young people for AI-integrated careers while providing comprehensive retraining for workers struggling to find employment as their industries evolve. This isn't abstract policy; it's concrete action for millions of Americans facing AI disruption right now.
The broad coalition supporting Kelly's approach tells the story: Arizona AFL-CIO President Jim McLaughlin, university presidents, and civil rights leaders are all backing this worker-centered vision. When labor, academia, and advocacy groups unite behind AI policy, we know we're on the right track.
Kelly's roadmap tackles the elephant in the room: AI's massive energy and infrastructure demands. Data centers powering AI systems are consuming unprecedented amounts of electricity and straining local utilities. Unlike approaches that simply call for extracting more fossil fuels, this plan demands that AI companies contribute to sustainable infrastructure solutions.
This is environmental justice meeting technological progress. Jon Shirley, former president and COO of Microsoft, endorsed the proposal specifically for addressing "the real challenges surrounding AI," including infrastructure demands that "companies or the market cannot solve alone."
We cannot allow AI development to proceed without addressing its environmental impact. Kelly's framework ensures that the companies profiting from AI also invest in the infrastructure their technology requires, rather than leaving communities to bear these costs.
Let's be clear about what we're celebrating: Senator Kelly's roadmap represents unprecedented federal leadership on AI accountability. The emphasis on independent oversight, public benefit funding, and worker protection directly reflects years of advocacy from organizations like ours.
The AI Horizon Fund mechanism is particularly promising because it creates ongoing funding streams tied to AI company revenues rather than relying on one-time appropriations. This sustainable approach ensures that as AI profits grow, so do investments in public benefit.
The focus on transparency and public reporting standards aligns perfectly with our demands for watchdog capabilities and independent auditing of AI systems before deployment.
But we must be honest about where this roadmap falls short of the comprehensive action our moment demands.
Enforcement remains the fundamental weakness. While the roadmap proposes valuable frameworks, it still relies heavily on voluntary industry partnerships and good-faith compliance. History teaches us that voluntary measures from Big Tech are insufficient protection for the public interest.
We need binding regulations with real penalties for non-compliance. Independent audits must be mandatory, not optional. Off-switch capabilities for dangerous AI systems must be legally required, not merely encouraged through industry partnerships.
Privacy protections and civil rights safeguards need significant strengthening. The roadmap doesn't adequately address surveillance applications, algorithmic bias, or the fundamental privacy violations enabled by current AI development practices.
Open-source model releases remain dangerously under-regulated. The plan lacks specific provisions for preventing the release of potentially hazardous AI models without proper safety testing and containment protocols.
You can read the complete roadmap here: https://www.kelly.senate.gov/wp-content/uploads/2025/09/KELLY-AI-FOR-AMERICA_924.pdf
The Safe AI Coalition endorses Senator Kelly's roadmap as a crucial first step, while maintaining our commitment to pushing for stronger, enforceable protections. This proposal represents the most comprehensive federal approach to AI accountability we've seen, but it's not the finish line.
We're supporting this roadmap because it advances core principles we've long championed:
Corporate accountability for AI's societal impact
Worker protection and economic equity
Infrastructure responsibility
Public benefit over private profit
Transparency in AI development and deployment
But we're not stopping here. Our proposed legislation goes further in demanding mandatory safety protocols, enforceable privacy protections, and binding oversight mechanisms that this roadmap only begins to address.
Senator Kelly's roadmap proves that federal action on AI accountability is not only possible but inevitable. The question isn't whether we'll regulate AI development, but whether those regulations will be strong enough to protect the public interest.
Every voice matters in this fight. As this roadmap moves through the legislative process, we need sustained pressure for stronger enforcement mechanisms, comprehensive privacy protections, and mandatory safety protocols.
The AI revolution is happening with or without our input. The choice we face is simple: Will AI serve everyone, or just the powerful few?
Senator Kelly's roadmap takes us closer to the former vision. But reaching that goal requires all of us to stay vigilant, stay engaged, and keep demanding the safe, accountable artificial intelligence our democracy deserves.
Ready to join the fight? Visit our website to learn how you can support comprehensive AI safety legislation and hold tech companies accountable to the public interest.
The future of AI is still being written. Let's make sure it's a future that works for everyone.