*by invitation only
Executive VP for Research and Professor of Computer Science, Columbia University (Co-Chair)
Professor of Computer Science, University of Oxford (Co-Chair)
Deputy Assistant Director, NSF
Executive Chair, EPSRC
Director, National AI Initiative, White House
Chief Scientific Adviser, EPSRC
Zoubin Ghahramani, Google, University of Cambridge, Alan Turing Institute
Henry Kautz, NSF
James Dracott, EPSRC
Kathryn Magnay, EPSRC
Kedar Pandya, EPSRC
Liam Boyle, EPSRC
Nina Cox, EPSRC
Liz Kebby-Jones, EPSRC
Rob Hicks, EPSRC
Roxanne Nikolaus, NSF
Vivienne Blackstone, EPSRC
Wendy Nilsen, NSF
Both Days:
15:00 - 19:00 UK BST
10:00 - 14:00 US EDT
07:00 - 11:00 US PDT
1) Two-Year Horizon Programs: What can be funded now to make a scientific or societal impact in AI in the next two years?
a. What AI challenges are important to solve immediately?
b. What potential breakthrough in AI could we make if we just had more funding in the next two years?
c. What AI challenges can we solve or make significant progress on in the next two years?
2) Long-Term Programs: What are important directions for AI research for the long-term future?
a. What deep scientific questions should the AI community tackle?
b. What potential breakthroughs in AI could we make if we had long-term sustained research funding?
c. What problems are best, or can only be, addressed by academia, i.e., those that go beyond industry's time horizon?
d. Given the relative strengths of the US and UK, what should we be working on that adds maximal benefit to both nations by drawing on the strengths of each?
3) Big AI and Small AI: Many recent advances in AI have relied on Big Compute (e.g., massive GPU clusters) and Big Data, and as such, have largely been developed within industry.
a. Is “Big AI” here to stay? If so, how can academia participate and contribute, and what is the role of government research funding? What then is the role of academic AI research, given that academia can and should think further ahead than industry can today?
b. Assuming Big AI is here to stay, what is the equivalent of the Large Hadron Collider for AI? With government support, should we build one (or two)? How can we ensure it is an open and shared facility? Or, should new models of engagement between academia and industry, perhaps incentivized by government funding, provide an alternative approach?
c. Looking ahead, is there the equivalent of “the end of Moore's Law” for AI, where adding ever more compute and more data will yield diminishing returns? Will AI inevitably be a Big Science, as astronomy and biology are?
d. Even if AI is a Big Science, is academia's niche to focus on Small AI, especially as we want to reap the benefits of AI on small data and/or small compute (e.g., resource-impoverished devices at the edge)?
4) Increasing and Diversifying Talent: How can we better “democratize AI” (US terminology) or “level up in AI” (UK terminology)?
a. What programs can the US and UK fund to cultivate more AI talent, especially in traditionally underrepresented and underfunded groups (geographic, gender, racial, socio-economic)?
b. Are there any opportunities for collaboration between the US and UK in increasing the talent pool at all educational levels?
c. How can we best facilitate the porosity of people (students, postdocs, faculty) between nations? What logistical, administrative, or governmental impediments can we remove?