Investigators: A.C. Fong, Ajay Gupta, Steve Carr, and Shameek Bhattacharjee.
Student Members: Sirwe Saeedi, Kyle Hammerberg, and Anubhav Rawal.
The basic premise of this project is captured by the following hypothesis:
Accessible, multimodal experiential learning can promote AI readiness that enables multidisciplinary STEM researchers to benefit maximally from AI technologies.
This was an ambitious proof-of-concept study aimed at rapidly testing our hypothesis. Initial learning modules were developed quickly to complement and leverage existing educational and research resources. A balanced approach was taken, with equal emphasis on theory and hands-on activities; the balance also extended to coverage of what AI can and cannot do. Specifically, we developed accessible learning modules to instill the concepts and practice of safe, secure, and reliable (SSR) AI in current and future users of advanced cyberinfrastructure (CI). The modules are loosely coupled, offering multiple entry points so that learners can mix and match to acquire the skills and knowledge that suit their needs. They also support multi- and mixed-mode learning, in any combination of formal classroom instruction and informal self-paced study. Both positive and negative examples, drawn from different STEM disciplines, give learners a balanced view of AI.
Safety, security, and reliability (SSR) are three pillars that form a sturdy tripod supporting trust in AI technologies. Trust, in turn, is fundamental to effective human-AI collaboration. Harmonious and productive human-AI collaboration has the potential to alter the course of human history, for instance by enabling scientists and engineers to tackle extremely complex and impactful problems. AI technologies can also profoundly change the ways people live, work, and play. Examples include climate change mitigation, waste management, fair and equitable delivery of healthcare, transportation, and educational services, clean water and food production, development of smart X (X = cities, factories, infrastructures, ...), robotic pets and companions, and detection of harmful content and substances in virtual and physical worlds.
The rapid development of the modules, and the fact that AI is a fast-moving field, mean that there is always room for improvement. This website, and the materials it contains, are therefore under construction and continued refinement. For example, more hands-on activities will be developed and made available on GitHub and/or accessible via Colab. We welcome comments, suggestions, and contributions from like-minded researchers and educators. Please direct all inquiries to any of the core investigators.
Foundational Modules
Covariance_and_Correlation.pdf
LinearAlgebraPrimerConcepts.pdf
REU_Intuition_behind_Regression.pptx
BiasVariance-Tradeoff-Regularization-Cross-Validation.pptx
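To give a flavor of the hands-on side of the foundational modules above (in particular the bias-variance tradeoff, regularization, and cross-validation module), here is a minimal illustrative sketch, not taken from the module materials themselves: it uses cross-validation to compare weakly and strongly regularized ridge models on a synthetic dataset. The data, polynomial degree, and alpha grid are arbitrary choices made only for illustration.

# Illustrative sketch (not part of the course modules): k-fold cross-validation
# exposing the bias-variance tradeoff for ridge regression with polynomial features.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=60)   # noisy sine curve

for alpha in [1e-4, 1e-1, 10.0]:                          # weak -> strong regularization
    model = make_pipeline(PolynomialFeatures(degree=10), Ridge(alpha=alpha))
    scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error")
    print(f"alpha={alpha:g}: CV MSE = {-scores.mean():.3f}")

Running the sketch shows the held-out error rising when regularization is either too weak (high variance) or too strong (high bias), which is the tradeoff the module covers.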
Intermediate Modules
Advanced Modules
Introduction_to_AdversarialAttacksNomenclature_MLSecurityWeaknesses_M9.pptx
PoisoningClassificationModels.pptx
VulnerableAI_inCrowdSensing.pptx
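In the spirit of the adversarial-attack modules above, the following is a minimal illustrative sketch, again assumed rather than drawn from the module materials: a fast gradient sign method (FGSM)-style perturbation against a tiny logistic-regression classifier. The weights, input point, and epsilon are arbitrary values chosen only to demonstrate the idea.

# Illustrative sketch (assumed example, not taken from the modules): an FGSM-style
# input perturbation that flips the prediction of a small logistic regression model.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([2.0, -1.0])        # weights of a (pretend) trained classifier
b = 0.1
x = np.array([0.5, -0.4])        # a clean input the model classifies as positive
y = 1.0                          # its true label

# Gradient of the cross-entropy loss with respect to the INPUT x:
# dL/dx = (sigmoid(w.x + b) - y) * w
grad_x = (sigmoid(w @ x + b) - y) * w

# FGSM step: move each feature by epsilon in the direction that increases the loss,
# i.e. along the sign of the input gradient.
epsilon = 0.6
x_adv = x + epsilon * np.sign(grad_x)

print("clean score:", sigmoid(w @ x + b))        # ~0.82 -> classified positive
print("adversarial score:", sigmoid(w @ x_adv + b))  # drops below 0.5 -> label flips

The same sign-of-gradient idea, applied to image pixels instead of two features, is what makes imperceptibly perturbed inputs fool much larger models, which is the phenomenon the advanced modules examine in detail.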
Link to project GitHub