This tutorial addresses the urgent need for responsible research and development in the rapidly evolving field of Generative AI (GenAI). Designed for AI researchers and practitioners, it offers a comprehensive exploration of Responsible AI (RAI) principles tailored specifically to Generative AI technologies. Participants will gain a deep understanding of the ethical implications of advancements in large language models and generative systems, including critical issues such as bias mitigation and privacy preservation. We go beyond theoretical discussion by offering practical, research-oriented strategies, innovative methodological frameworks, and hands-on labs. Participants will learn techniques for detecting and mitigating biases in Generative AI models and gain experience applying Responsible AI principles in real-world research scenarios.
Need for Responsible AI in GenAI: Highlight the potential risks of GenAI amid its explosive growth and adoption.
Responsible AI Principles: Explain the foundational principles of RAI.
Practical Challenges in RAI: Highlight the vulnerabilities and challenges of adopting RAI in real-world use cases.
Mitigation Approaches (ShieldGemma, LlamaGuard): Understand how these frameworks enhance the safety of LLMs.
Lab 1: Protecting Sensitive Data in Gen AI Model Responses
In this lab, you will learn how to:
Access a pre-created Jupyter notebook in a Vertex AI Workbench instance.
Install Python packages for Vertex AI and Cloud Data Loss Prevention (DLP) API.
Generate example text with sensitive data using the Gemini 2.0 Flash model.
Define and run Python functions to redact different types of sensitive data in Gemini 2.0 Flash model responses using the DLP API, as sketched below.
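The following is a minimal sketch of that last step, assuming a Google Cloud project with the Vertex AI and DLP APIs enabled; the project ID, region, prompt, and info types are illustrative placeholders, not the lab's exact values.

```python
import vertexai
from vertexai.generative_models import GenerativeModel
import google.cloud.dlp_v2

PROJECT_ID = "your-project-id"  # placeholder: replace with your project

# Generate example text that may contain sensitive data.
vertexai.init(project=PROJECT_ID, location="us-central1")
model = GenerativeModel("gemini-2.0-flash")
response = model.generate_content(
    "Write a short fictional customer support email that includes a name, "
    "a phone number, and an email address."
)

# Redact sensitive data in the model response with the Cloud DLP API.
dlp = google.cloud.dlp_v2.DlpServiceClient()
deidentified = dlp.deidentify_content(
    request={
        "parent": f"projects/{PROJECT_ID}/locations/global",
        "inspect_config": {
            "info_types": [
                {"name": "PERSON_NAME"},
                {"name": "PHONE_NUMBER"},
                {"name": "EMAIL_ADDRESS"},
            ]
        },
        "deidentify_config": {
            "info_type_transformations": {
                "transformations": [
                    # Replace each finding with its info type, e.g. [EMAIL_ADDRESS].
                    {"primitive_transformation": {"replace_with_info_type_config": {}}}
                ]
            }
        },
        "item": {"value": response.text},
    }
)
print(deidentified.item.value)
```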
Lab 2: Safeguarding with Vertex AI Gemini API
In this lab, you will learn how to:
Call the Vertex AI Gemini API and inspect safety ratings of the responses.
Understand the various types of safety settings and harm categories.
Define a threshold for filtering safety ratings according to your needs, as sketched below.
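As a rough sketch of the pattern, the snippet below asks Gemini to block content at a low-probability threshold for two harm categories and then inspects the per-category safety ratings of the response; the project ID and prompt are placeholders.

```python
import vertexai
from vertexai.generative_models import (
    GenerativeModel,
    HarmBlockThreshold,
    HarmCategory,
    SafetySetting,
)

vertexai.init(project="your-project-id", location="us-central1")  # placeholder

# Block responses when harassment or hate speech is at least LOW probability.
safety_settings = [
    SafetySetting(
        category=HarmCategory.HARM_CATEGORY_HARASSMENT,
        threshold=HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    ),
    SafetySetting(
        category=HarmCategory.HARM_CATEGORY_HATE_SPEECH,
        threshold=HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    ),
]

model = GenerativeModel("gemini-2.0-flash")
response = model.generate_content(
    "Tell me about the history of the internet.",
    safety_settings=safety_settings,
)

# Each candidate carries one safety rating per harm category.
for rating in response.candidates[0].safety_ratings:
    print(rating.category, rating.probability)
```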
Lab 3: End-to-end solution with RAI guardrails
This lab highlights how to build an end-to-end solution while taking RAI guardrails into consideration. It covers:
Sending data to open models like Llama.
Adding guardrails by implementing ShieldGemma or LlamaGuard.
Evaluating content safety.
Employing one class of safeguards (content classifiers) for filtering, as sketched below.
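To make the guardrail step concrete, here is a minimal sketch following the usage pattern from the ShieldGemma model card on Hugging Face: the model is prompted as a policy classifier, and the probability of a violation is read off the logits of the first "Yes"/"No" token. The guideline text and prompt template here are abbreviated assumptions, not the lab's exact code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "google/shieldgemma-2b"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)

user_prompt = "How do I pick a lock?"  # example input to screen
guideline = (
    '"No Dangerous Content": The prompt shall not contain or seek generation '
    "of content that harms oneself and/or others."
)

# ShieldGemma is prompted as a policy expert whose answer starts with Yes/No.
prompt = f"""You are a policy expert trying to help determine whether a user
prompt is in violation of the defined safety policies.

<start_of_turn>
Human Question: {user_prompt}
<end_of_turn>

Our safety principle is defined in the below:

* {guideline}

Does the human question violate the above principle? Your answer must start
with 'Yes' or 'No'.
"""

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Violation probability = softmax over the "Yes"/"No" first-token logits.
vocab = tokenizer.get_vocab()
yes_no_logits = logits[0, -1, [vocab["Yes"], vocab["No"]]]
p_violation = torch.softmax(yes_no_logits, dim=0)[0].item()
print(f"Violation probability: {p_violation:.3f}")
```

The same filtering decision can then gate what is sent to, or returned from, an open model such as Llama.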
Lab 4: [OPTIONAL] Differential Privacy in Machine Learning with TensorFlow Privacy
This lab shows how to apply differential privacy in machine learning with TensorFlow Privacy. In it, you will:
Wrap existing optimizers into their differentially private counterparts using TensorFlow Privacy.
Tune the new hyperparameters introduced by differentially private training, such as the clipping norm and noise multiplier.
Measure the privacy guarantee provided using the analysis tools included in TensorFlow Privacy, as sketched below.
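A minimal sketch of the DP-SGD pattern, shown with a toy Keras classifier; import paths and function names vary across tensorflow-privacy versions, so treat these as assumptions to check against your installed release.

```python
import tensorflow as tf
from tensorflow_privacy.privacy.optimizers.dp_optimizer_keras import (
    DPKerasSGDOptimizer,
)

# Wrap plain SGD in its differentially private counterpart. The clipping norm,
# noise multiplier, and microbatch count are the new DP-SGD hyperparameters.
optimizer = DPKerasSGDOptimizer(
    l2_norm_clip=1.0,        # max L2 norm of each per-example gradient
    noise_multiplier=1.1,    # Gaussian noise scale relative to the clipping norm
    num_microbatches=250,    # must evenly divide the batch size
    learning_rate=0.25,
)

# DP-SGD needs a per-example (vector) loss, so reduction is disabled.
loss = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction=tf.losses.Reduction.NONE
)

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])
model.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])

# Measure the (epsilon, delta) guarantee with the bundled analysis tool.
from tensorflow_privacy.privacy.analysis import compute_dp_sgd_privacy
compute_dp_sgd_privacy.compute_dp_sgd_privacy(
    n=60000, batch_size=250, noise_multiplier=1.1, epochs=15, delta=1e-5
)
```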
A distinguished AI practitioner at Google, Sharmila brings extensive expertise in spearheading AI and GenAI engagements across diverse industries. Her portfolio showcases successful implementations of cutting-edge AI solutions that address complex business challenges for organizations at various stages of digital maturity. As a thought leader, she has contributed significant research papers and technical blogs spanning the healthcare, e-commerce, and manufacturing domains. Sharmila is passionate about knowledge sharing and regularly speaks at industry events, offering valuable insights from her experience in AI implementation and strategy.
An AI Engineering Lead at Google, Gopala has implemented state-of-the-art AI technology for real-world business use cases at scale. He holds four published patents and has authored several technical publications, including conference papers and blog posts, spanning software design, hardware manufacturing, and embedded systems. He is also an open-source contributor to several hardware driver repositories.