Objective:
To give students exposure to building trustworthy models using Explainable AI methods.
Description:
The speaker elucidated the concept of Explainable AI (XAI), emphasizing its role in making AI systems transparent and understandable to users, and highlighted the need for XAI techniques to enhance trust and empower users.
She explained different Explainable AI techniques such as LIME, SHAP, and Anchors. Case studies and examples were presented to illustrate how XAI is applied in various domains, including healthcare, finance, and autonomous vehicles. Attendees gained insights into how Explainable AI techniques are integrated into practical systems to improve transparency and accountability.
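The local-surrogate idea behind LIME can be sketched in a few lines. The snippet below is a minimal illustration rather than the actual lime library: the synthetic data, the toy RandomForestClassifier, and the explain_locally helper are all assumptions made for this example.

```python
# Minimal LIME-style local explanation sketch (illustrative only):
# perturb an instance, weight perturbed samples by proximity, and fit a
# linear surrogate whose coefficients act as local feature importances.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Toy "black-box" model trained on synthetic data (stand-in for any model).
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
model = RandomForestClassifier(random_state=0).fit(X, y)

def explain_locally(model, x, n_samples=1000):
    """Fit a proximity-weighted linear surrogate around instance x."""
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))    # perturbations
    probs = model.predict_proba(Z)[:, 1]                       # black-box output
    weights = np.exp(-np.sum((Z - x) ** 2, axis=1))            # proximity kernel
    surrogate = Ridge(alpha=1.0).fit(Z, probs, sample_weight=weights)
    return surrogate.coef_                                     # local importances

x = np.array([0.2, -0.1, 0.0, 0.3])
coefs = explain_locally(model, x)
print(coefs)  # features 0 and 1 should dominate, mirroring the true rule
```

The real LIME and SHAP libraries add careful sampling, kernels, and visualizations on top of this basic perturb-and-fit idea.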
Outcome:
Students were able to understand the need for Explainable AI techniques in developing trustworthy models.
Objective:
To give students an opportunity to develop solutions using AI and XAI, and to provide a platform to
exhibit their work through posters.
Description:
The students were asked to prepare posters on societal solutions using AI and XAI. Students prepared
posters for problem statements such as disease prediction and payment fraud detection.
Students explained the different machine learning algorithms and Explainable AI techniques (LIME, SHAP, Anchors) used in their work. This gave the students exposure to how XAI is applied in various domains, including healthcare, finance, and autonomous vehicles.
Outcome:
Students were able to illustrate Explainable AI techniques by developing trustworthy models.
Objective:
To preprocess unconventional datasets, tools, or resources to solve AI-related challenges.
Description:
Students worked on unconventional datasets or data sources that may not typically be used in AI projects (e.g., data scraped from social media, obscure public datasets) and cleaned the data. The targeted students were at beginner/intermediate level in ML/AI.
Outcome:
Students were able to use Python to clean the datasets and to illustrate the different methods involved in preprocessing.
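The kind of cleaning the students performed can be sketched with pandas; the DataFrame and column names below are hypothetical stand-ins for the datasets actually used.

```python
# A minimal preprocessing sketch showing common cleaning steps:
# duplicate removal, type fixing, missing-value imputation, text normalization.
import pandas as pd

raw = pd.DataFrame({
    "age": ["25", "31", None, "25", "40"],
    "city": [" Delhi", "delhi", "Mumbai ", " Delhi", None],
    "income": [50000, None, 72000, 50000, 61000],
})

df = raw.drop_duplicates()                             # remove exact duplicate rows
df["age"] = pd.to_numeric(df["age"], errors="coerce")  # coerce strings to numbers
df["age"] = df["age"].fillna(df["age"].median())       # impute missing numerics
df["income"] = df["income"].fillna(df["income"].median())
df["city"] = df["city"].str.strip().str.title()        # normalize text casing/spaces
df = df.dropna(subset=["city"])                        # drop rows still missing city
print(df)
```

Real scraped data typically also needs deduplication on fuzzy matches, outlier handling, and encoding of categorical columns, but the pattern above covers the first pass.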
Objective:
The primary objective of the programming puzzle-solving event was to enhance students' problem-solving skills by challenging them to solve a variety of coding puzzles. This event aimed to foster critical thinking, encourage collaboration, and promote the practical application of programming concepts in a competitive yet engaging environment.
Description:
Participants were presented with a set of programming puzzles, ranging from basic to advanced levels.
Key highlights of the event included:
Variety of Puzzles: Challenges included number-based logic puzzles and string manipulations.
Language Flexibility: Students were allowed to use any programming language of their choice, such as Python, Java, C++, or JavaScript, to solve the problems.
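A flavour of the puzzles can be sketched in Python; the two problems below (an Armstrong-number check and a palindrome test) are hypothetical examples of the number-logic and string-manipulation categories, not the actual contest problems.

```python
# Two illustrative puzzle solutions of the kind posed at the event.

def is_armstrong(n: int) -> bool:
    """True if n equals the sum of its digits each raised to the digit count."""
    digits = str(n)
    return n == sum(int(d) ** len(digits) for d in digits)

def is_palindrome(s: str) -> bool:
    """True if s reads the same forwards and backwards, ignoring case and spaces."""
    cleaned = "".join(ch.lower() for ch in s if ch.isalnum())
    return cleaned == cleaned[::-1]

print(is_armstrong(153))                  # 1**3 + 5**3 + 3**3 == 153
print(is_palindrome("Never odd or even"))
```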
Outcome:
Students were able to code solutions to the programming puzzles and explain their approaches.
Objective:
The objective of this debate is to critically analyze the ethical implications of AI-powered systems, addressing concerns such as bias, privacy, accountability, and the impact on human decision-making. The event aims to encourage constructive discussions on the responsibilities of AI developers, policymakers, and users in ensuring ethical AI deployment.
Description:
Students participated in discussions both for and against various ethical concerns in AI systems. Topics of discussion included algorithmic bias, data privacy, transparency, job displacement, and AI regulation. The format included structured arguments, rebuttals, and an interactive Q&A session to engage the audience in critical thinking about the moral responsibilities associated with AI technologies.
Outcome:
By the end of the debate, participants and attendees gained a deeper understanding of the ethical challenges posed by AI-powered systems and of potential solutions to mitigate risks. The event encouraged informed decision-making, promoted responsible AI practices, and inspired further discussions on ethical AI governance. A key takeaway was the formulation of ethical guidelines and recommendations for AI development and deployment.
The session began with an introduction to Explainable AI, emphasizing its importance in building trust, transparency, and accountability in AI systems. Key XAI techniques such as SHAP, LIME, and counterfactual explanations were discussed, along with their role in regulated sectors like healthcare, finance, and law.

Moving into Generative AI, the speaker highlighted how models like GPT, DALL·E, and diffusion models are transforming content creation, code generation, and design workflows. The discussion covered both the creative potential and ethical challenges of GenAI, including issues like deepfakes, bias, and data provenance.

The concept of Agentic AI was then explored, focusing on intelligent systems that act autonomously to achieve user goals. Examples included AI agents capable of task planning, reasoning, and multi-step decision-making. Tools like Auto-GPT, LangChain, and ReAct frameworks were introduced, showing how agentic systems integrate LLMs with memory, tools, and planning strategies.