Machine Learning Magic: From Data to Decisions with Bayes' Theorem
On November 1st, the LUMS Math Circle hosted an engaging session titled "Machine Learning Magic: From Data to Decisions with Bayes' Theorem," led by Dr. Agha Ali Raza and Dr. Waqas Ali Azhar. The session focused on key concepts in probability and conditional reasoning, exploring their applications through Bayes' Theorem and Naive Bayes. The activities were designed to promote hands-on learning and encourage interactive problem-solving.
Key Highlights
Revisiting Probability Concepts:
Participants began by revisiting the basics of probability, such as calculating the likelihood of outcomes in scenarios like coin tosses, dice rolls, and candy selection.
The distinction between independent and dependent events was explored with practical examples. For instance:
Rolling a die multiple times demonstrated the independence of consecutive events, where the outcome of one roll does not influence the next.
The gambler’s fallacy was highlighted to address the common misconception that past outcomes (e.g., not rolling a 6 in several tries) increase the likelihood of a specific outcome in future rolls. This reinforced the idea that the probability of rolling a 6 remains constant at 1/6, no matter how many times the die is rolled.
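To see why the fallacy fails, a quick simulation helps: condition on a run of non-6 rolls and check how often the very next roll is a 6. The sketch below is illustrative only; the streak length and trial count are arbitrary choices, not session material.

```python
import random

def p_six_after_streak(streak_len=3, trials=200_000):
    """Estimate P(next roll is 6 | previous streak_len rolls were not 6)."""
    hits = cases = 0
    streak = 0  # current run of consecutive non-6 rolls
    for _ in range(trials):
        roll = random.randint(1, 6)
        if streak >= streak_len:  # the previous streak_len rolls were all non-6
            cases += 1
            hits += (roll == 6)
        streak = 0 if roll == 6 else streak + 1
    return hits / cases

random.seed(0)
print(p_six_after_streak())  # stays near 1/6 ≈ 0.167
```

Raising streak_len leaves the estimate near 1/6, which is exactly what the fallacy denies.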
Joint Probability:
Through activities like "Coin and Spinner" and "Colorful Socks," participants calculated the probabilities of simultaneous events. These exercises illustrated how joint probabilities account for overlaps in outcomes and highlighted the concept of event independence.
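For two independent events, the joint probability is simply the product of the individual probabilities. Here is a minimal sketch in the spirit of "Coin and Spinner"; the fair coin and four-color spinner are assumptions, as the worksheet's actual numbers were not recorded in this report.

```python
from fractions import Fraction

# Assumed setup: a fair coin and a spinner with four equally likely colors.
p_heads = Fraction(1, 2)
p_red = Fraction(1, 4)

# Independence: P(heads AND red) = P(heads) * P(red)
print(p_heads * p_red)  # 1/8
```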
Conditional Probability:
Scenarios such as "Classroom Pets" and "Game Night" introduced conditional probabilities, teaching participants how prior knowledge can update the likelihood of events. For instance:
In the "Classroom Pets" problem, attendees calculated the probability that a student with a pet dog also had a pet cat, using the given formula.
The "Game Night" scenario demonstrated conditional reasoning in a fun, relatable context, showing how probabilities of outcomes adjust under different assumptions.
Bayes' Theorem:
Using relatable examples such as the "Detective Problem" and "Lost Pet," participants applied Bayes' Theorem, P(H | E) = P(E | H) · P(H) / P(E), to update probabilities based on new evidence:
In the "Detective Problem," attendees calculated the likelihood of Alex being the culprit given that the suspect wore glasses, reinforcing the theorem’s practicality in forensic scenarios.
The "Lost Pet" problem explored determining whether a missing pet was a dog or a cat based on observed barking, emphasizing the theorem's use in decision-making under uncertainty.
Educational Impact
The session offered a thorough and engaging look at probability and Bayes' Theorem, giving participants a solid grasp of both the theory and its practical uses. By pairing instruction with hands-on activities, it encouraged a deeper appreciation for mathematical reasoning and its role in decision-making and intelligent systems.
The interactive worksheets, crafted around real-world situations, improved learning by allowing participants to practice on their own and apply their knowledge in creative ways. These problem-solving tasks helped clarify complex ideas and made probability both accessible and enjoyable.
Acknowledgments
The session was organized through the collaborative efforts of Miss Noreen Sohail, Mr. Qamar Hussain, and Mr. Javaid Qayyum (writer of this email).
Special thanks to Dr. Agha Ali Raza for delivering an insightful and engaging session, and to the LUMS Math Circle team for their dedication to promoting mathematical literacy and exploration.