Speaker
Dr. Rodrigo A. Vargas-Hernández, Assistant Professor, McMaster University, hosted by Dr. Terry Blaskovits
Title
Everything but the Kitchen Sink: Optimization and Inverse Design of Quantum Systems
Abstract
In this seminar from the Alberta Machine Intelligence Institute and the Department of Computing Science, Rodrigo A. Vargas-Hernández, Assistant Professor at McMaster University, explains that machine learning extends far beyond regression and classification and can serve as a powerful framework for optimization and inverse design in quantum systems.
He will present three works that illustrate how modern ML tools can be leveraged to design, calibrate, and simulate quantum systems. First, Bayesian optimization and automatic differentiation are used to calibrate physical models in quantum transport problems, including retinal photoisomerization and quantum heat transfer devices, enabling data-efficient optimization of expensive simulations while integrating experimental observables directly into the learning loop.
The remaining two works turn to flow-based generative modeling as a mechanism to accelerate quantum simulation workflows. Second, flow matching is used in a meta-learning framework to predict near-optimal variational parameters for quantum circuits, reducing the need for costly gradient-based optimization of gate angles. Finally, GFlowNets are employed to generate efficient groupings of Hamiltonian operators, lowering the number of measurements required on quantum hardware.
Together, these works illustrate how a broad spectrum of machine learning methodologies — “everything but the kitchen sink” — can be integrated into a unified framework for optimization and inverse design in quantum systems.
Presenter Bio
Dr. Rodrigo A. Vargas-Hernández is an Assistant Professor in the Department of Chemistry and Chemical Biology at McMaster University, where he leads the ChemAI-Lab, a research group focused on the integration of artificial intelligence with quantum chemistry and materials science.
He earned his Ph.D. from the University of British Columbia, where he pioneered the use of machine learning algorithms to accelerate simulations of complex physical systems. He holds a bachelor's degree from the National Autonomous University of Mexico.
Dr. Vargas-Hernández completed postdoctoral fellowships under the mentorship of Professor P. Brumer and Professor A. Aspuru-Guzik at the University of Toronto/Vector Institute. His interdisciplinary research bridges quantum chemistry, machine learning, and materials discovery.
Dedicated to advancing scientific knowledge and education, he is passionate about inspiring the next generation of scientists. Outside the lab, he enjoys cycling and exploring the world of coffee.
Website
Chem.AI Lab
Watch on YouTube (coming soon)
Monthly Amii Fellow Seminar
Speaker
Dr. Blair Attard-Frost, Amii Fellow & Assistant Professor of Political Science, University of Alberta
Title
Collective Resistance in AI Governance Systems
Abstract
The governance of AI has become a strategic imperative for government and industry. However, communities and workers who are vulnerable to societal impacts of AI are often pushed to the margins of policymaking, regulatory development, and other AI governance initiatives. How do marginalized groups oppose AI governance when it fails to serve their interests?
This talk will address that question by drawing on sociological theory, empirical data from my research, and lessons learned from real-world cases of collective organizing against AI governance initiatives in Canada. I will show: (1) AI governance is a social system co-created by interdependent networks of actors, resources, logics, and power structures; (2) collective resistance emerges in AI governance systems as a response to resource constraints and structural injustices; (3) communities use a variety of organizational strategies to resist top-down AI governance systems led by industry and state power, while also building smaller-scale governance systems from the bottom up.
Presenter Bio
Blair Attard-Frost is an Amii Fellow and Assistant Professor in the University of Alberta's Department of Political Science. She completed her PhD in Information Studies, Master of Information (MI), and Honours Bachelor of Arts (HBA) at the University of Toronto. Blair's research applies a trans feminist lens to address challenges of power, participation, and justice in the governance of artificial intelligence. Her recent research appears in academic journals such as Big Data & Society, Government Information Quarterly, and AI and Ethics. Her insights on AI governance are featured in Canadian and international outlets such as CBC News: The National, The Globe and Mail, The Walrus, BetaKit, and Tech Policy Press. In a bygone era, Blair worked on digital transformation and business development projects in the Government of Ontario and Toronto's AI startup community.
Website
Blair Attard-Frost
Watch on YouTube (coming soon)
Speaker
Dr. John D. Martin, Adjunct Professor of Computing Science at the University of Alberta and Research Fellow at OpenMind Research Institute, hosted by Dr. Michael Bowling
Title
Artifacts as Memory Beyond the Agent Boundary
Abstract
Natural agents respond competently to problems that require resources beyond their cognitive abilities, in part because they leverage their environment for additional support. While natural agents are enabled by environmental resources, artificial agents are ostensibly bound by their individual system resources. In reinforcement learning (RL), an agent's system resources are established at design-time, and computational supply is commonly assumed to remain fixed throughout operation. In this paper, we show that RL agents can exploit environment dynamics as a form of additional memory, in situ. Specifically, we show that when RL agents can observe spatial paths, the amount of memory required to learn a performant policy is reduced. Although prior work from philosophy and artificial intelligence has theorized about such effects, we provide what we believe to be the first empirical report showing that computational RL agents externalize memory. Interestingly, this effect is experienced unintentionally and entirely through the agent's sensory stream.
Presenter Bio
John Martin is a Research Fellow at the Openmind Research Institute and an Adjunct Professor of Computing Science at the University of Alberta. John studies core topics in artificial intelligence with a focus on agentic phenomena and reinforcement learning. John was a Research Scientist at Intel Labs until 2024; he completed a post-doc at the University of Alberta in 2022, and he earned his PhD from Stevens Institute of Technology in 2021. During his studies, John spent time at Columbia University, Google Brain, and DeepMind. Prior to his graduate studies, John designed autonomous flight control systems for experimental helicopters at Sikorsky Aircraft.
Website
Watch on YouTube (coming soon)
NO SEMINARS - Reading Week
Monthly Amii Fellow Seminar
Speaker
Dr. Tian Tian, Amii Fellow and Assistant Professor, Department of Chemical and Materials Engineering, University of Alberta
Title
Multiscale Modeling of Material Interfaces: From Quantum Descriptors to Machine Learning
Abstract
Material interfaces govern the performance of many technologies, from electronic devices and catalysts to soft and biological materials. They are also where simulation becomes most difficult. Interfaces involve many possible configurations, long-range interactions, and processes that span multiple length and time scales. My research has moved across several material systems, from quantum-mechanical modeling of two-dimensional materials to more interdisciplinary problems, and this trajectory has repeatedly highlighted why interfaces often require modeling approaches beyond conventional simulations.
Many interface problems do have dominant physical quantities that can be identified and computed using physics-based methods. These quantities often capture the essential mechanisms at play. However, once configurational variability is introduced, the complexity increases rapidly. Small changes in structure or local environment can lead to a large number of relevant configurations, making exhaustive physical simulation impractical. Machine learning offers a possible way to handle this complexity, but applying it to interface problems is not straightforward. Interface-specific studies rarely generate the large, standardized datasets that many machine-learning methods rely on, and this limits the direct use of purely data-driven models.
In this talk, I will discuss how hybrid physics–ML approaches can help bridge this gap. Drawing on recent work, I will discuss data-efficient strategies for interface simulations, along with practical challenges related to workflow design, model behavior, and integration with existing simulation tools. The goal is to show how machine learning can be used as a supporting tool to extend physically grounded simulations, rather than replacing them, in the study of complex material interfaces.
Presenter Bio
Dr. Tian Tian obtained his B.Sc. and M.Sc. in Chemistry from Tsinghua University. He completed his Ph.D. in Chemical Engineering at ETH Zürich under the supervision of Prof. Chih-Jen Shih. His doctoral research focused on the multiscale simulation and engineering of the interfacial properties of two-dimensional materials. From 2021 to 2023, he received the Swiss National Science Foundation (SNSF) Postdoc Mobility Fellowship to conduct postdoctoral research at Carnegie Mellon University with Prof. Zachary W. Ulissi, where he worked on machine-learning-assisted material simulations, particularly the fine-tuning of pretrained graph neural network models for computational catalysis and the development of machine-learning-assisted computational workflows. Before joining the University of Alberta, he briefly held a postdoctoral position at the Georgia Institute of Technology under the supervision of Prof. Phanish Suryanarayana and Prof. Andrew J. Medford, developing software communication layers for a machine-learning-enabled density functional theory (DFT) package.

Dr. Tian’s research group develops machine learning–accelerated simulation methods for the design of interfacial materials. The group explores applications in two-dimensional materials, energy storage systems, light-emitting polymers, and colloidal soft matter, addressing the challenge of vast configurational spaces that govern interfacial behavior. His work combines physics-based modeling and data-driven learning to accelerate multiscale simulations and enable predictive materials design. In parallel, the group advances open-source computational tools and machine-learning frameworks that bridge computation and experiment for optimizing material properties and synthesis processes.
Watch on YouTube (coming soon)
Speaker
Shuai Liu, PhD student, University of Alberta, supervised by Dr. Csaba Szepesvári & Dr. Xiaoqi Tan
Title
Sample Complexity for Zero-Discounted MDPs with Linear/Logistic Function Approximation and Connections to RLHF
Abstract
I will discuss the ideas behind algorithms that achieve nontrivial sample-complexity guarantees for 0-discounted MDPs with linear/logistic function approximation, a.k.a. stochastic linear/logistic contextual bandits, including a deterministic UCB-like algorithm and a computationally efficient Thompson Sampling variant. Finally, I will discuss whether the sigmoid function, which is widely used in RLHF, is a good choice for modelling human preferences in this special case.
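As background for the RLHF connection, here is a minimal illustrative sketch (my own, not code from the talk) of the standard Bradley–Terry formulation, in which the probability that a human prefers one response over another is a sigmoid of the reward difference — the modelling choice whose suitability the talk examines:

```python
import math

# Minimal sketch of the Bradley-Terry preference model commonly used in RLHF
# (illustrative only). A human's probability of preferring response a over
# response b is modelled as a sigmoid of the difference in scalar rewards.
def preference_prob(reward_a: float, reward_b: float) -> float:
    return 1.0 / (1.0 + math.exp(-(reward_a - reward_b)))

p_equal = preference_prob(1.0, 1.0)  # equal rewards give a 50/50 preference
p_gap = preference_prob(5.0, 0.0)    # a large reward gap saturates near 1
```

The sigmoid's saturation for large reward gaps is exactly the kind of structural assumption that can be questioned when fitting real human preference data.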
Presenter Bio
Shuai Liu is a PhD student in the Department of Computing Science at the University of Alberta, co-supervised by Dr. Csaba Szepesvári and Dr. Xiaoqi Tan. His current research interests lie in reinforcement learning theory (policy gradient methods), bandit algorithms, and optimization. Before that, he obtained his MSc in Computing Science at the University of Alberta under the supervision of Dr. Szepesvári and a Bachelor's degree in Computer Science at the Harbin Institute of Technology.
Website
Watch on YouTube (coming soon)
Monthly Amii Fellow Seminar
Speaker
Dr. Quinn Lee, Amii Fellow and Assistant Professor in the Department of Psychology at the University of Alberta
Title
Neural Representations of Changing Environments for Navigation and Memory in Brains and Machines
Abstract
The ability to learn and remember how to navigate a changing world is essential for survival. To do so, our brains construct a representation of the environment from the available senses to perceive where we are and where we are going. While animals rapidly learn and adapt to changing environments, this remains a significant challenge for modern AI. I will discuss how we can combine state-of-the-art techniques for recording brain activity with machine learning to build brain-inspired AI that efficiently learns and remembers in a changing world.
Presenter Bio
Dr. Quinn Lee is an Assistant Professor in the Department of Psychology (Faculty of Science) at the University of Alberta, a Fellow at the Alberta Machine Intelligence Institute (Amii), and a Canada CIFAR AI Chair. Dr. Lee leads the Navigation and Memory Systems (NMS) Lab, which combines cutting-edge methods for high-yield neural and behavioral recording to understand how we learn and remember in changing environments. To this end, his group leverages machine learning both as a tool for neuroscientific data analysis and as a basis for theoretical modelling, advancing our understanding of biological and artificial intelligence. Previously, Dr. Lee earned his Ph.D. in Neuroscience at the Canadian Centre for Behavioural Neuroscience at the University of Lethbridge with Drs. Robert J. Sutherland and Robert J. McDonald, and completed his postdoctoral research with Dr. Mark Brandon at McGill University.
Watch on YouTube (coming soon)
Speaker
Alex Ayoub, PhD student at the University of Alberta, supervised by Dr. Csaba Szepesvári & Dr. Dale Schuurmans
Title
Learning to Reason Efficiently with Discounted Reinforcement Learning
Abstract
Large reasoning models (LRMs) often consume excessive tokens, inflating computational cost and latency. We challenge the assumption that longer responses improve accuracy. By penalizing reasoning tokens using a discounted reinforcement learning setup (interpretable as a small token cost) and analyzing Blackwell optimality in restricted policy classes, we encourage concise yet accurate reasoning. Experiments confirm our theoretical results that this approach shortens chains of thought while preserving accuracy.
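The parenthetical claim — that discounting is interpretable as a small token cost — can be illustrated with a toy sketch (my own, not the authors' code, with an arbitrary discount factor): a correct answer reached after T reasoning tokens earns a discounted return proportional to gamma**T, so among correct responses the shorter chain of thought always scores higher.

```python
# Toy illustration: with discount factor gamma < 1, a terminal reward of 1
# for a correct answer is worth gamma**T after T reasoning tokens, which is
# equivalent to charging a small cost per generated token.
def discounted_return(num_tokens: int, correct: bool, gamma: float = 0.999) -> float:
    reward = 1.0 if correct else 0.0
    return gamma ** num_tokens * reward

concise = discounted_return(200, correct=True)
verbose = discounted_return(2000, correct=True)
assert concise > verbose  # the concise correct chain of thought is preferred
```

An incorrect response earns zero regardless of length, so the penalty only arbitrates among correct responses — consistent with the stated goal of shortening chains of thought while preserving accuracy.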
Presenter Bio
Alex Ayoub is a PhD student at the University of Alberta working on pre-training and post-training large language models to solve reasoning problems such as mathematics.
Watch on YouTube (coming soon)
Speaker
Dr. Tony Yousefnezhad, Senior Data Scientist at National Bank of Canada, hosted by Dr. Russ Greiner
Title
Orthogonal Contrastive Learning for Multi-Representation fMRI Analysis
Abstract
Functional MRI offers a powerful window into human cognition, yet challenges such as low signal-to-noise ratio, high dimensionality, and limited sample sizes remain major barriers—especially when integrating data across subjects or imaging sites. In this talk, we will introduce Orthogonal Contrastive Learning (OCL), a unified framework for aligning and analyzing multi-subject fMRI data without requiring temporal synchronization or equal time-series lengths.
OCL leverages two identical encoder networks: an online network trained with a contrastive objective that brings same-stimulus responses closer while separating different ones, and a target network that tracks the online model through an exponential moving average for stable learning. Each layer integrates QR decomposition for orthogonal feature extraction, locality-sensitive hashing (LSH) for compact subject-specific signatures, positional encoding for temporal-spatial fusion, and a transformer encoder for generating discriminative neural embeddings. I will also discuss OCL’s unsupervised pretraining on synthetic fMRI-like data and its transfer learning workflow for multi-site applications.
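To make the online/target pairing concrete, here is a minimal sketch (my own illustration, not OCL's implementation; real encoders hold full network weights rather than a short list of scalars) of the exponential-moving-average update by which a target network tracks its online counterpart:

```python
# Minimal EMA sketch (illustrative only). At each training step the target
# parameters are blended a small fraction toward the online parameters, so
# the target changes slowly and provides a stable learning signal.
def ema_update(target_params, online_params, tau=0.99):
    """Move each target parameter a fraction (1 - tau) toward its online value."""
    return [tau * t + (1 - tau) * o for t, o in zip(target_params, online_params)]

target = [0.0, 0.0]   # toy "parameters" standing in for network weights
online = [1.0, 2.0]
for _ in range(100):
    target = ema_update(target, online)
# after many steps the target has drifted most of the way toward the online values
```

The momentum coefficient tau controls the trade-off: values near 1 give a very stable but slowly adapting target.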
Presenter Bio
Dr. Tony Yousefnezhad is a Senior Data Scientist in the Department of Information Management at the National Bank of Canada, with cross-continental experience spanning Eurasia, East Asia, and North America. In addition to his industry role, he actively contributes to academic research and open-source innovation through Learning By Machine, a company he founded. His research is at the forefront of advancements in machine learning, with a focus on deep learning, natural language processing (NLP), and reinforcement learning (RL) methodologies, designed to analyze a wide range of data modalities, including time series, text, images, audio, and wearable signals.
Website
Watch on YouTube (coming soon)