NEW LOCATION for monthly Fellows seminars - Amii HQ 2nd floor event space at 10065 Jasper Ave (regular weekly seminars remain in UComm 2-108)
NO SEMINARS - Reading Week
Monthly Amii Fellow Seminar
Speaker
Dr. Tian Tian, Amii Fellow and Assistant Professor, Department of Chemical and Materials Engineering, University of Alberta
Title
Multiscale Modeling of Material Interfaces: From Quantum Descriptors to Machine Learning
Abstract
Material interfaces govern the performance of many technologies, from electronic devices and catalysts to soft and biological materials. They are also where simulation becomes most difficult. Interfaces involve many possible configurations, long-range interactions, and processes that span multiple length and time scales. My research has moved across several material systems, from quantum-mechanical modeling of two-dimensional materials to more interdisciplinary problems, and this trajectory has repeatedly highlighted why interfaces often require modeling approaches beyond conventional simulations.
Many interface problems do have dominant physical quantities that can be identified and computed using physics-based methods. These quantities often capture the essential mechanisms at play. However, once configurational variability is introduced, the complexity increases rapidly. Small changes in structure or local environment can lead to a large number of relevant configurations, making exhaustive physical simulation impractical. Machine learning offers a possible way to handle this complexity, but applying it to interface problems is not straightforward. Interface-specific studies rarely generate the large, standardized datasets that many machine-learning methods rely on, and this limits the direct use of purely data-driven models.
In this talk, I will discuss how hybrid physics–ML approaches can help bridge this gap. Drawing on recent work, I will discuss data-efficient strategies for interface simulations, along with practical challenges related to workflow design, model behavior, and integration with existing simulation tools. The goal is to show how machine learning can be used as a supporting tool to extend physically grounded simulations, rather than replacing them, in the study of complex material interfaces.
Presenter Bio
Dr. Tian Tian obtained his B.Sc. and M.Sc. in Chemistry from Tsinghua University and completed his Ph.D. in Chemical Engineering at ETH Zürich under the supervision of Prof. Chih-Jen Shih. His doctoral research focused on multiscale simulation and engineering of the interfacial properties of two-dimensional materials. From 2021 to 2023, he held a Swiss National Science Foundation (SNSF) Postdoc Mobility Fellowship for postdoctoral research at Carnegie Mellon University with Prof. Zachary W. Ulissi, where he worked on machine-learning-assisted material simulations, particularly fine-tuning pretrained graph neural network models for computational catalysis and developing machine-learning-assisted computational workflows. Before joining the University of Alberta, he briefly held a postdoctoral position at the Georgia Institute of Technology under the supervision of Prof. Phanish Suryanarayana and Prof. Andrew J. Medford, developing software communication layers for a machine-learning-enabled density functional theory (DFT) package.

Dr. Tian's research group develops machine learning–accelerated simulation methods for the design of interfacial materials. The group explores applications in two-dimensional materials, energy storage systems, light-emitting polymers, and colloidal soft matter, addressing the vast configurational spaces that govern interfacial behavior. His work combines physics-based modeling and data-driven learning to accelerate multiscale simulations and enable predictive materials design. In parallel, the group advances open-source computational tools and machine-learning frameworks that bridge computation and experiment for optimizing material properties and synthesis processes.
Watch on YouTube (coming soon)
Speaker
Shuai Liu, PhD student, University of Alberta, supervised by Dr. Csaba Szepesvári & Dr. Xiaoqi Tan
Title
Sample Complexity for Zero-Discounted MDPs with Linear/Logistic Function Approximation and Connections to RLHF
Abstract
I will discuss the ideas behind algorithms that achieve nontrivial sample-complexity guarantees for zero-discounted MDPs with linear/logistic function approximation, a.k.a. stochastic linear/logistic contextual bandits, including a deterministic UCB-like algorithm and a computationally efficient Thompson Sampling variant. Finally, I will discuss whether the sigmoid function, which is widely used in RLHF, is a good choice for modelling human preferences in this special case.
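For readers unfamiliar with the sigmoid preference model referenced above, here is a minimal sketch of the Bradley-Terry-style parameterization commonly used in RLHF (the function names are illustrative, not from the talk):

```python
import math

def sigmoid(x: float) -> float:
    """Logistic function, mapping a real-valued score to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def preference_probability(reward_a: float, reward_b: float) -> float:
    """Probability that a rater prefers response A over response B,
    modelled as a sigmoid of the difference in latent reward scores."""
    return sigmoid(reward_a - reward_b)
```

Under this model, `preference_probability(2.0, 1.0)` is about 0.73: a one-unit reward gap translates into a roughly 73% chance of preferring A. Whether this particular link function matches real human preferences is exactly the question the talk examines.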
Presenter Bio
Shuai Liu is a PhD student in the Department of Computing Science at the University of Alberta, co-supervised by Dr. Csaba Szepesvári and Dr. Xiaoqi Tan. His current research interests lie in reinforcement learning theory (policy gradient methods), bandit algorithms, and optimization. Before that, he obtained his MSc in Computing Science at the University of Alberta under the supervision of Dr. Szepesvári, and a Bachelor's degree in Computer Science from the Harbin Institute of Technology.
Website
Watch on YouTube (coming soon)
Monthly Amii Fellow Seminar
Speaker
Dr. Quinn Lee, Amii Fellow and Assistant Professor in the Department of Psychology at the University of Alberta
Title
Neural representations of changing environments for navigation and memory in brains and machines
Abstract
The ability to learn and remember how to navigate a changing world is essential for survival. To do so, our brains construct a representation of the environment from the available senses to perceive where we are and where we are going. While animals rapidly learn and adapt to changing environments, this presents a significant challenge for modern AI. I will discuss how we can combine state-of-the-art techniques for recording activity in the brain with machine learning to build brain-inspired AI that efficiently learns and remembers in a changing world.
Presenter Bio
Dr. Quinn Lee is an Assistant Professor in the Department of Psychology (Faculty of Science) at the University of Alberta, a Fellow at the Alberta Machine Intelligence Institute (Amii), and a Canada CIFAR AI Chair. Dr. Lee leads the Navigation and Memory Systems (NMS) Lab, which combines cutting-edge methods for high-yield neural and behavioral recording to understand how we learn and remember in changing environments. To this end, his group leverages machine learning techniques both as tools for neuroscientific data analysis and for theoretical modelling, to advance our understanding of biological and artificial intelligence. Previously, Dr. Lee earned his Ph.D. in Neuroscience at the Canadian Centre for Behavioural Neuroscience at the University of Lethbridge with Drs. Robert J. Sutherland and Robert J. McDonald, and completed his postdoctoral research with Dr. Mark Brandon at McGill University.
Watch on YouTube (coming soon)
Speaker
Alex Ayoub, PhD student at the University of Alberta, supervised by Dr. Csaba Szepesvári & Dr. Dale Schuurmans
Title
Learning to Reason Efficiently with Discounted Reinforcement Learning
Abstract
Large reasoning models (LRMs) often consume excessive tokens, inflating computational cost and latency. We challenge the assumption that longer responses improve accuracy. By penalizing reasoning tokens using a discounted reinforcement learning setup (interpretable as a small token cost) and analyzing Blackwell optimality in restricted policy classes, we encourage concise yet accurate reasoning. Experiments confirm our theoretical results that this approach shortens chains of thought while preserving accuracy.
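To make the "discount as token cost" interpretation concrete, here is a minimal sketch, assuming a terminal correctness reward discounted per reasoning token (the parameter values are illustrative, not taken from the work):

```python
def discounted_return(correct: bool, num_tokens: int, gamma: float = 0.999) -> float:
    """Terminal reward of 1 for a correct answer, discounted by gamma for
    each reasoning token, so every extra token costs roughly (1 - gamma)."""
    reward = 1.0 if correct else 0.0
    return (gamma ** num_tokens) * reward
```

A correct 100-token answer then scores about 0.90 while a correct 1000-token answer scores about 0.37, so the objective prefers shorter chains of thought among equally accurate ones.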
Presenter Bio
Alex Ayoub is a PhD student at the University of Alberta working on pre- and post-training large language models to solve reasoning problems, such as mathematics.
Watch on YouTube (coming soon)
Speaker
Dr. Tony Yousefnezhad, Senior Data Scientist at National Bank of Canada, hosted by Dr. Russ Greiner
Title
Orthogonal Contrastive Learning for Multi-Representation fMRI Analysis
Abstract
Functional MRI offers a powerful window into human cognition, yet challenges such as low signal-to-noise ratio, high dimensionality, and limited sample sizes remain major barriers—especially when integrating data across subjects or imaging sites. In this talk, we will introduce Orthogonal Contrastive Learning (OCL), a unified framework for aligning and analyzing multi-subject fMRI data without requiring temporal synchronization or equal time-series lengths.
OCL leverages two identical encoder networks: an online network trained with a contrastive objective that brings same-stimulus responses closer while separating different ones, and a target network that tracks the online model through an exponential moving average for stable learning. Each layer integrates QR decomposition for orthogonal feature extraction, locality-sensitive hashing (LSH) for compact subject-specific signatures, positional encoding for temporal-spatial fusion, and a transformer encoder for generating discriminative neural embeddings. I will also discuss OCL’s unsupervised pretraining on synthetic fMRI-like data and its transfer learning workflow for multi-site applications.
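The exponential-moving-average update that keeps the target network tracking the online network can be sketched in plain Python as follows (a generic sketch of the EMA mechanism; OCL's actual implementation details are not shown here):

```python
def ema_update(target_params, online_params, tau=0.99):
    """Blend each target parameter toward its online counterpart:
    new_target = tau * target + (1 - tau) * online.
    A tau close to 1 makes the target network evolve slowly, which
    stabilizes the contrastive learning signal."""
    return [tau * t + (1.0 - tau) * o
            for t, o in zip(target_params, online_params)]
```

Only the online network receives gradients; the target network is updated purely through this averaging step after each training iteration.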
Presenter Bio
Dr. Tony Yousefnezhad is a Senior Data Scientist in the Department of Information Management at the National Bank of Canada, with cross-continental experience spanning Eurasia, East Asia, and North America. In addition to his industry role, he actively contributes to academic research and open-source innovation through his self-founded company, Learning By Machine. His research advances machine learning, with a focus on deep learning, natural language processing (NLP), and reinforcement learning (RL) methodologies designed to analyze a wide range of data modalities, including time series, text, images, audio, and wearable signals.
Website
Watch on YouTube (coming soon)