Past Seminars: 2021

All available AI Seminar recordings are posted on the Amii YouTube channel.
They can also be accessed by clicking on the individual presentation titles below.

Abstract: OneCup AI uses computer vision to uniquely identify and monitor livestock in rural locations. Developing this technology means solving problems that don’t arise in other AI & ML industries: we must also account for a lack of connectivity, outdoor installation and the unpredictability of livestock. This traditional sector is slow to adopt new technologies because producers often lack knowledge of how these proposed systems work and what ROI they can provide. As well, new technology can often be cost-prohibitive to a producer operating on slim margins. OneCup’s AI system BETSY (Bovine Expert Tracking and Surveillance) not only meets a rancher’s ease-of-use requirements but does so with state-of-the-art machine and deep learning. We bring to the table a novel use of AI in a traditional industry such as livestock management. We build the system on the newest technology available to us, and we listen to producers to determine the best future functionality for BETSY.

Presenter Bio: Geoffrey has had a distinguished career leveraging cutting-edge technologies, with several startup exits. Accolades include Entrepreneur of the Year and Associate Founder, Singularity University. He holds a Master's degree in Computer Science from Georgia Tech, specializing in Artificial Intelligence and Deep Learning. He is a lifelong learner who actively seeks opportunities to deepen his knowledge across a range of subjects.

*co-hosted w/ Technology Alberta*

Abstract: Retina disease affects over 500 million people worldwide. New methods of treatment as well as diagnostics promise to dramatically improve patient outcomes and clinical efficiency over the coming years. A key component of some of these approaches relies on the ability to precisely identify regions of pathology in retina images. A major challenge with disease detection in retina images is the complexity and abundance of disease visual features. Hand-annotating these diseases is therefore very time-consuming and laborious. Combined with the fact that retina specialists are some of the busiest and highest-paid medical professionals, hand-annotating many thousands of images becomes impractically expensive. To address this, we have developed a weakly-supervised segmentation approach in which we are able to train a model to precisely identify microaneurysms in diabetic retinopathy patients without the need for any hand annotation. We have applied this technique to a database of ~300,000 fluorescein angiography images and are currently beta testing our disease detection software at a retina clinic in Edmonton.

Presenter Bio: Chris Ceroici completed his PhD in biomedical engineering at the University of Alberta in 2019 and his MSc in electrical engineering at the University of Waterloo in 2014. He is currently the lead machine learning engineer at PulseMedica where he is focused on developing machine-learning based systems for diagnosing and treating retina disease.

Nov 19, 2021:
Efficient and targeted COVID-19 border testing via reinforcement learning -
Hamsa Bastani,
University of Pennsylvania

Abstract: Throughout the COVID-19 pandemic, countries relied on a variety of ad-hoc border control protocols to allow for non-essential travel while safeguarding public health: from quarantining all travellers to restricting entry from select nations based on population-level epidemiological metrics such as cases, deaths or testing positivity rates. Here we report the design and performance of a reinforcement learning system, nicknamed ‘Eva’. In the summer of 2020, Eva was deployed across all Greek borders to limit the influx of asymptomatic travellers infected with SARS-CoV-2, and to inform border policies through real-time estimates of COVID-19 prevalence. In contrast to country-wide protocols, Eva allocated Greece’s limited testing resources based upon incoming travellers’ demographic information and testing results from previous travellers. By comparing Eva’s performance against modelled counterfactual scenarios, we show that Eva identified 1.85 times as many asymptomatic, infected travellers as random surveillance testing, with up to 2-4 times as many during peak travel, and 1.25-1.45 times as many asymptomatic, infected travellers as testing policies that only utilize epidemiological metrics. We demonstrate that this latter benefit arises, at least partially, because population-level epidemiological metrics had limited predictive value for the actual prevalence of SARS-CoV-2 among asymptomatic travellers and exhibited strong country-specific idiosyncrasies in the summer of 2020. Our results raise serious concerns about the effectiveness of internationally proposed country-agnostic border control policies that are based on population-level epidemiological metrics. Instead, our work represents a successful example of the potential of reinforcement learning and real-time data for safeguarding public health.

Presenter Bio: Hamsa Bastani is an Assistant Professor of Operations, Information, and Decisions at the Wharton School, University of Pennsylvania. Her research focuses on developing novel machine learning algorithms for data-driven decision-making, with applications to healthcare operations, social good, and revenue management. She designs methods for sequential decision-making, transfer learning and human-in-the-loop analytics. Her applied work uses large-scale, novel data sources to inform policy around impactful societal problems. Her work has received several recognitions, including the Wagner Prize for Excellence in Practice (2021), the Pierskalla Award for the best paper in healthcare (2016, 2019, 2021), the Behavioral OM Best Paper Award (2021), as well as first place in the George Nicholson and MSOM student paper competitions (2016).

Nov 12, 2021:
On the Optimality of Batch Policy Optimization Algorithms -
Chenjun Xiao, University of Alberta

Recording Coming Soon!

Abstract: Batch policy optimization considers leveraging existing data for policy construction before interacting with an environment. Although interest in this problem has grown significantly in recent years, its theoretical foundations remain under-developed. To advance the understanding of this problem, we provide three results that characterize the limits and possibilities of batch policy optimization in the finite-armed stochastic bandit setting. First, we introduce a class of confidence-adjusted index algorithms that unifies optimistic and pessimistic principles in a common framework, which enables a general analysis. For this family, we show that any confidence-adjusted index algorithm is minimax optimal, whether it be optimistic, pessimistic or neutral. Our analysis reveals that instance-dependent optimality, commonly used to establish optimality of online stochastic bandit algorithms, cannot be achieved by any algorithm in the batch setting. In particular, for any algorithm that performs optimally in some environment, there exists another environment where the same algorithm suffers arbitrarily larger regret. Therefore, to establish a framework for distinguishing algorithms, we introduce a new weighted-minimax criterion that considers the inherent difficulty of optimal value prediction. We demonstrate how this criterion can be used to justify commonly used pessimistic principles for batch policy optimization.
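As a rough illustration of the confidence-adjusted index idea (the index form, confidence width and constants below are illustrative assumptions, not the talk's exact formulation), a single scalar alpha interpolates between optimistic, neutral and pessimistic arm selection from batch data:

```python
import numpy as np

def confidence_adjusted_index(means, counts, alpha):
    # index_i = empirical mean + alpha * confidence width;
    # alpha > 0 is optimistic (UCB-like), alpha < 0 pessimistic (LCB-like),
    # alpha = 0 is the neutral/greedy choice.
    means = np.asarray(means, dtype=float)
    counts = np.asarray(counts, dtype=float)
    width = np.sqrt(np.log(counts.sum()) / counts)
    return means + alpha * width

# With batch data favoring a rarely-pulled arm, pessimism prefers the
# well-estimated arm, while optimism chases the uncertain one.
means, counts = [1.0, 1.1], [100, 2]
pessimistic_arm = int(np.argmax(confidence_adjusted_index(means, counts, -1.0)))
optimistic_arm = int(np.argmax(confidence_adjusted_index(means, counts, +1.0)))
```

The point of the unified family is that all of these choices achieve the same minimax guarantee in the batch setting.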

Presenter Bio: Chenjun Xiao is a PhD student advised by Martin Mueller and Dale Schuurmans at the University of Alberta. His main research interests are in reinforcement learning, including developing sample-efficient planning algorithms and understanding the theoretical foundations of batch reinforcement learning. He has also spent time as a research intern at Borealis AI and Huawei, and is now a student researcher at Google Brain.

Abstract: Pathology is the study of the causes and effects of disease. A pathologist is a medical professional who evaluates patient samples to determine whether or not they have a particular disease. In many serious diseases such as cancer, a pathologist’s evaluation is considered the gold standard or ground truth and has a profound impact on a patient’s life. Unfortunately, various studies in the community have found that discordance amongst pathologists is common, leading to errors in diagnosis that can result in worse patient outcomes. At PathAI, we have demonstrated the ability of artificial intelligence to aid pathologists in making better diagnoses, as well as in the discovery of new biomarkers that can be used to predict response to potentially life-saving treatments. In this talk, I will focus on a couple of areas that we have worked on, including cancer and liver disease.

Presenter Bio: Aditya completed his PhD in machine learning and computer vision at MIT. He completed his MS at Stanford in 2011 and BS at Caltech in 2009. In his research he developed new methods for an array of applications in computer vision, including eye-tracking, prediction of image memorability, and visualization of deep networks. He is the recipient of a Facebook Fellowship and his work has been widely covered by various media outlets including BBC, The New York Times and The Washington Post. He has published over 30 papers in the fields of deep learning, computer vision and neuroscience.

Oct 29, 2021:
simKAP: Simulation framework for the kidney allocation process: A shared decision making model
-
Jean Yang, University of Sydney

Please contact Sabina if you would like a copy of the recording

Abstract: Organ shortage is a major barrier in transplantation, and the rules governing organ allocation decisions should be robust, transparent, ethical and fair. Whilst numerous allocation strategies have been proposed, it is often unrealistic to evaluate all of them in real-life settings. Hence, the capability of conducting simulations prior to deployment is important. Here, we developed a kidney allocation simulation framework (simKAP) that models the various shared decision-making processes by patients and clinicians. Our findings indicate that the complete model, with dynamic waiting-list modelling and shared decision making on organ acceptance, shows the best agreement between actual and simulated data in almost all scenarios. Additionally, we demonstrate the flexibility and capacity of simKAP to deliver a quality assessment framework for allocation design by comparing hypothetical risk-based allocation strategies. The importance of simKAP lies in enabling policymakers in any transplant community to evaluate any proposed allocation algorithm using in-silico simulation. We will discuss the selected graft prediction model that contributes to the allocation evaluation framework.

Presenter Bio: Jean is a Professor of Statistics and Data Science at the University of Sydney. She is also the Theme Leader of Integrative Systems and Modelling at the Charles Perkins Centre. Her research stands at the interface between applied statistics and biology, with a focus on the application of statistics to high-dimensional problems in biomedical research. She was awarded the 2015 Moran Medal in statistics by the Australian Academy of Science in recognition of her work on developing methods for molecular data arising in cutting-edge biomedical research. As a statistician working in bioinformatics, she enjoys research in a collaborative environment, working closely with scientific investigators from diverse backgrounds.

Oct 22, 2021:
Starting up from Academia
-
Jacob Jaremko, University of Alberta & MEDO.ai

Abstract: MEDO.ai is one of Edmonton's many startup success stories, offering an AI platform that makes obtaining and interpreting ultrasound images faster, more reliable and accessible to all. However, many are unaware of the academic origins of the technology – let alone how Medo transitioned from academic advancement into a commercial success. In this special Edmonton Startup Week edition of the AI Seminar, join Amii Fellow and Canada CIFAR AI Chair Jacob Jaremko, also a clinician-scientist and co-founder of MEDO.ai, as he walks us through the company's origins and the steps the team took to launch what has grown into a key member of the Edmonton startup community.

Presenter Bio: Fellow and Canada CIFAR AI Chair at Amii, Jacob Jaremko is a clinician-scientist, currently an Associate Professor in the Faculty of Medicine at the University of Alberta, a practicing Pediatric and Musculoskeletal radiologist and partner at Medical Imaging Consultants, and co-founder of MEDO.ai. His research has focused on the influence of anatomy & childhood development of joints on the development of adult osteoarthritis. Seeking to develop objective imaging biomarkers of disease, he has generated 3D ultrasound tools for assessment of infant hip dysplasia, and semi-quantitative MRI scoring systems for arthritis.

Medical images are becoming easier to acquire than to interpret reliably. Dr. Jaremko is increasingly focused on automating medical image analysis, particularly in ultrasound, using artificial intelligence. Tools from natural image and video processing can be adapted to medical image analysis, with special attention to problems such as small training data sets and extreme class imbalance. A handheld ultrasound probe with AI image interpretation could ultimately be used by clinicians at any point of care -- becoming the 21st-century stethoscope!

Abstract: I will discuss recent research towards autonomous systems that learn reward functions and policies from human demonstrations and preferences. One problem that arises when learning from human input is that there is often a large amount of uncertainty over the human’s true intent and the corresponding desired robot behavior. To address this problem, I will discuss research along three fronts: (1) how to enable a robot to maintain efficient and accurate representations of its uncertainty, (2) how a robot can use this representation of uncertainty to generate risk-averse solutions, and (3) how a robot can actively query for additional human feedback to reduce its uncertainty over the human’s intent and improve the robustness of its learned policy.

Presenter Bio: Daniel is a postdoc at UC Berkeley, advised by Anca Dragan and Ken Goldberg. His research interests include robot learning, reward inference, AI safety, and multi-agent systems. In 2021, Daniel was selected as a Robotics: Science and Systems Pioneer. He received his Ph.D. in computer science from the University of Texas at Austin, where he worked with Scott Niekum on safe imitation learning. Prior to starting his PhD, Daniel worked for the Air Force Research Lab's Information Directorate where he studied bio-inspired swarms and multi-agent planning.

Abstract: Digital advertising is growing at about $60B per year globally and not showing any signs of slowing down. LoKnow Inc. is right in the middle of this industry, working on some of the most interesting opportunities, many of them with AI and ML possibilities. Gain some insight from the executive team and the technology team during an intro to the challenges in this industry and LoKnow’s vision of having everyone benefit from, and love, the vast world of digital advertising.

Presenter Bios: Tyler Hanson wishfully describes himself as a “senior millennial,” but he is unavoidably the office dad. Whether he’s dispensing wisdom or busting guts with his subtle wit, he’s always happy to lend his guidance to the LoKnow team. A “Knower” since day one, he had served on LoKnow’s Board of Directors since the company’s founding before stepping into the role of President in 2020. A mechanical engineer and proud alumnus of the University of Alberta, Tyler is no stranger to the boardroom. He likes to give back to his alma mater by volunteering his time, serving on the U of A Senate and as President of the Alumni Association.

Nolan McMahon is our resident machine learning specialist, but he himself is a learning machine. Graduating from the University of Calgary with an honours degree in physics, Nolan discovered his passion for machine learning when he got to test the efficacy of quantum computers. He spends a lot of his time studying and reading university-level textbooks on physics and electrical engineering. When he really wants to relax, Nolan plays computer games that tickle his strategy-loving nature. An avid learner, Nolan thinks of his university experience as learning how to learn, and is fascinated by teaching. He particularly looks up to physicist Richard Feynman and his easy-to-follow approach to teaching difficult concepts. He would someday like to delve deeper into both physics and electrical engineering.

Sept 24, 2021:
Self-Adaptive Visual Learning
-
Yang Wang, University of Manitoba

Abstract: There have been significant advances in computer vision in the past few years. Despite this success, current computer vision systems are still hard to use or deploy in many real-world scenarios. In particular, current computer vision systems usually learn a generic model, but in real-world applications a single generic model is often not powerful enough to handle the diverse scenarios. In this talk, I will introduce some of our recent work on self-adaptive visual learning. Instead of learning and deploying one generic model, our goal is to learn a model that can effectively adapt itself to different environments during testing. I will present examples from several computer vision applications, such as crowd counting, anomaly detection and personalized highlight detection.

Presenter Bio: Yang Wang is an associate professor in the Department of Computer Science, University of Manitoba. He is currently on leave and working as the Chief Scientist in Computer Vision at Noah's Ark Lab, Huawei Technologies Canada. He received his PhD from Simon Fraser University, his MSc from the University of Alberta, and his BEng from the Harbin Institute of Technology. Before joining UManitoba, he worked as an NSERC postdoc at the University of Illinois at Urbana-Champaign. His research focuses on computer vision and machine learning. He received the 2017 Falconer Emerging Researcher Rh Award in applied science at the University of Manitoba. He currently holds the inaugural Faculty of Science research chair in fundamental science at UManitoba.

September 17, 2021:
Normalizing Flows in Theory and Practice
-
Marcus Brubaker, York University

Abstract: Normalizing Flows (NFs) are a class of generative models that has been growing in popularity recently. NFs are a highly expressive form of generative model that allows for efficient sampling and exact likelihood computation, and they are quickly becoming competitive with GANs. This talk will provide a brief review of NFs and present a selection of results from my group on both theoretical aspects of flows and their practical application. In particular, I will: show theoretical results around the tail behavior of typical NF architectures; show an application of NFs to modeling realistic image noise in camera systems; show how NFs can be successfully applied to high resolution signals; and show how NFs can be used to build more expressive but analytically tractable stochastic processes.
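For readers new to NFs, the core mechanic is the change-of-variables formula: an invertible map pushes a simple base density forward to the data density, and the log-likelihood picks up the log-determinant of the Jacobian. A minimal sketch with an elementwise affine flow (the scale/shift parameters are illustrative assumptions, not a trained model):

```python
import numpy as np

def affine_flow_log_prob(x, scale, shift):
    # x = f(z) = scale * z + shift, so z = (x - shift) / scale and
    # log p(x) = log N(z; 0, 1) + log |d z / d x|, summed over dimensions.
    z = (x - shift) / scale
    log_base = -0.5 * (z ** 2 + np.log(2.0 * np.pi))  # standard normal base
    log_det = -np.log(np.abs(scale))                  # log-det of inverse Jacobian
    return float(np.sum(log_base + log_det))
```

Since an affine flow of a standard normal base is just a Normal(shift, scale) distribution, this sketch can be sanity-checked against the Gaussian density directly.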

Presenter Bio: Marcus Brubaker is an Assistant Professor at York University, a Faculty Affiliate of the Vector Institute and an Adjunct Professor at the University of Toronto. From 2018-2020 he worked as Research Director of Borealis AI, a machine learning research lab founded by the Royal Bank of Canada. He is an Associate Editor for IET Computer Vision and regularly serves as an area chair for machine learning and computer vision conferences including ECCV 2018, WACV 2019, UAI 2019, AAAI 2021 and CVPR 2021. Previously he worked on problems in electron cryomicroscopy (cryo-EM), human motion estimation, autonomous driving and Monte Carlo methods. His current research is focused on generative models and he continues to explore methods in cryo-EM.

September 3, 2021:
AI – An Industrial Roller Coaster Ride
- Shanny Lu & Hongbiao Tao,
DevFacto

*co-hosted w/ Technology Alberta*

Abstract: In this seminar, we will introduce our Data & Analytics practice at DevFacto Technologies. We will share experience and stories surrounding the three pillars of this practice, namely Data Engineering, Data Analytics and Data Management, with a focus on what it takes to bring an AI project from vision to life. Finally, we will demo building an end-to-end machine learning workflow using Microsoft Azure Machine Learning and Azure DevOps.

Presenter Bios: Shanny Lu is a Unit Manager and an experienced Data & Analytics Consultant at DevFacto. She earned her Master's degree in Computing Science at the University of Alberta and has 15+ years of industrial and academic experience in data engineering, full-stack software development and artificial intelligence.

Hongbiao Tao is a Senior Analytics Consultant at DevFacto. He received his Ph.D. from the Department of Chemical and Materials Engineering at the University of Alberta. He has a background in computational modelling and simulation, and has experience successfully deploying machine learning models to production.

August 27, 2021:
Learning Natural Sparse Representations by Fuzzy Tiling Activation
- Yangchen Pan,
University of Alberta

Abstract: Recent work has shown that sparse representations -- where only a small percentage of units are active -- can significantly reduce interference. Those works, however, relied on relatively complex regularization or meta-learning approaches that have only been used offline in a pre-training phase. In this work, we pursue a direction that achieves sparsity by design, rather than by learning. Specifically, we design an activation function that produces sparse representations deterministically by construction, and so is more amenable to online training. The idea relies on the simple approach of binning, but overcomes the two key limitations of binning: zero gradients for the flat regions almost everywhere, and lost precision -- reduced discrimination -- due to coarse aggregation. We introduce a Fuzzy Tiling Activation (FTA) that provides non-negligible gradients and produces overlap between bins that improves discrimination. We first show that FTA is robust under covariate shift in a synthetic online supervised learning problem, where we can vary the level of correlation and drift. Then we move to the deep reinforcement learning setting and investigate both value-based and policy gradient algorithms that use neural networks with FTAs, in classic discrete control and MuJoCo continuous control environments. We show that algorithms equipped with FTAs are able to learn a stable policy faster without needing target networks on most domains.
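The binning-with-overlap idea can be sketched in a few lines (the parameter names and the exact ramp shape below are simplifications of the paper's definition, for illustration only):

```python
import numpy as np

def fta(z, lower=-1.0, upper=1.0, delta=0.2, eta=0.2):
    # Fuzzy Tiling Activation sketch: map a scalar input to a sparse vector.
    # Each unit corresponds to a bin [c, c + delta); the activation is 1 inside
    # the bin and ramps linearly to 0 over a fuzz region of width eta, which
    # gives non-zero gradients near bin boundaries and overlap between bins.
    c = np.arange(lower, upper, delta)               # left edges of the bins
    z = np.asarray(z, dtype=float).reshape(-1, 1)
    d = np.maximum(c - z, 0.0) + np.maximum(z - (c + delta), 0.0)
    return np.maximum(1.0 - d / eta, 0.0)

out = fta(0.05)  # only the bins near 0.05 are active; the rest are exactly 0
```

Setting eta to 0 recovers plain one-hot binning, which is exactly the zero-gradient, no-overlap case the abstract describes.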

Presenter Bio: Yangchen is a PhD candidate at the University of Alberta, working with Martha White and Amir-massoud Farahmand (University of Toronto). He is broadly interested in machine learning, reinforcement learning and deep learning. More about Yangchen can be found on his website: https://yannickycpan.github.io/yangchenpan/

Abstract: Multi-voxel pattern analysis (MVPA) learns predictive models from task-based functional magnetic resonance imaging (fMRI) data, for distinguishing when subjects are performing different cognitive tasks — e.g., watching movies or making decisions. MVPA works best with a well-designed feature set and an adequate sample size. However, most fMRI datasets are noisy, high-dimensional, expensive to collect, and with small sample sizes. Further, training a robust, generalized predictive model that can analyze homogeneous cognitive tasks provided by multi-site fMRI datasets has additional challenges. In this presentation, we introduce the Shared Space Transfer Learning (SSTL) as a novel transfer learning (TL) approach that can functionally align homogeneous multi-site fMRI datasets, and so improve the prediction performance in every site. SSTL first extracts a set of common features for all subjects in each site. It then uses TL to map these site-specific features to a site-independent shared space in order to improve the performance of the MVPA. SSTL uses a scalable optimization procedure that works effectively for high-dimensional fMRI datasets. The optimization procedure extracts the common features for each site by using a single-iteration algorithm and maps these site-specific common features to the site-independent shared space. We evaluate the effectiveness of SSTL for transferring between various cognitive tasks. Our comprehensive experiments validate that SSTL achieves superior performance to other state-of-the-art analysis techniques. For more information, please visit the following link: https://papers.nips.cc/paper/2020/hash/b837305e43f7e535a1506fc263eee3ed-Abstract.html

Presenter Bio: Tony M. Yousefnezhad has been a Postdoctoral Fellow at the Department of Computing Science and the Department of Psychiatry at the University of Alberta since 2019. He completed his Ph.D. under the supervision of Prof. Daoqiang Zhang at the Department of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics (NUAA) in 2018. His Ph.D. research was fully funded by the China Scholarship Council (CSC). Tony is currently involved with several mental health projects, primarily in collaboration with Professor Russell Greiner and Professor Andrew Greenshaw. His primary research interests lie in developing machine and deep learning methods for solving big, complex real-world problems. Specifically, he is now working at the intersection of machine learning and computational neuroscience, where he is creating different techniques for decoding patterns of the human brain by exploiting distinctive biomarkers — i.e., fMRI, EEG, MEG, Health Records, etc. Tony is also the founder and lead developer of two open-source projects: easy fMRI (https://easyfmri.learningbymachine.com/) and easy Data (https://easydata.learningbymachine.com/). He has published several related papers in top conferences and journals, such as NeurIPS, AAAI, SDM, ICDM, ICONIP, IEEE Transactions on Cybernetics, IEEE Transactions on Cognitive and Developmental Systems, Nature Scientific Reports, etc. His publications and related projects are available at https://www.yousefnezhad.com/.

*co-hosted w/ Technology Alberta*

Abstract: If AI or ML is the steak in your sandwich, then data wrangling and data visualization are the bread that allows you to eat it. Together, these enabling skills make your models consumable. In this talk, we will propose a structured approach to data wrangling and look at different data visualization techniques to ensure accurate, accessible insights.

Presenter Bio: Daniel Haight is the President and co-founder of Darkhorse Analytics. He is a Certified Analytics Professional and an award-winning lecturer at the University of Alberta School of Business. His work has spanned healthcare, energy, marketing, professional sports, and transportation. He started his career at Mercer Management Consulting in Toronto advising senior management and jet-setting around the continent. Subsequently, he nearly made millions of dollars in a small Internet startup. Instead, he enjoyed the magnificent failure of the dotcom bust. Along the way, he started a used car dealership, purchased a second-hand trampoline for fifteen dollars, recorded a rock video, and fathered three children. His current work focuses on predictive analytics and data visualization. His goal is to help managers make better decisions by combining their experience with the power of analytics. His even bigger goal is to design a company where Monday mornings are even more exciting than Friday afternoons.

Abstract: Hindsight rationality is an approach to playing general-sum games that prescribes no-regret learning dynamics for individual agents with respect to a set of deviations, and further describes jointly rational behavior among multiple agents with mediated equilibria. To develop hindsight rational learning in sequential decision-making settings, we formalize behavioral deviations as a general class of deviations that respect the structure of extensive-form games. Integrating the idea of time selection into counterfactual regret minimization (CFR), we introduce the extensive-form regret minimization (EFR) algorithm that achieves hindsight rationality for any given set of behavioral deviations with computation that scales closely with the complexity of the set. We identify behavioral deviation subsets, the partial sequence deviation types, that subsume previously studied types and lead to efficient EFR instances in games with moderate lengths. In addition, we present a thorough empirical analysis of EFR instantiated with different deviation types in benchmark games, where we find that stronger types typically induce better performance.
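CFR-style algorithms such as EFR are built from simple regret minimizers; the most common building block is regret matching, which plays each action in proportion to its accumulated positive regret. A minimal standalone sketch (not EFR's time-selection or deviation machinery):

```python
import numpy as np

def regret_matching(cumulative_regrets):
    # Play each action with probability proportional to its positive regret;
    # if no action has positive regret, fall back to the uniform strategy.
    pos = np.maximum(np.asarray(cumulative_regrets, dtype=float), 0.0)
    total = pos.sum()
    if total > 0.0:
        return pos / total
    return np.full(len(pos), 1.0 / len(pos))

strategy = regret_matching([1.0, 3.0, -2.0])  # proportional to positive regrets
```

Averaging the strategies produced by this update over time is what yields the no-regret guarantees that hindsight rationality builds on.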

Presenter Bio: Dustin is a Ph.D. candidate at the University of Alberta and the Alberta Machine Intelligence Institute (Amii), working with Professor Michael Bowling. He works on multi-agent learning and scalable, dependable learning algorithms. He is a coauthor of DeepStack (https://www.deepstack.ai) and he created Cepheus's public match interface (http://poker-play.srv.ualberta.ca/). He completed a B.Sc. and M.Sc. in computing science at the University of Alberta, where his M.Sc. was also supervised by Michael Bowling. As an undergraduate, he worked with the Computer Poker Research Group (CPRG) to create an open-source web interface to play against poker bots (https://github.com/dmorrill10/acpc_poker_gui_client) and to develop the 1st-place 3-player Kuhn poker entry in the 2014 Annual Computer Poker Competition (ACPC).

July 23, 2021:
Policy and Heuristic-Guided Tree Search Algorithms
-
Levi Lelis, University of Alberta

Abstract: Heuristic search algorithms such as A* use a heuristic function to guide their search by focusing on states that are estimated to be closer to a goal state. In this talk we will explore the use of a policy, i.e., a probability distribution over actions, for solving single-agent deterministic problems. I will start by describing Levin tree search, an algorithm that uses a policy and offers guarantees on the number of nodes it needs to expand to solve state-space search problems. We will then discuss Policy-Guided Heuristic Search (PHS), a search algorithm that uses both a policy and a heuristic function to guide its search. PHS also offers guarantees on the number of nodes it needs to expand to solve search problems. I will then present empirical results showing the advantages of policy-guided tree search algorithms, especially when it is difficult to learn effective heuristic functions to guide the search.
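To make the idea concrete, here is a hedged best-first sketch of a Levin-tree-search-style expansion order, prioritizing nodes by depth divided by the policy's probability of the path (the cost function and the `successors`/`policy` interfaces are assumptions for illustration, not the talk's exact algorithm):

```python
import heapq
import itertools

def policy_guided_search(start, is_goal, successors, policy):
    # successors(s) -> iterable of (action, next_state); policy(s, a) -> prob.
    # Nodes are expanded in increasing order of (depth + 1) / pi(path),
    # where pi(path) is the product of action probabilities along the path.
    counter = itertools.count()   # tie-breaker so states are never compared
    frontier = [(1.0, next(counter), start, 0, 1.0, [])]
    expanded = set()
    while frontier:
        _, _, s, depth, pi, path = heapq.heappop(frontier)
        if is_goal(s):
            return path
        if s in expanded:
            continue
        expanded.add(s)
        for action, child in successors(s):
            p = pi * policy(s, action)
            if p > 0.0:
                cost = (depth + 2) / p  # child's depth is depth + 1
                heapq.heappush(frontier,
                               (cost, next(counter), child, depth + 1, p,
                                path + [action]))
    return None
```

A better policy concentrates probability on the goal-reaching path, which lowers that path's cost and lets the search expand far fewer nodes.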

Presenter Bio: Levi is an Assistant Professor in the Department of Computing Science at the University of Alberta and a Professor on leave from the Universidade Federal de Viçosa in Brazil. He received his Ph.D. in Computing Science in 2013 from the University of Alberta, studying under the supervision of Robert Holte (Amii Fellow and founding researcher) and Sandra Zilles. Levi has co-authored more than 45 refereed papers at venues such as the International Joint Conference on Artificial Intelligence (IJCAI), the conference for the Association for the Advancement of Artificial Intelligence (AAAI) and the Neural Information Processing Systems (NeurIPS) Conference. He has also served as a Senior Program Committee Member for IJCAI (where he received the honour of being the Distinguished Program Committee Member in 2018 and 2019) and AAAI and Program Committee Member for the Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE).

July 16, 2021:
Probabilistic Labels for Classification Tasks in Medical Images
- Roberto Vega Romero,
University of Alberta

Abstract: Deep learning approaches often require huge datasets to achieve good generalization. This complicates their use in tasks like image-based medical diagnosis, where small training datasets are usually insufficient to learn appropriate data representations. To compensate for the scarcity of data, we propose to provide more information per training instance in the form of probabilistic labels, which encode medical expert knowledge. We observe gains of up to 22% in the accuracy of models trained with these labels, as compared with traditional approaches, in three classification tasks: diagnosis of hip dysplasia, fatty liver, and glaucoma. The outputs of models trained with probabilistic labels are calibrated, allowing their predictions to be interpreted as proper probabilities. We anticipate this approach will apply to other tasks where few training instances are available and expert knowledge can be encoded as probabilities.
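The probabilistic-label idea can be sketched with ordinary cross-entropy: the hard 0/1 targets are simply replaced by expert-provided probabilities. The tiny logistic model below is an illustrative sketch, not the presenter's model.

```python
import numpy as np

def soft_label_logistic_regression(X, p, lr=0.1, epochs=500):
    """Fit a logistic model by minimizing cross-entropy against
    probabilistic targets p in [0, 1] rather than hard 0/1 labels.
    The gradient keeps the familiar form X^T (sigma(Xw) - p)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        q = 1.0 / (1.0 + np.exp(-X @ w))  # predicted probabilities
        w -= lr * X.T @ (q - p) / len(p)  # cross-entropy gradient step
    return w
```

Because the targets are themselves probabilities, a well-fit model's outputs track them directly, which is one intuition for the calibration the abstract reports.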

Presenter Bio: Roberto Vega is a PhD candidate at the University of Alberta working under the supervision of Russ Greiner. His research focuses on how to combine machine learning with medical expert knowledge to learn accurate predictive models. He also collaborates with the local startup MEDO.ai, where he works alongside their AI team to automatically analyze ultrasound images for the early detection of hip dysplasia and thyroid nodules.

*co-hosted w/ Technology Alberta*

Abstract: Work has changed for good. We’re collaborating digitally more than ever. This brings new challenges for leaders to successfully create productive, positive and inclusive workplaces. But while digital communities and online collaboration platforms present new challenges, they also present unique opportunities to build culture in ways that didn’t exist before. Machine learning can power an automated culture-building program by measuring sentiment, natural language and microaggressions on an anonymized, aggregated level and provide leaders, researchers and advocates with critical and actionable insights about their digital communities and team cultures.

Presenter Bio: Lana Cuthbertson is the CEO and Founder of Areto Labs, a B2B SaaS social enterprise startup that builds technology to make digital communities more positive and inclusive. Her career has taken her from journalism to communications and marketing to product and program management, allowing her to build expertise in technology, digital communications, leadership, innovation, change and knowledge management, and writing. She has worked in the technology, financial services, pharmaceuticals, post-secondary and nonprofit industries. She is a passionate advocate for gender equality. She created an AI-based Twitter bot to encourage women in politics called @ParityBOT, which she ran during elections in Canada, New Zealand and the U.S., and published results at the AI for Social Good workshop at NeurIPS 2019. She co-founded and chaired ParityYEG in 2018, and chaired the Alberta North chapter of Equal Voice from 2015 to 2017. She was named a Top 40 Under 40 by Avenue Edmonton in 2018.

Abstract: The main theme that drives my research is bringing big data and machine learning to bear on large-scale problems in science, engineering, medicine and public policy. I have spent the last 25 years developing novel prediction algorithms and modeling techniques for a range of decision-making problems in medicine, engineering, natural sciences, and social sciences with support from the NSF, ONR, NIH, DHS, DARPA, and the City of Houston. My recent projects are in the areas of medical decision making, forecasting weather and disease, analysis of social media data to understand public support for issues, detecting and analyzing the influence of bots on social media, neuroscience of human learning, and assessments of hurricane wind and rainfall flooding risks. In this talk, I will present an overview of four of my projects and summarize key lessons I learned from formulating and deploying machine learning solutions.

Presenter Bio: Devika Subramanian obtained her B.Tech. in Computer Science and Engineering from IIT Kharagpur, and her M.S. and Ph.D. in Computer Science from Stanford University. She is a full professor of Computer Science and Electrical Engineering at Rice University, where she has served on the faculty since 1996. She has given invited lectures on her work at many conferences and universities in the world. She has also won teaching awards at Stanford University, Cornell University and at Rice University.

Abstract: RUNWITHIT started in 2014, using AI-based modelling to address system complexity and risk, building synthetic realities around global digital systems to explore unprecedented futures. The work included modelling around a variety of communications, digital, AI, geospatial and IoT systems, and a modelling platform emerged capable of modelling tens of millions of diverse entities to explore emergent behaviour. This has led to RWI’s current role in supplying rapid Single Synthetic Environments based on an extensive, collaborative catalogue for multiple sectors. One example is energy. As threats to energy grids increase, connecting measures to ensure grid readiness, energy security, and resilience becomes critical. The additional pressures of electrification, decentralization, climate change, and cyber attacks demand more adaptive scenario planning, mitigating technologies, and education. Applications of artificial-intelligence-based modelling can make these complex futures more accessible to all stakeholders, as with a Single Synthetic Environment, a modelling approach that creates a living digital twin. These accurate geospatial environments include hyper-localized models of the people and businesses, the infrastructure, technology and policies, and then enable any future scenario to play forward. The models don’t rely solely on historical data or precedent; they produce the data required to compare and optimize different options based on quantified impacts and outcomes. This data can range from calculating psychosocial responses in customers, such as equity, engagement, adoption, trust, and willingness to pay, to measuring demand-side shifts created by new patterns of life, technology adoption, and climate shifts. For High Impact Low Frequency events, Single Synthetic Environments provide insight into the impact of investment choices, expose cascading vulnerabilities, and furnish opportunities to innovate resilience measures.
Tomorrow is not like yesterday, and there are incredible opportunities to improve the forward radar of energy transition, mobility and resilience assisted by scenario-based artificial intelligence.

Presenter Bio: Myrna Bittner is the CEO and Co-Founder of RWI Synthetics (2014), an AI-based modelling company that creates Single Synthetic Environments (SSEs) for disrupted sectors. SSEs are live, geospatially accurate cities or regions, complete with their people, activity, policies, and infrastructure, used to calculate the impacts of all kinds of opportunities and risks, existing and anticipated, under any conceivable scenario. The technology of RWI originates from a previous venture of Myrna’s in the ‘90s, NeuralVR, a neural net and 3D visualization research company focused on organizing, aggregating, and visualizing unstructured documents. In 2019, RWI created its first SSE of a city for the keynote at the IoT World Expo in Silicon Valley. In 2020, Myrna and her team were part of the Incubatenergy® Labs Challenge, creating a proof of concept of a dual disaster for EPRI and the Phoenix-based utility Salt River Project. Building on this, in 2021, EPRI and RWI partnered on a project showcased at AFWERX, reimagining energy for the USAF with Synthetic Bases. RWI is also a finalist in the Toyota Mobility Foundation’s City Architecture of Tomorrow Challenge, reimagining the future of mobility in Kuala Lumpur. RWI is a women-led, Certified Aboriginal Business, passionate about diversity and representation in their team, process, and technology, working with clients to ensure that intersectionality, diversity, equity, and inclusion are incorporated into the design of better futures.

May 28, 2021:
AI Health Coach
- Bruce Matichuk,
Health Gauge

Abstract: GPT-3 has become a popular tool for language processing. This talk will review GPT-3 and demonstrate some of its core capabilities and applications. One application of GPT-3 is implementing conversational systems. Health Gauge, an Edmonton-based company with a health information platform, is using GPT-3 to build a product called “AI Health Coach”. Wearable data and health journaling information are collected on the platform. The talk will review our development efforts to incorporate GPT-3 into Health Coach to assist with monitoring and managing health-related issues.

Presenter Bio: Bruce Matichuk is the Co-Founder and CTO of Health Gauge, an Edmonton-based company building a health information platform. Bruce has an MSc in Computing Science from the UofA and has worked as the CTO for several AI-based startups in the region, including Celcorp, Poynt, Clinitrust, and AiDANT. Bruce has published research in the area of AI and has filed several patents relating to the use of AI in industry. Bruce’s research focus is on intelligent agents, including automated code generation, visual recognition and conversational systems. Bruce also gives regular talks to industry on the emergence and use of AI.

Abstract: Edge computing is a promising paradigm that brings servers closer to users, leading to lower latencies and enabling latency-sensitive applications such as cloud gaming, virtual/augmented reality, telepresence, and telecollaboration. Due to the high number of possible edge servers and incoming user requests, the optimal choice of user-server matching has become a difficult challenge, especially in the 5G era, where the network can offer very low latencies. In this talk, we introduce the problem of fair server selection as not only complying with an application’s latency threshold but also reducing the variance of the latency among users in the same session. Due to the dynamic and rapidly evolving nature of such an environment and the capacity limitation of the servers, we propose as a solution a Reinforcement Learning (RL) method in the form of a Quadruple Q-Learning model with action suppression, Q-value normalization, and a reward function that minimizes the variance of the latency. Our evaluations in the context of a cloud gaming application show that, compared to existing methods, our proposed method not only better meets the application’s latency threshold but is also fairer, with a reduction of up to 35% in the standard deviation of the latencies experienced by users.
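Two ingredients from the abstract, action suppression and a variance-penalized reward, are easy to sketch. The functions below are toy illustrations under assumed interfaces, not the authors' Quadruple Q-Learning model.

```python
import numpy as np

def select_server(q_values, capacities, epsilon=0.1, rng=None):
    """Epsilon-greedy server selection with action suppression:
    servers with no remaining capacity are masked out of the argmax."""
    rng = rng or np.random.default_rng()
    allowed = np.flatnonzero(capacities > 0)
    if rng.random() < epsilon:
        return int(rng.choice(allowed))
    masked = np.full(len(q_values), -np.inf)
    masked[allowed] = q_values[allowed]
    return int(np.argmax(masked))

def fairness_reward(latencies, threshold, var_weight=1.0):
    """Reward that favours meeting the latency threshold while
    penalizing the variance of latencies across users in a session."""
    latencies = np.asarray(latencies, dtype=float)
    compliance = float(np.mean(latencies <= threshold))
    return compliance - var_weight * float(np.var(latencies)) / threshold**2
```

Masking infeasible servers keeps the learner from ever proposing an overloaded match, while the variance term in the reward is what steers it toward equal latencies within a session.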

Presenter Bio: Alaa Eddin Alchalabi is an AI enthusiast, futurist, educator, artist, and emerging AI scholar with a flair for research and a passion for teaching. Alaa is especially passionate about recent trends in applied AI research and multimedia systems, and has been involved in international collaborative projects, such as projects funded by the European Consortium (Erasmus+), the Turkish TUBITAK agency, the Canadian research council (NSERC), the Vector Institute, and the US-based National Science Foundation (NSF). The AI/multimedia projects he has worked on range from multi-authentication systems and AI-powered games to brain-controlled applications and edge cloud architectures. Alaa is currently pursuing his PhD in Electrical and Computer Engineering at the University of Ottawa, and holds the prestigious Ontario Trillium Award. Alaa is currently working at Swarmio Media as a research scientist.

Abstract: Mean-field theory provides an effective way of scaling multi-agent reinforcement learning algorithms to environments with many agents by abstracting other agents into a virtual mean agent. This talk will give a brief introduction to the recent research field of mean-field reinforcement learning, a very practical technique that is applicable to a range of real-world problems. Yet, there are some limiting assumptions of previous approaches in this area that prevent the wide applicability of these methods. First, all agents in the environment must be homogeneous. Second, the mean-field metric must be fully observable. I will discuss these assumptions and explain some of our research work that relaxes each of them. Specifically, I will introduce a multi-type mean-field reinforcement learning approach that relaxes the first assumption and a partially observable mean-field reinforcement learning approach that relaxes the second. Further, I will also provide practical algorithms for both these methods that have similar theoretical guarantees to previous algorithms in this area, in addition to stronger empirical performance than baselines on a set of large games with many agents.
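The mean-field abstraction itself is simple to state: each agent reacts not to every neighbour individually but to the empirical distribution of their actions. A minimal sketch (an illustrative helper, not the speaker's code):

```python
import numpy as np

def mean_action(neighbor_actions, n_actions):
    """Mean-field abstraction: summarize the neighbours by the empirical
    distribution of their discrete actions (the 'virtual mean agent')."""
    counts = np.bincount(neighbor_actions, minlength=n_actions)
    return counts / counts.sum()
```

A mean-field Q-function is then conditioned on this fixed-size vector instead of the joint action, which is what makes the approach scale; the talk's multi-type and partially observable variants relax who is averaged and how the average is estimated.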

Presenter Bio: Sriram is a Ph.D. student in the Department of Electrical and Computer Engineering at the University of Waterloo. He is also a postgraduate affiliate at the Vector Institute, Toronto. His primary research interest is in the area of multi-agent systems. Particularly he is interested in the issues of scale, non-stationarity, communication, and sample complexity in multi-agent learning systems. His research is motivated by the field of computational sustainability. His long-term research vision is to make multi-agent learning algorithms applicable to a variety of large-scale real-world problems. Before starting the Ph.D. program, he obtained a Master's in Electrical and Computer Engineering at the University of Waterloo, Canada in 2018 and a Bachelor's in Geomatics from Anna University, India in 2016. He was the recipient of prestigious fellowships such as the MITACS Globalink Research award, MITACS Graduate Fellowship, Pasupalak fellowship in AI, and Vector postgraduate research award. He has also worked as a research intern in Borealis AI - Edmonton and Waterloo labs.

May 7, 2021:
Doing Some Good with Machine Learning
- Lester Mackey, Microsoft Research

*this presentation was pre-recorded; the recording is linked above*

Abstract: This is the story of my assorted attempts to do some good with machine learning. Through its telling, I’ll highlight several models of organizing social good efforts, describe half a dozen social good problems that would benefit from our community's attention, and present both resources and challenges for those looking to do some good with ML.

Presenter Bio: Lester Mackey is a Principal Researcher at Microsoft Research, where he develops machine learning methods, models, and theory for large-scale learning tasks driven by applications from healthcare, climate forecasting, and the social good. Lester moved to Microsoft from Stanford University, where he was an assistant professor of Statistics and (by courtesy) of Computer Science. He earned his PhD in Computer Science and MA in Statistics from UC Berkeley and his BSE in Computer Science from Princeton University. He co-organized the second place team in the Netflix Prize competition for collaborative filtering, won the Prize4Life ALS disease progression prediction challenge, won prizes for temperature and precipitation forecasting in the yearlong real-time Subseasonal Climate Forecast Rodeo, and received best paper and best student paper awards from the ACM Conference on Programming Language Design and Implementation and the International Conference on Machine Learning.

April 30, 2021:
Few-Shot Learning: Are We Making Progress?
-
Ismail Ben Ayed, Ecole de Technologie Superieure (ETS)

Abstract: Despite their unprecedented performances when trained on large-scale labeled data, deep-learning models are seriously challenged when dealing with novel (unseen) classes and limited labeled instances. In contrast, humans can learn new tasks easily from a handful of examples, by leveraging prior experience and context. Few-shot learning attempts to bridge this gap, and has recently triggered substantial research efforts. This talk discusses recent developments in the general, wide-interest subject of learning with limited labels. Specifically, I will discuss state-of-the-art models, which leverage unlabeled data with structural priors, and connect them under a unifying information-theoretic perspective. Furthermore, I will highlight recent results, which point to important limitations of the standard few-shot benchmarks, and question the progress made by an abundant recent few-shot literature, mostly based on complex meta-learning strategies. Classical and simple losses, such as the Shannon entropy or Laplacian regularization, well-established in clustering and semi-supervised learning, achieve outstanding performance.
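The "classical and simple losses" mentioned at the end are genuinely simple; for instance, transductive entropy minimization just penalizes the mean Shannon entropy of the predictions on the unlabeled query set. A minimal sketch (not the speaker's code):

```python
import numpy as np

def shannon_entropy_loss(probs):
    """Mean Shannon entropy of predicted class probabilities on unlabeled
    query examples; minimizing it pushes predictions toward confident,
    well-separated clusters."""
    probs = np.clip(probs, 1e-12, 1.0)
    return float(-np.mean(np.sum(probs * np.log(probs), axis=1)))
```

Minimizing this loss over the query set, alongside the usual supervised loss on the few labeled shots, is one of the simple baselines the talk contrasts with complex meta-learning strategies.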

Presenter Bio: Ismail Ben Ayed is currently a Full Professor at ETS Montreal. He is also affiliated with the CRCHUM. His interests are in computer vision, optimization, machine learning and medical image analysis algorithms. Ismail authored over 100 fully peer-reviewed papers, mostly published in the top venues of those areas, along with 2 books and 7 US patents. In the recent years, he gave over 30 invited talks, including 4 tutorials at flagship conferences (MICCAI’14, ISBI’16, MICCAI’19 and MICCAI’20). His research has been covered in several visible media outlets, such as Radio Canada (CBC), Quebec Science Magazine and Canal du Savoir. His research team received several recent distinctions, such as the MIDL’19 best paper runner-up award and several top-ranking positions in internationally visible contests. Ismail served as Program Committee for MICCAI’15, MICCAI’17 and MICCAI’19, and as Program Chair for MIDL’20. Also, he serves regularly as reviewer for the main scientific journals of the field, and was selected several times among the top reviewers of the main conferences in vision/learning (such as CVPR’15 and NeurIPS’20).

Abstract: Vertebral fracture diagnosis is difficult even for expert radiologists, but is integral to the care of patients with osteoporosis. Features are often subtle, leading to underdiagnosis, but may also be mimicked by normal variants, leading to overdiagnosis: a "perfect storm." We discuss our experience with convolutional neural networks (CNNs) for automating vertebral fracture recognition from low-resolution DXA images. Recently, we have applied transfer learning and active learning to address different scan modes and generalization to other cohorts.

Presenter Bio: Dr. Leslie is Professor of Medicine and Radiology at the University of Manitoba with 500 peer-reviewed publications. His research interests are in fracture risk assessment, osteoporosis testing and other nuclear diagnostic techniques. Barret Monchka is a data analyst experienced in working with complex health data. Barret has a Bachelor of Computer Science and is currently completing an MSc in Community Health Sciences at the University of Manitoba.

Abstract: With the maturing of AI and multiagent systems research, we have a tremendous opportunity to direct these advances towards addressing complex societal problems. I focus on the problems of public health and conservation, and address one key cross-cutting challenge: how to effectively deploy our limited intervention resources in these problem domains. I will present results from work around the globe in using AI for HIV prevention, maternal and child care interventions, TB prevention and COVID modeling, as well as for wildlife conservation. Achieving social impact in these domains often requires methodological advances. To that end, I will highlight key research advances in multiagent reasoning and learning, in particular in computational game theory, restless bandits and influence maximization in social networks. In pushing this research agenda, our ultimate goal is to enable local communities and non-profits to directly benefit from advances in AI tools and techniques.

Presenter Bio: Milind Tambe is Gordon McKay Professor of Computer Science and Director of the Center for Research in Computation and Society at Harvard University; concurrently, he is also Director of "AI for Social Good" at Google Research India. He is a recipient of the IJCAI John McCarthy Award, the ACM/SIGAI Autonomous Agents Research Award from AAMAS, the AAAI Robert S. Engelmore Memorial Lecture Award, the INFORMS Wagner Prize, the Rist Prize of the Military Operations Research Society, the Columbus Fellowship Foundation Homeland Security Award, over 25 best papers or honorable mentions at conferences such as AAMAS, AAAI and IJCAI, and meritorious commendations from agencies such as the US Coast Guard and the Los Angeles Airport. Prof. Tambe is a fellow of AAAI and ACM.

Abstract: As the use of machine learning in safety-critical domains becomes widespread, the importance of proactively addressing sources of failure and evaluating model reliability has increased. Achieving this, however, can be difficult because the performance and reliability of ML models are vulnerable to being overly dependent on the "context" (i.e., artifacts specific to the training dataset) on which the model was trained. In this talk, we will overview work in this area over the last five years and describe in more detail two example state-of-the-art approaches tackling challenges in building safe and reliable AI. The first describes causally-inspired learning algorithms which allow model developers to specify potentially problematic changes in context and then learn models which are guaranteed to be stable to these shifts. The second tackles monitoring for safety: evaluating the stability of a model to changes in setting or population typically requires applying the model to a large number of independent datasets. Since the cost of collecting such datasets is often prohibitive, we will describe a distributionally robust framework for evaluating this type of robustness using the fixed, available evaluation data. This talk will be jointly presented by Prof. Suchi Saria and Adarsh Subbaswamy.

Presenter Bio: Dr. Suchi Saria is the John C. Malone Associate professor of computer science and statistics at the Whiting School of Engineering and of health policy at the Bloomberg School of Public Health. She is also the founding Research Director of the Malone Center for Engineering in Healthcare at Hopkins and the founder of Bayesian Health, which leverages state-of-the-art machine learning and behavior change expertise to unlock improved patient care outcomes at scale by providing real-time precise, patient-specific, and actionable insights at the point of care. Recently, Dr. Saria won a grant from the FDA and is collaborating with them in the development of frameworks for the evaluation of safety and reliability of AI. She was named by IEEE Intelligent Systems as Artificial Intelligence’s “10 to Watch” (2015), MIT Technology Review’s ‘35 Innovators under 35’ (2017), World Economic Forum’s Young Global Leader (2018), DARPA Young Faculty Awardee (2016) and a Sloan Research Fellow (2018). She was invited to join the National Academy of Engineering’s Frontiers of Engineering (2017) and the National Academy of Medicine’s Emerging Leaders in Health and Medicine (2018). She has given over 250 invited talks and is on the editorial board of the Journal of Machine Learning Research.

Abstract: Sample-based planning is a powerful family of algorithms for generating intelligent behavior from a model of the environment. Generating good candidate actions is key to the success of sample-based planners, especially in large action spaces. Candidate actions usually i) exhaust the entire search space, ii) are handcrafted using domain knowledge, or iii) are produced using a learned policy. We show how we can explicitly learn a candidate action generator by optimizing a novel objective, marginal utility. The marginal utility measures the increase in utility of an action over previous actions. We validate our candidate action generator in domains with both continuous and discrete action spaces. We show how planning with an explicit candidate generator outperforms planners instantiated with handcrafted actions, trained stochastic policies, and other natural objectives for generating actions.
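The greedy structure implied by marginal utility can be sketched directly: each candidate action is chosen to maximize its utility gain over the set already selected. The set-cover toy below is an illustrative assumption, not the authors' learned generator.

```python
def greedy_candidates(actions, utility, k):
    """Greedily pick k candidate actions, each maximizing its marginal
    utility: the increase in utility over the actions chosen so far."""
    chosen = []
    best = utility(chosen)
    for _ in range(k):
        gains = {a: utility(chosen + [a]) - best
                 for a in actions if a not in chosen}
        a = max(gains, key=gains.get)
        chosen.append(a)
        best += gains[a]
    return chosen
```

The work described above goes further by training a generator against this marginal-utility objective so that diverse, complementary candidates are proposed directly; the greedy loop here just makes the objective concrete.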

Presenter Bio: Zaheen is a PhD student at the University of Alberta working with Michael Bowling and Levi Lelis. His research interests lie in artificial intelligence, with a focus on search, learning and games. His current work explores how search and learning methods can be effectively combined to produce stronger agents for adversarial domains. Zaheen also received his MSc at the University of Alberta, working with Robert Holte, and he is the lead graduate student member of the Computer Curling Research Group.

Abstract: A successful approach to playing two-player, zero-sum games has been to deploy a static artifact resembling a Nash equilibrium, which has led work in artificial intelligence to focus on computing such artifacts. This approach is less sound and has been less successful in multi-player, general-sum games. We suggest instead to field learning algorithms that ensure strong performance in hindsight relative to "deviations", i.e., pre-defined behavior modifications. A society of such "hindsight rational" agents converges toward mediated equilibrium, a traditional notion of equilibrium based on average correlated play rather than factored behavior, in contrast to Nash equilibrium. We re-examine deviation types and mediated equilibria in extensive-form games to gain a more complete understanding and resolve past misconceptions. We introduce a new deviation type that has implicitly formed the basis for the counterfactual regret minimization (CFR) algorithm. We show hints of performance improvements that can be gained by only changing CFR's target deviation set in both zero-sum and non-zero-sum games.

Presenter Bio: Dustin is a Ph.D. candidate at the University of Alberta and the Alberta Machine Intelligence Institute (Amii) working with Professor Michael Bowling. He works on multi-agent learning and scalable, dependable learning algorithms. He is a coauthor of [DeepStack](https://www.deepstack.ai) and he created [Cepheus's public match interface](http://poker-play.srv.ualberta.ca/). He completed a B.Sc. and M.Sc. in computing science at the University of Alberta, where his M.Sc. was also supervised by Michael Bowling. As an undergraduate, he worked with the Computer Poker Research Group (CPRG) to create an [open-source web interface to play against poker bots](https://github.com/dmorrill10/acpc_poker_gui_client) and to develop the 1st-place 3-player Kuhn poker entry in the 2014 Annual Computer Poker Competition (ACPC).

March 19, 2021:
Adversarial Partial Label Learning – Learning with Overcomplete Noisy Labels
-
Yuhong Guo, Carleton University

No recording available for this presentation

Abstract: Standard supervised learning requires a sufficient amount of precisely labeled training data, which is either expensive or impractical to obtain in real world scenarios. The ability to train prediction models given weak supervision can greatly increase the applicability of supervised learning. In this talk, I will introduce one specific type of learning under weak supervision — partial label learning — where true labels are mixed with irrelevant noisy labels on the training data, forming an overcomplete candidate label set for each instance. To address this problem, I will present an adversarial partial multi-label learning model, PML-GAN, which unifies label disambiguation and prediction model induction under a generalized encoder-decoder framework. PML-GAN achieves state-of-the-art partial multi-label learning performance. Moreover, I will also extend the study to model non-random label noise, presenting the first multi-level adversarial generation model for partial label learning and demonstrating its superior performance on commonly used partial label learning datasets.

Presenter Bio: Dr. Yuhong Guo is a Professor in the School of Computer Science at Carleton University, a Canada Research Chair in Machine Learning, and a Canada CIFAR AI Chair at Amii. She received her PhD in Computing Science from the University of Alberta, and has previously worked at the Australian National University and Temple University. Her research interests include machine learning, computer vision, and natural language processing. She has published over eighty papers in the top venues of these areas, and has received paper awards from both IJCAI and AAAI. She is an Associate Editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence and is on the Editorial Board of the Artificial Intelligence Journal. She has served on the Senior Program Committees for AAAI, IJCAI and ACML, and has served on the program committees for many other international conferences, including NeurIPS, ICML, ICLR, UAI, ACL, CVPR and ICCV.

Abstract: The vast majority of computer science literature on privacy can be broadly divided into two categories: differential, where the idea is to ensure that the participation of an entity or an individual does not change the outcome significantly, and inferential, where we try to bound the inferences an adversary can make based on auxiliary information.

In this talk, I will present two new case-studies, one in each framework. The first looks at a form of inferential privacy that allows more fine-grained control in a local setting than the individual level. The second looks at privacy against adversaries who have bounded learning capacity, and has ties to the theory of generative adversarial networks.
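As a concrete anchor for the differential framework described above (a standard textbook example, not part of the talk), the Laplace mechanism releases a numeric query after adding noise scaled to sensitivity/epsilon:

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    """Classic Laplace mechanism: adding Laplace noise with scale
    sensitivity / epsilon makes the released number
    epsilon-differentially private."""
    rng = rng or np.random.default_rng()
    return value + rng.laplace(0.0, sensitivity / epsilon)
```

Smaller epsilon means larger noise and stronger privacy; the talk's first case study refines the complementary inferential view rather than this mechanism.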

Joint work with Joseph Geumlek, Jacob Imola and Ashwin Machanavajjhala.

Presenter Bio: Kamalika Chaudhuri received a Bachelor of Technology degree in Computer Science and Engineering in 2002 from Indian Institute of Technology, Kanpur, and a PhD in Computer Science from University of California at Berkeley in 2007. After a postdoc at the Information Theory and Applications Center at UC San Diego, she joined the CSE department at UC San Diego as an assistant professor. She received an NSF CAREER Award in 2013 and a Hellman Faculty Fellowship in 2012.

Kamalika's research interests are in the foundations of trustworthy machine learning, which includes problems such as learning from sensitive data while preserving privacy, learning under sampling bias, and in the presence of an adversary. She is particularly interested in privacy-preserving machine learning, which addresses how to learn good models and predictors from sensitive data, while preserving the privacy of individuals.

February 26, 2021:
Autonomous Novelty-Based Exploration of Mars
-
Kiri Wagstaff, NASA; Oregon State U

Abstract: Current Mars surface exploration is primarily pre-scripted on a day-by-day basis. Mars rovers have a limited ability to autonomously select targets for follow-up study that match pre-defined target signatures. However, when exploring new environments, we are also interested in observations that differ from what has previously been seen. I will describe how Mars rovers can use novelty measures to help select observation targets with the goal of accelerating discovery. In addition, I will discuss the constraints imposed by operating 100 million miles from Earth with limited computational resources.
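A simple way to make "novelty" operational (a generic sketch, not the rover's actual algorithm) is to score each candidate observation by its distance to the nearest observation already seen:

```python
import numpy as np

def novelty_scores(candidates, seen):
    """Score each candidate observation (as a feature vector) by its
    Euclidean distance to the nearest previously seen observation;
    higher scores are more novel."""
    diffs = candidates[:, None, :] - seen[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    return dists.min(axis=1)
```

Targets with the highest scores differ most from what has been observed, which is the behaviour the abstract contrasts with matching pre-defined target signatures.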

Presenter Bio: Dr. Kiri L. Wagstaff is a Principal Researcher in machine learning at NASA's Jet Propulsion Laboratory and an associate research professor at Oregon State University. Her research focuses on developing new machine learning methods for use onboard spacecraft and in data archives for planetary science, astronomy, cosmology, and more. She earned a Ph.D. in Computer Science from Cornell University followed by an M.S. in Geological Sciences and a Master of Library and Information Science (MLIS). She received the Lew Allen Award for Excellence in Research and two NASA Exceptional Technology Achievement Medals, and she is a Senior Member of the Association for the Advancement of Artificial Intelligence. She is passionate about keeping machine learning relevant to real-world problems.

February 19, 2021:
Hardness of MDP planning with linear function approximation
-
Csaba Szepesvári, University of Alberta; DeepMind

Abstract: The Markov decision process (MDP) is a minimalist framework designed to capture the most important aspects of decision making under uncertainty, a problem of major practical interest that is thoroughly studied in reinforcement learning. The unfortunate price of the minimalist approach is that MDPs lack structure, and as such, planning and learning in MDPs with combinatorial-sized state and action spaces is strongly intractable: Bellman's curse of dimensionality is here to stay in the worst case. However, apparently Bellman and his co-workers realized as early as the 1960s that for many problems of practical interest, the optimal value function of an MDP is well approximated using just a few basis functions of the kind commonly used in numerical calculations. As knowing the optimal value function is essentially equivalent to knowing how to act optimally, one hopes that there will be algorithms that can efficiently compute the few approximating coefficients. If this is possible, we can think of the algorithm as computing the value function in a compressed space. However, until recently not much has been known about whether, and when, these compressed computations are possible. In this talk, I will discuss a few recent results (some positive, some negative) concerning these compressed computations and conclude with some open problems.
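The premise of the abstract — that if the optimal value function is (approximately) linear in a few basis functions, only a few coefficients need to be computed — can be illustrated with a toy sketch. This is a hypothetical, noiseless, exactly-realizable case for intuition only, not any result from the talk: with d basis functions, d coefficients determine the values of all n states, and a small sample of states suffices to recover them.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, d = 1000, 5                       # large state space, few basis functions
Phi = rng.standard_normal((n_states, d))    # feature matrix: row s is phi(s)
theta_true = rng.standard_normal(d)
V = Phi @ theta_true                        # value function exactly linear in the features

# "Compressed" computation: recover the d coefficients from a small
# random sample of states instead of touching all n_states values.
sample = rng.choice(n_states, size=50, replace=False)
theta_hat, *_ = np.linalg.lstsq(Phi[sample], V[sample], rcond=None)

print(np.allclose(theta_hat, theta_true))   # -> True
```

The hard questions the talk addresses arise precisely when this idealized picture breaks: the value function is only approximately linear, and the values at sampled states must themselves be estimated by a planner or learner.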

Presenter Bio: Csaba Szepesvári is a Canada CIFAR AI Chair, the team-lead for the “Foundations” team at DeepMind, and a Professor of Computing Science at the University of Alberta. He earned his PhD in 1999 from Jozsef Attila University in Szeged, Hungary. In addition to regularly publishing at top-tier journals and conferences, he has (co-)authored three books. Currently, he serves as an action editor of the Journal of Machine Learning Research and as an associate editor of the Mathematics of Operations Research journal, in addition to serving regularly on program committees of various machine learning and AI conferences. Dr. Szepesvári's main interest is developing principled, learning-based approaches to artificial intelligence (AI). He is the co-inventor of UCT, an influential Monte-Carlo tree search algorithm, a variant of which was used in the AlphaGo program which, in a landmark match, defeated the top Go professional Lee Sedol in 2016, ten years after the invention of UCT. In 2020, Dr. Szepesvári co-founded the weekly “Reinforcement Learning Theory virtual seminar series”, which showcases top theoretical work in reinforcement learning and is open to attendees from all over the world.

Abstract:
Background: COVID-19 outbreaks in racialized communities have exacerbated disparities in poverty and illness. The Illuminate Project generated sensemaking data to understand the evolving impacts of COVID-19.

Methods: Participatory action research involving community partners in Edmonton, AB. From September to December 2020, twenty cultural health brokers and natural community leaders collected narratives in real time on the SenseMaker® platform. Compositional data patterns were described quantitatively; the narratives were analyzed qualitatively and visualized with social network analysis.

Results: 764 narratives from diverse communities illuminate the entangled and evolving nature of COVID-19's destabilizing impacts on families: prevention and management; care of non-COVID acute, chronic, and serious illness; maternal care; mental health and triggers of past trauma; financial insecurity; impacts on children, youth, and seniors; and legal concerns. Social network analysis visualizes the entanglement of these impacts. A core obstacle to holistic COVID-19 management was income instability; a key asset was community social capital. Compositional data showed the isolation of community members and the need for cultural brokering to facilitate access to formal health and social system supports. The findings generate recommendations to inform policy and community action.

Interpretation: The Illuminate Project has made visible the entangled issues with systemic roots that result in poor health in vulnerable members of ethnocultural communities, and the impact of COVID-19 on increasing basic needs and the time and effort needed to mitigate them. We illustrate cultural brokering as a practice to support people through this crisis and we propose concrete recommendations to inform policy to reduce harm, and support community resiliency.

Presenter Bio: Dr. Campbell-Scherer is a professor in the Department of Family Medicine and a family physician with clinical and research interests in evidence-based clinical practice and implementation science. She completed her residency in family medicine at McMaster University, then worked as a rural family physician prior to spending five years on faculty at the University of Michigan. She joined the Faculty of Medicine & Dentistry at the University of Alberta in 2009 and became the associate dean of the Office of Lifelong Learning and Physician Learning Program in 2017. She also leads an interdisciplinary research team – The 5As Team Program – which focuses on improving primary care for people living with obesity.

February 5, 2021:
AI in action - The WeStretch Story
-
Karen Willsey, Kasa

*co-hosted w/ Technology Alberta*

Abstract: In 2016, Karen rolled out of bed with a groan. Realizing that the aging process was only going to get stronger, she set out to find a solution for her aches and pains. Crossfit left her sore for days and yoga caused her to pull more muscles than stretch them. She decided to embrace her passion for technology and develop her own stretching app, one that would methodically get your body moving without the pain of traditional exercise. Working alongside a local physiotherapist, Karen figured out how to move every joint in every direction, providing lubrication to connective tissue and improving overall blood circulation. After two and a half years of trial and error, WeStretch was released to the App Store and Google Play. Since WeStretch is a stretching app with an animated virtual trainer, it was designed in Unity, with algorithms that use artificial intelligence to generate unique, customized stretching routines. In the time it has been on the market, WeStretch has been able to release updates that methodically target specific problem areas of the body, help athletes improve their sport with customized warmups and cool downs, and add routines that will help anyone improve their overall functional strength without equipment or going to the gym. (To see why functional strength is important, try going from standing to sitting to lying down three times in a row!) As a fun addition, WeStretch has added the “Alberta Tour”, which has Ada, your virtual trainer, stretching in front of some of the iconic landmarks of Alberta, such as the Three Sisters Mountain or Pinto McBean. Ultimately, Karen’s lessons with app creation are to just get started, fail fast then fail faster, and don’t be scared to embrace change.

Presenter Bio: Karen Willsey is a U of A alumna, graduating with a Bachelor of Science specializing in computing science in 1994. She was also a competitive gymnast on the U of A team, passing the Elite Canada exam and competing across Canada and the United States for the university. Karen has been married to her husband, Kevin, for 27 years and they have six children together, four of whom have studied at the U of A, and two under the age of 15. Growing up with an entrepreneurial background, Karen went from managing convenience stores to designing custom software for her father’s trucking company, to managing and owning residential and commercial real estate. Designing apps is her new passion and WeStretch is just her starting point.

January 29, 2021:
Machine Learning for Medical Imaging
-
Nils Forkert, University of Calgary

No recording available for this presentation

Abstract: In this talk, I will present selected applications of machine learning methods for medical image analysis for computer-aided diagnosis support and clinical research, which includes the use of conventional machine learning techniques as well as novel deep learning methods with a primary application to cerebrovascular and neurological diseases. Furthermore, novel research projects related to patient privacy protection and explainable artificial intelligence will be presented.

Presenter Bio: Dr. Nils Daniel Forkert is an Associate Professor at the University of Calgary in the Departments of Radiology and Clinical Neurosciences. He received his German diploma in computer science in 2009 from the University of Hamburg, his master’s degree in medical physics in 2012 from the Technical University of Kaiserslautern, his PhD in computer science in 2013 from the University of Hamburg, and completed a postdoctoral fellowship at Stanford University before joining the University of Calgary as an Assistant Professor in 2014. He is an imaging and machine learning scientist who develops new image processing methods, predictive algorithms, and software tools for the analysis of medical data. This includes the extraction of clinically relevant parameters and biomarkers from medical data describing the morphology and function of organs with the aim of supporting clinical studies and preclinical research as well as developing computer-aided diagnosis and patient-specific, precision-medicine, prediction models using machine learning based on multi-modal medical data. Dr. Forkert is a Canada Research Chair (Tier 2) in Medical Image Analysis, and Director of the Child Health Data Science Program of the Alberta Children's Hospital Research Institute as well as the Theme Lead for Machine Learning in Neuroscience of the Hotchkiss Brain Institute at the University of Calgary. He has published over 120 peer-reviewed manuscripts, over 50 full-length proceedings papers, over 130 conference abstracts, and 1 book. He has received major funding from the Natural Sciences and Engineering Research Council, the Heart and Stroke Foundation, Calgary Foundation, and the National Institutes of Health as a PI or co-PI. He currently supervises four postdoctoral fellows, six PhD students, and five MSc students demonstrating his dedication to training the next generation of data science researchers.

Abstract: Efficiently navigating a superpressure balloon in the stratosphere requires the integration of a multitude of cues, such as wind speed and solar elevation, and the process is complicated by forecast errors and sparse wind measurements. Coupled with the need to make decisions in real time, these factors rule out the use of conventional control techniques. This talk describes the use of reinforcement learning to create a high-performing flight controller for Loon superpressure balloons. Our algorithm uses data augmentation and a self-correcting design to overcome the key technical challenge of reinforcement learning from imperfect data, which has proved to be a major obstacle to its application to physical systems. We deployed our controller to station Loon balloons at multiple locations across the globe, including a 39-day controlled experiment over the Pacific Ocean. Analyses show that the controller outperforms Loon’s previous algorithm and is robust to the natural diversity in stratospheric winds. These results demonstrate that reinforcement learning is an effective solution to real-world autonomous control problems in which neither conventional methods nor human intervention suffice, offering clues about what may be needed to create artificially intelligent agents that continuously interact with real, dynamic environments.

Bio: Marlos C. Machado is a research scientist at DeepMind Alberta. His research interests lie broadly in artificial intelligence and particularly focus on reinforcement learning. He received his B.Sc. and M.Sc. from Universidade Federal de Minas Gerais, in Brazil, and his Ph.D. from the University of Alberta, where he introduced the idea of temporally-extended exploration through options. He was a research scientist at Google Brain from 2019 to 2021, during which time he made major contributions to reinforcement learning, in particular the introduction of an operator view of policy gradient methods and the application of deep reinforcement learning to control Loon’s stratospheric balloons. Marlos C. Machado is also an adjunct professor at the University of Alberta.

Abstract: The Alberta Strategy for Patient Oriented Research (AbSPORU) Support Unit was established in 2015 using funding from CIHR and Alberta Innovates and is a partnership between the Universities of Alberta, Calgary, and Lethbridge as well as Alberta Health Services and other stakeholders in Alberta’s Research & Innovation ecosystem. AbSPORU has created infrastructure to enable patient engagement in research, has improved and expedited data access and data management, provides the research community with training opportunities and expert methodological, data management, and statistical support, and is working to enhance the integration of research into clinical practice and policy making through investments in knowledge translation and implementation science. In our first 5 years, AbSPORU built capacity, capabilities, and partnerships for patient-oriented research (POR) in Alberta and worked to increase the quantity and quality of POR. In our next 5 years, AbSPORU will build on these partnerships and expertise to support Alberta’s integrated Learning Health System with the goal of achieving the quadruple aim: improving patient/provider experience, improving health outcomes for patients and the broader population, and improving health care efficiency/cost-effectiveness. In this lecture Dr. McAlister will review AbSPORU with a focus on those elements most relevant to the Alberta Machine Intelligence Institute.

Bio: Dr. McAlister is a general internist and has attended on the general medicine Clinical Teaching Units and the Heart Failure Clinic at the University of Alberta Hospital and the Mazankowski Alberta Heart Institute since 1994. He obtained his MD (1990) and completed his general internal medicine residency at the University of Alberta (1994). He completed his MSc in Epidemiology from the University of Ottawa (1998) and did post-doctoral training in clinical epidemiology at the Centre for Evidence-Based Medicine, Oxford University. Dr. McAlister’s research has included randomized trials, prospective cohort studies, systematic reviews, and health services research. He has published over 460 peer-reviewed manuscripts (h index 105), has received the Royal College of Physicians and Surgeons of Canada Gold Medal for Research (2005), the Canadian Society of Internal Medicine David Sackett Senior Investigator Award (2013), a Killam Professorship (2015), was the Canadian Royal College Osler Lecturer (2018), and is a Fellow of the Canadian Academy of Health Sciences. He has served as President of the Canadian Society of Internal Medicine (2009-2011), co-chair of the Canadian Hypertension Education Program Guideline Committee (2003-2006) and Outcomes Research Task Forces (2008-2014), Chair of the CIHR Health Services, Evaluation and Interventions Peer Review Committee (2007-2009), and Chair of the University Hospital Foundation Clinical Research Grant Competition (2011-2018).

January 8, 2021:
Optimal Control and Machine Learning in Robotics
-
Mo Chen, Simon Fraser University

Abstract: Autonomous mobile robots are becoming pervasive in everyday life, and hybrid approaches that merge traditional control theory and modern data-driven methods are becoming increasingly important. In the first half of the seminar, we begin with a discussion of safety verification methods and their computational and practical challenges. In the second half, we examine connections between optimal control and reinforcement learning, and between optimal control and visual navigation.

Bio: Mo Chen is an Assistant Professor in the School of Computing Science at Simon Fraser University, Burnaby, BC, Canada, where he directs the Multi-Agent Robotic Systems Lab. He completed his PhD in the Electrical Engineering and Computer Sciences Department at the University of California, Berkeley with Claire Tomlin in 2017, and received his BASc in Engineering Physics from the University of British Columbia in 2011. From 2017 to 2018, Mo was a postdoctoral researcher in the Aeronautics and Astronautics Department at Stanford University with Marco Pavone. His research interests include multi-agent systems, safety-critical systems, reinforcement learning, and human-robot interactions.