Past Seminars: 2022

All available AI Seminar recordings are posted on the Amii YouTube channel.
They can also be accessed by clicking on the individual presentation titles below.

December 23, 2022:
Cognitively Inspired Natural Language Processing  
Ning Shi, University of Alberta

YouTube Video Coming Soon!

Abstract:
Many breakthroughs in one field are inspired by others. For example, neural networks are inspired by the structure and function of the human brain, and attention mechanisms borrow ideas from psychology. In this talk, we will present our recent findings in natural language processing inspired by the laws of human cognition, such as knowledge fusion, systematic generalization, and imitation learning. In the first, we propose a plug-in framework called RoChBert to build a more robust language model explicitly for Chinese. By incorporating adversarial knowledge, we show how to fuse the necessary phonetic and glyph information into pre-trained representations to strengthen robustness. In the second, we investigate the extent to which neural networks can, like humans, systematically generalize from old concepts to new ones. We revisit this controversial topic from the perspective of meaningful learning, a concept from educational psychology. Our experimental results indicate that conventional sequence-to-sequence models can successfully one-shot generalize to novel concepts and compositions through shared semantic relationships, either inductively or deductively. In the third, we reformulate text editing as an imitation game using behavioral cloning. Specifically, we convert standard sequence-to-sequence data into state-to-action demonstrations and train an agent to mimic how humans revise texts iteratively, where the action space can be as flexible as needed. Overall, we hope this presentation will encourage and shed light on future studies at the junction of multiple fields.
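The last idea, converting a (draft, revision) pair into state-to-action demonstrations, can be illustrated with a toy conversion built on `difflib`. This is a hypothetical sketch; the action names (KEEP/DELETE/INSERT/REPLACE) are invented for illustration and are not the authors' actual action space.

```python
import difflib

# Hypothetical sketch: derive edit "actions" from a draft and its revision,
# as one might when building demonstrations for behavioural cloning.
def editing_actions(draft_tokens, revised_tokens):
    matcher = difflib.SequenceMatcher(a=draft_tokens, b=revised_tokens)
    actions = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            actions.append(("KEEP", draft_tokens[i1:i2]))
        elif tag == "delete":
            actions.append(("DELETE", draft_tokens[i1:i2]))
        elif tag == "insert":
            actions.append(("INSERT", revised_tokens[j1:j2]))
        else:  # "replace"
            actions.append(("REPLACE", draft_tokens[i1:i2], revised_tokens[j1:j2]))
    return actions

acts = editing_actions("the cat sat".split(), "the dog sat down".split())
# acts: KEEP "the", REPLACE "cat" -> "dog", KEEP "sat", INSERT "down"
```

Each action would then be paired with the intermediate draft (the state) to form one demonstration step.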


Presenter Bio:
Ning Shi is a 1st-year Ph.D. student working with Prof. Grzegorz Kondrak at the University of Alberta, associated with the Alberta Machine Intelligence Institute (Amii). Previously, he was a senior algorithm engineer at Alibaba Group and a machine learning engineer at Learnable. Ning received an M.S. degree in Computer Science at the Georgia Institute of Technology, an M.S. degree in Applied Data Science at Syracuse University, an M.S. degree in Management and Systems at New York University, and a B.Mgt. degree in E-commerce at Donghua University. His current research interests lie primarily in computational linguistics and natural language processing at their intersection with human cognition.

December 16, 2022:
Reinforcement Learning for Supply Chains 
Dhruv Madeka, Amazon

YouTube Video Coming Soon!

Abstract:
We present a Deep Reinforcement Learning approach to solving a periodic review inventory control system with stochastic vendor lead times, lost sales, correlated demand, and price matching. While this dynamic program has historically been considered intractable, we show that several policy learning approaches are competitive with or outperform classical baseline approaches. In order to train these algorithms, we develop novel techniques to convert historical data into a simulator and present a collection of results that motivate this approach. We also present a model-based reinforcement learning procedure (Direct Backprop) to solve the dynamic periodic review inventory control problem by constructing a differentiable simulator. Under a variety of metrics, Direct Backprop outperforms model-free RL and newsvendor baselines in both simulations and real-world deployments.
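For readers unfamiliar with the classical baselines mentioned above: the textbook newsvendor policy orders the demand quantile at the critical cost ratio cu / (cu + co). A minimal sketch, with hypothetical costs and simulated demand:

```python
import numpy as np

# Illustrative newsvendor baseline (one of the classical policies the talk
# compares against). Costs and the demand distribution are hypothetical.
def newsvendor_order_quantity(demand_samples, underage_cost, overage_cost):
    # Order the empirical demand quantile at the critical ratio cu / (cu + co).
    critical_ratio = underage_cost / (underage_cost + overage_cost)
    return float(np.quantile(demand_samples, critical_ratio))

rng = np.random.default_rng(0)
demand = rng.poisson(lam=20, size=10_000)  # simulated historical demand
q = newsvendor_order_quantity(demand, underage_cost=4.0, overage_cost=1.0)
# critical ratio 0.8, so q is roughly the 80th percentile of demand
```

The RL approaches in the talk go beyond this single-period rule by handling lead times, lost sales, and correlated demand.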


Presenter Bio:
Dhruv Madeka is a Principal Machine Learning Scientist at Amazon. His current research focuses on Natural Language Processing and applying Deep Reinforcement Learning to inventory management problems. Dhruv has also worked on developing generative and supervised deep learning models for probabilistic time series forecasting. In the past, Dhruv worked in the Quantitative Research team at Bloomberg LP, developing open-source tools for the Jupyter Notebook and conducting advanced mathematical research in derivatives pricing, quantitative finance, and election forecasting.

December 9, 2022:
Conformalized Fairness via Quantile Regression
Linglong Kong, University of Alberta

https://www.youtube.com/watch?v=4_YmJrGsexw&list=PLKlhhkvvU8-bFpoIPFTCVsYmqFVpt7OTz&index=26

Abstract:
Algorithmic fairness has received increased attention in socially sensitive domains. While a rich literature on mean fairness has been established, research on quantile fairness remains sparse but vital. To address this need and advocate for the significance of quantile fairness, we propose a novel framework to learn a real-valued quantile function under the fairness requirement of Demographic Parity with respect to sensitive attributes, such as race or gender, and thereby derive a reliable fair prediction interval. Using optimal transport and functional synchronization techniques, we establish theoretical guarantees of distribution-free coverage and exact fairness for the induced prediction interval constructed by fair quantiles. A hands-on pipeline is provided to incorporate flexible quantile regressions with an efficient fairness adjustment post-processing algorithm. We demonstrate the superior empirical performance of this approach on several benchmark datasets.
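As background, the standard conformalized-quantile-regression recipe that frameworks like this build on adjusts a quantile regressor's interval with a calibration-set correction so it attains the nominal coverage. The fairness post-processing described in the talk is a separate step not shown here; the data and "model" below are synthetic.

```python
import numpy as np

# Sketch of split-conformal calibration of a quantile-regression interval.
rng = np.random.default_rng(1)
alpha = 0.1  # target 90% coverage

def predicted_quantiles(x):
    # stand-in for a fitted quantile regressor; deliberately too narrow
    return x - 1.5, x + 1.5

# calibration split: conformity scores measure how far y falls outside
x_cal = rng.normal(size=500)
y_cal = x_cal + rng.normal(size=500)
lo, hi = predicted_quantiles(x_cal)
scores = np.maximum(lo - y_cal, y_cal - hi)
k = int(np.ceil((len(scores) + 1) * (1 - alpha)))
q_hat = np.sort(scores)[k - 1]  # conformal correction

# test split: the widened interval [lo - q_hat, hi + q_hat] is calibrated
x_test = rng.normal(size=2000)
y_test = x_test + rng.normal(size=2000)
lo_t, hi_t = predicted_quantiles(x_test)
coverage = np.mean((y_test >= lo_t - q_hat) & (y_test <= hi_t + q_hat))
```

The coverage guarantee here is distribution-free; the paper additionally synchronizes the fair quantiles across sensitive groups.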


Presenter Bio:
Dr. Linglong Kong is a professor in the Department of Mathematical and Statistical Sciences at the University of Alberta. He holds a Canada Research Chair in Statistical Learning, a Canada CIFAR AI Chair, and is a fellow of the Alberta Machine Intelligence Institute (AMII). His publication record includes more than 70 peer-reviewed articles in top journals such as AOS, JASA and JRSSB as well as top conferences such as NeurIPS, ICML, ICDM, AAAI, and IJCAI. Dr. Kong currently serves as associate editor of the Journal of the American Statistical Association, the Canadian Journal of Statistics, and Statistics and its Interface, as well as guest editor of Statistics and its Interface. Additionally, Dr. Kong is a member of the Executive Committee of the Western North American Region of the International Biometric Society, chair of the ASA Statistical Computing Session program, and chair of the webinar committee. He served as a guest editor of Canadian Journal of Statistics, associate editor of  International Journal of Imaging Systems and Technology, guest associate editor of Frontiers of Neurosciences, chair of the ASA Statistical Imaging Session, and member of the Statistics Society of Canada's Board of Directors. He is interested in the analysis of high-dimensional and neuroimaging data, statistical machine learning, robust statistics and quantile regression, as well as artificial intelligence for smart health. 

December 2, 2022:
Why do Machine Learning Projects Fail?   
Scott Greig, Willowglen Systems

*co-hosted w/ Technology Alberta*

https://www.youtube.com/watch?v=qrgjmlYPr5Y&list=PLKlhhkvvU8-bFpoIPFTCVsYmqFVpt7OTz&index=28

Abstract:
Up to 85% of ML projects either fail to be deployed or do not live up to their expected potential. In the critical infrastructure sector, there are three key areas of expertise needed for success. The first is knowledge and expertise in machine learning tools and techniques. The other two, industrial domain knowledge and operational deployment expertise, are just as important in achieving operational success but are much less frequently considered. This presentation will discuss the impact of industrial domain knowledge and operational deployment expertise on ML projects, using lessons learned from actual ongoing and completed projects.


Presenter Bio:
Scott is an alumnus of the University of Alberta (Computer Science) and has worked for over 30 years at companies responsible for bringing disruptive technologies to market. This includes being the first employee at BioWare, where he led technology development during its formative years. He is currently the product owner of SentientQ, Willowglen Systems' next-generation SCADA platform.

November 25, 2022:
Investigating Action Encodings in Recurrent Neural Networks in Reinforcement Learning  
Matthew Schlegel, University of Alberta

https://www.youtube.com/watch?v=6yCqjuuj90g&list=PLKlhhkvvU8-bFpoIPFTCVsYmqFVpt7OTz&index=25

Abstract:
Building and maintaining state to learn policies and value functions is critical for deploying reinforcement learning (RL) agents in the real world. Recurrent neural networks (RNNs) have become a key point of interest for the state-building problem, and several large-scale reinforcement learning agents incorporate recurrent networks. Yet while RNNs have become a mainstay in many RL applications, critical implementation details that improve performance are often under-reported. In this talk, we discuss one axis on which RNN architectures can be (and have been) modified for use in RL. Specifically, we investigate how action information is incorporated into the state update function of a recurrent cell. While action may seem an intuitive focus, several lines of research in cognitive science further highlight the importance of action in perception. We will discuss several architectural choices centered on action and empirically evaluate the resulting architectures on a set of illustrative domains. This empirical evaluation includes an analysis of the learned state in a prediction problem, behavioral experiments, and performance when observations take the form of images and agent-centric sensor readings. Finally, we will discuss future work in developing and analyzing recurrent cells, and key challenges needing attention in the partially observable setting.


Presenter Bio:
Matthew Schlegel is a PhD candidate at the University of Alberta in Edmonton, Alberta, Canada. He holds a BS in Physics and an MS in Computer Science, both from Indiana University Bloomington. His current PhD work focuses on reinforcement learning, specifically on understanding how agents may perceive their world. He works primarily on prediction making but has been known to dabble in control from time to time. His active research interests include: predictions as a component of intelligence (both artificial and biological), off-policy prediction and policy evaluation, deep learning and the resulting learned representations in the reinforcement learning context, and the discovery of important abstractions (described as predictions) through interaction.

November 18, 2022:

How to build predictive knowledge agents: stories from evaluation and discovery 
Alex Kearney, University of Alberta

YouTube Video Coming Soon!

Abstract:
In computational reinforcement learning, a growing body of work seeks to construct an agent's knowledge of the world through predictions of future sensations. This area of work, often referred to as Predictive Knowledge, is distinctive for its epistemic stance. More than a collection of machine learning methods, Predictive Knowledge positions itself as a theory of machine knowledge. In this talk, we challenge Predictive Knowledge’s perspective on truth: that an agent’s predictions are true knowledge if they can be verified by comparing the estimated value to what is observed by the agent. We then argue that the use of prediction estimates in further decision making is key to understanding the construction of knowledge in such systems. To explore this, we introduce a meta-gradient descent process by which an agent learns 1) what predictions to make, 2) the estimates for its chosen predictions, and 3) how to use those estimates to inform a control learner that maximizes future reward. 


Presenter Bio:
Alex Kearney is a PhD candidate in the RLAI lab supervised by Patrick Pilarski and Rich Sutton. Her work is focused on how artificial intelligence agents can construct knowledge by deciding both what to learn and how to learn, with minimal designer instruction. She enjoys reading smelly old books & learning about interdisciplinary relationships between ideas. Alex recently took up skateboarding to spend more time outside, and she sometimes likes to implement W3C social web specs for the indieweb.

November 11, 2022:
No Seminar

November 4, 2022:

A Dream of Cause and Fire: Musings on the Current and Future Uses of AI for Science
Mark Crowley, University of Waterloo

YouTube Video Coming Soon!

Abstract:
In this talk I'll give an overview of some of the recent research happening in my lab, and then highlight a couple of future directions I am excited about, particularly the use of Machine Learning in AI for Science research.


Presenter Bio:
Mark Crowley is an Associate Professor in the Department of Electrical and Computer Engineering at the University of Waterloo and is a member of the Waterloo Artificial Intelligence Institute. He and his students carry out research into single-agent and multi-agent Reinforcement Learning, large-scale 2D/3D image-like processing, and manifold learning/dimensionality reduction. Some of this research is motivated by theoretical opportunities, particularly in manifold learning and multi-agent reinforcement learning. But most of the work flows out of challenges raised by real-world domains including forest fire management, the automotive domain, medical imaging, and digital chemistry/material design.

October 28, 2022:
A general differentially private learning framework for decentralized data  
Bei Jiang, University of Alberta

YouTube Video Coming Soon!

Abstract:
Decentralized consensus learning, which minimizes a finite sum of expected objective functions over a network of nodes, has been hugely successful. However, the local communication across neighboring nodes in the network may lead to the leakage of private information. To address this challenge, we propose a general differentially private (DP) learning framework for decentralized data that applies to many non-smooth learning problems. We show that the proposed algorithm retains the performance guarantee in terms of stability, generalization, and finite sample performance. We investigate the impact of local privacy-preserving computation on the global DP guarantee. Further, we extend the discussion by adopting a new class of noise-adding DP mechanisms based on generalized Gaussian distributions to improve the utility-privacy trade-offs. Our numerical results demonstrate the effectiveness of our algorithm and its better performance over state-of-the-art baseline methods in various decentralized settings.
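To illustrate the noise-adding mechanisms mentioned above: a generalized Gaussian with shape parameter beta = 2 recovers the familiar Gaussian mechanism, while other beta values trade off tail behavior. This is a hypothetical sketch; the scale and shape parameters here are illustrative, not the paper's calibrated values.

```python
import numpy as np

# Samples follow the generalized Gaussian density p(x) ∝ exp(-(|x|/s)^beta);
# beta = 2 gives a Gaussian, beta = 1 a Laplace-like distribution.
def generalized_gaussian_noise(scale, beta, size, rng):
    # If Y ~ Gamma(1/beta, 1), then scale * Y**(1/beta) with a random sign
    # has the generalized Gaussian density above (standard change of variables).
    g = rng.gamma(shape=1.0 / beta, scale=1.0, size=size)
    magnitude = scale * g ** (1.0 / beta)
    sign = rng.choice([-1.0, 1.0], size=size)
    return sign * magnitude

rng = np.random.default_rng(2)
local_gradient = np.array([0.3, -0.1, 0.7])  # a node's local update
noisy_gradient = local_gradient + generalized_gaussian_noise(1.0, 2.0, 3, rng)
```

In a decentralized setting, each node would perturb its local quantity this way before communicating with its neighbors.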


Presenter Bio:
Dr. Bei Jiang is an Associate Professor in the Department of Mathematical and Statistical Sciences at the University of Alberta and a Fellow of the Alberta Machine Intelligence Institute (Amii). She received her PhD in Biostatistics in 2014 from the University of Michigan. Prior to joining the University of Alberta in 2015 as an Assistant Professor, she was a postdoctoral researcher in the Department of Biostatistics at Columbia University from 2014 to 2015. Her main research interests focus on statistical integration of multi-source and multi-modal data, and statistical disclosure control and learning methods for privacy protection. She has also worked closely with collaborators in women’s health, mental health, neurology, and industry partners to apply cutting-edge statistical learning methods to real-world applications.

October 21, 2022:

Responsibility for AI – Current Developments in EU Law 
Georg Borges, Saarland University

YouTube Video Coming Soon!

Abstract:
On the 28th of September 2022, the EU Commission published its long-awaited proposal for an AI Liability Directive, alongside a proposal for a revised product liability law. The new AI Liability Directive is expected to complement the draft AI Act, which had been proposed in February 2021 and has since gained much attention even outside Europe. The presentation will give a very brief overview of the proposed Directives and then examine whether the AI Liability Directive will be able to meet expectations (maybe not) and whether the underlying concepts are convincing (unfortunately, no). Interestingly, altogether, the proposed legislation would focus on the responsibility of the manufacturer of AI systems rather than that of the user.


Presenter Bio:
Georg Borges is a Professor of Civil Law, Legal Informatics, German and International Business Law and Legal Theory, and the managing director of the Institute for Legal Informatics at Saarland University, Germany. From 2004 to 2014, he was Professor of Law at Ruhr-University Bochum. Besides this, he was also a Judge at the State Court of Appeals, Hamm Circuit. As an expert on Business Law with a focus on IT Law, Prof. Borges has authored several books and numerous articles in the field of IT law. His current research relates to artificial intelligence, the internet of things, and data protection law.

October 14, 2022:
What I Learned from Dr. Fill
Matt Ginsberg, Google X

YouTube Video Coming Soon!

Abstract:
From 2012 through 2021, a computer program that I wrote called Dr.Fill participated in the American Crossword Puzzle Tournament, the premier crossword solving event in the world. In 2021, it won, thereby adding crosswords to the list of games at which computers outperform humans. This (primarily) nontechnical talk describes what this journey taught me – about search, about machine learning and large language models, about crosswords themselves, and about managing human interactions while displacing human champions at this very human pastime. And also a little bit about Jonathan Schaeffer. 


Presenter Bio:
Matt Ginsberg got a doctorate in astrophysics from Oxford when he was 24. He quickly came to his senses, however, switching to artificial intelligence and teaching at Stanford for a decade. He’s been on the front page of the New York Times and, surprisingly, was happy about it. He’s been a political columnist and published playwright, and constructs crosswords for the New York Times. He has written about a hundred technical papers, and one novel. 

October 7, 2022:

Technology Overview of Automated Shading Systems
Caitlyn Shum, AI Shading

*co-hosted with Technology Alberta*

YouTube Video Coming Soon!

Abstract:
With the increasing prevalence of heat waves and higher-than-average seasonal temperatures, commercial buildings struggle with overheating while consuming excessive energy for cooling. This is compounded by current architectural trends that favor high-rise buildings with high facade window areas. Automated shading systems have the capacity to reduce both peak and average energy consumption of such buildings by managing indoor solar exposure. AI Shading utilizes solar intensity alongside real-time weather inputs to create a customized control strategy to meet a building's unique energy reduction goals. Join us as we discuss the inner workings of our technology and our vision for technology development in the coming years. 


Presenter Bio:
Caitlyn is a Master's student in Mechanical Engineering at the University of Alberta. Passionate about building science and sustainability, her research focuses on the implementation of green retrofit strategies to improve building energy performance and indoor comfort. Caitlyn's current research project is in collaboration with AI Shading, an Alberta-based technology startup, which is working on a dynamic control strategy for automated window shades in commercial building applications.

September 30, 2022:
No Seminar

September 23, 2022:

Weakly-Supervised Questions for Zero-Shot Relation Extraction
Saeed Najafi, University of Alberta

https://www.youtube.com/watch?v=LF_kfiWP0aI&list=PLKlhhkvvU8-bFpoIPFTCVsYmqFVpt7OTz&index=20 

Abstract:
Zero-Shot Relation Extraction (ZRE) is the task of Relation Extraction where the training and test sets have no shared relation types. This very challenging domain is a good test of a model's ability to generalize. Previous approaches to ZRE reframed relation extraction as Question Answering (QA), allowing for the use of pre-trained QA models. However, this method required manually creating gold question templates for each new relation. Here, we do away with these gold templates and instead learn a model that can generate questions for unseen relations. Our technique can successfully translate relation descriptions into relevant questions, which are then leveraged to generate the correct tail entity.  On tail entity extraction, we outperform the previous state-of-the-art by more than 16 F1 points without using gold question templates.  On the RE-QA dataset where no previous baseline for relation extraction exists, our proposed algorithm comes within 1.5 F1 points of a system that uses gold question templates. Our model also approaches the state-of-the-art ZRE performance on the FewRel dataset, showing that QA models no longer need template questions to match the performance of models specifically tailored to the ZRE task. 


Presenter Bio:
Saeed Najafi is a second-year Ph.D. student at the University of Alberta advised by Dr. Alona Fyshe. He has a keen interest in modelling language through machine learning and studies Question-Answering methods during his Ph.D. research. 

September 16, 2022:
Methods for Improving Diagnosis of Infectious Diseases using Metagenomic Next-Generation Sequencing 
Katrina Kalantar, Chan Zuckerberg Initiative 

https://www.youtube.com/watch?v=xGEgAIEoJgs&list=PLKlhhkvvU8-bFpoIPFTCVsYmqFVpt7OTz&index=22 

Abstract:
Infectious diseases are a leading cause of morbidity and mortality worldwide. One persistent challenge in the mitigation of infectious diseases is the ability to accurately diagnose the etiology of infections using standard clinical diagnostics. Metagenomic next-generation sequencing (mNGS) has transformed disease surveillance by enabling the rapid, unbiased detection and identification of microbes without pathogen-specific reagents, culturing, or a priori knowledge of the microbial landscape. However, mNGS data analysis requires a series of computationally intensive processing steps to accurately determine the microbial composition of a sample, and downstream processing to translate the multidimensional data into actionable information. In this talk, I will present work by our team to develop tools and facilitate trainings that enable researchers around the world to analyze mNGS data, gaining further insight into the microbial composition of diverse sample types. I will then discuss how these tools have been applied alongside a variety of machine learning techniques to improve the diagnosis of infections in two distinct patient cohorts. In an initial study, these tools were applied to a cohort of 92 adults with acute respiratory failure due to infectious and non-infectious causes, and the development of logistic regression models enabled improved diagnosis of lower respiratory tract infection. In a recent follow-up study, support vector machines were trained to classify patients with and without sepsis amongst a heterogeneous cohort of critically ill patients. Altogether, this talk will highlight the value of mNGS for diagnosis of infectious diseases, the tools that underlie their development, and considerations in the early development of diagnostic tools.


Presenter Bio:
Katrina Kalantar is currently a Computational Biologist at the Chan Zuckerberg Initiative. Her work focuses on development of tools for analysis of metagenomic next-generation sequencing (mNGS) in the context of infectious diseases. She is deeply involved in capacity building efforts aimed at expanding and supporting researchers’ ability to perform NGS and data analysis globally. Her previous research focused on applying mNGS to improve diagnosis of lower respiratory tract infections and she continues to be involved in research through collaboration with scientists working on translational applications of mNGS technology. 

September 9, 2022:

Representation-based Reinforcement Learning   
Bo Dai, Research Scientist, Google Brain

https://www.youtube.com/watch?v=vfYg_PvfO0s&list=PLKlhhkvvU8-bFpoIPFTCVsYmqFVpt7OTz&index=19 

Abstract:
The majority of reinforcement learning (RL) algorithms are categorized as model-free or model-based according to whether a simulation model is used in the algorithm. However, both categories have their own issues, especially when incorporating function approximation: exploration with arbitrary function approximation is difficult in model-free RL algorithms, while optimal planning becomes intractable in model-based RL algorithms with neural simulators. In this talk, I will present our recent work on exploiting the power of representation in RL to bypass these difficulties. Specifically, we designed practical algorithms for extracting useful representations, with the goal of improving statistical and computational efficiency in the exploration vs. exploitation tradeoff and empirical performance in RL. We provide rigorous theoretical analysis of our algorithms and demonstrate superior practical performance over existing state-of-the-art empirical algorithms on several benchmarks.


Presenter Bio:
Bo Dai is a staff research scientist in Google Research, Brain Team. He obtained his Ph.D. from Georgia Tech. His research interest lies in developing principled and practical machine learning methods for reinforcement learning. He is a recipient of best paper awards at AISTATS and a NeurIPS workshop. He regularly serves as an area chair or senior program committee member at major AI/ML conferences such as ICML, NeurIPS, AISTATS, and ICLR.

September 2, 2022:
Lessons Learned Developing Predictive Models for Healthcare  
Robert Paproski, Chief Technology Officer, Nanostics Inc.  

*co-hosted with Technology Alberta*

https://www.youtube.com/watch?v=NlGW1fy_A-c&list=PLKlhhkvvU8-bFpoIPFTCVsYmqFVpt7OTz&index=18 

Abstract:
Over the past decade, significant interest has developed in using machine learning in medical devices to assist in the diagnosis and risk prediction of diseases. Many publications have demonstrated promising preliminary results, although the path to deploying a predictive model in the clinic is challenging from a technological and regulatory perspective. This presentation will discuss Nanostics’ work developing ClarityDX Prostate, a medical device which predicts clinically significant prostate cancer, including the challenges of obtaining FDA approval. Training predictive models on large, diverse, clinical datasets is vital for developing trustworthy models, although obtaining such datasets can be problematic. Potential solutions for working with large clinical datasets will be discussed.


Presenter Bio:
Robert Paproski is the co-founder and Chief Technology Officer of Nanostics which develops medical devices using machine learning models. Since earning his B.Sc. in Pharmacology and Ph.D. in Oncology at the University of Alberta, Dr. Paproski has developed expertise in laboratory assay development and computational analysis. Within Nanostics, Dr. Paproski oversees machine learning experiments, software development, and regulatory compliance for software products. 

August 26, 2022:

Non-Autoregressive Unsupervised Summarization with Length-Control Algorithms  
Puyuan Liu, University of Alberta

https://www.youtube.com/watch?v=0LWIyqNYots&list=PLKlhhkvvU8-bFpoIPFTCVsYmqFVpt7OTz&index=23 

Abstract:
Text summarization aims to generate a short summary for an input text and has extensive real-world applications such as headline generation. State-of-the-art summarization models are mainly supervised; they require large labeled training corpora and thus cannot be applied to less popular areas where paired data are rare, e.g., less widely spoken languages.

In this talk, I will present a non-autoregressive unsupervised summarization model, which does not require parallel data for training. Our approach first performs an edit-based search towards a heuristically defined score and generates a summary as pseudo-groundtruth. Then, we train an encoder-only non-autoregressive Transformer based on the search results. Further, we design two length-control algorithms for the model, which perform dynamic programming on the model output and are able to explicitly control the number of words and characters in the generated summary, respectively. Such length control is important for the summarization task, because the main evaluation metric for summarization systems, the ROUGE score, is sensitive to the summary length, and because real-world applications generally involve length constraints. Experiments on two benchmark datasets show that our approach achieves state-of-the-art performance for unsupervised summarization, yet largely improves inference efficiency. Further, our length-control algorithms are able to perform length-transfer generation, i.e., generating summaries of different lengths than the training target.
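The dynamic-programming length control described above can be illustrated with a toy decoder that, at each output slot, either emits its best word or a blank, and is forced to produce exactly a target number of words. This is a simplified sketch with invented scores, not the paper's exact algorithm, and it omits the character-level variant.

```python
import numpy as np

# dp[t][k] = best total log-prob after processing t slots with k words emitted.
def length_controlled_decode(word_logp, blank_logp, target_len):
    T = len(word_logp)
    NEG = -1e9  # sentinel for unreachable states
    dp = np.full((T + 1, target_len + 1), NEG)
    dp[0][0] = 0.0
    for t in range(T):
        for k in range(min(t, target_len) + 1):
            if dp[t][k] == NEG:
                continue
            # emit a blank at slot t (word count unchanged)
            dp[t + 1][k] = max(dp[t + 1][k], dp[t][k] + blank_logp[t])
            # emit the slot's best word (word count + 1)
            if k < target_len:
                dp[t + 1][k + 1] = max(dp[t + 1][k + 1], dp[t][k] + word_logp[t])
    return dp[T][target_len]

# best 2-word summary keeps the words at slots 0 and 2
score = length_controlled_decode([-0.1, -2.0, -0.3], [-1.0, -0.2, -1.5], 2)
```

Because each slot's score is available in parallel from a non-autoregressive model, this DP runs over the full output in one pass.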


Presenter Bio:
Puyuan Liu is currently a Master’s student at the Department of Computing Science, University of Alberta; he received a Bachelor’s degree in Computing Science from the same university in 2020. His research interest lies in Natural Language Processing, Deep Learning, and Reinforcement Learning. During the Master’s program, he published a paper on unsupervised summarization at ACL2022, a top-tier conference in natural language processing. 

August 19, 2022:
Brain + NLP > NLP? Towards the incorporation of human brain data into Natural Language Processing 
Alex Murphy, University of Birmingham

https://www.youtube.com/watch?v=wqmL3rJHWUw&list=PLKlhhkvvU8-bFpoIPFTCVsYmqFVpt7OTz&index=21 

Abstract:
Modern machine learning techniques have been shown to successfully encode and decode linguistic information from brain signals. It therefore seems a natural next step to use neurolinguistic data in ML models as an additional input stream for many NLP tasks. In the domain of vision, it has been shown that by forcing models to predict neural data (as well as the learned similarity representations from brain signals), models can become more robust and make “better / more natural mistakes”. This raises the question of whether this effect transfers to the domain of language / NLP with human data. This additional input stream provides many desirable properties, as the modelling process is less susceptible to idiosyncrasies of a single input modality (e.g. covariate shift, adversarial examples, and non-robustness). In this talk I will recount my journey tackling these issues during my PhD, using models to decode linguistic and semi-linguistic information from single-trial EEG data, and touching on various training methods that I have shown boost performance over directly training on single-trial EEG data. This work forms the core of my recent paper at ACL 2022 entitled “Decoding PoS from Human EEG”. A key theme throughout this talk is the confounding that arises when working with linguistic data, both in terms of the linguistic status of stimuli (which also generate strong neural responses) and biological confounds such as eye movements, whose interference can be even more pronounced in EEG data. I will then summarize some of the main issues I believe still face us and reflect on how these might be surmounted by utilizing recent developments in multimodal Transformer networks and prediction-based representation learning in Reinforcement Learning. These techniques afford a bridge by which both cognitive neuroscience and machine learning can jointly benefit from advances at the intersection of both domains.


Presenter Bio:
Alex has recently completed his PhD at the University of Birmingham (UK) in Cognitive Neuroscience & NLP. His interests span many varied topics across (neuro)linguistics, cognitive neuroscience, machine learning, and natural language processing. He holds a Bachelor's degree in Linguistics (Bangor University, Wales), alongside Master's degrees in Language Technology (University of Iceland) and IT & Cognition (University of Copenhagen, Denmark). During his PhD he was a Technical Intern on the Language Team at Google Brain (London), where he worked on analyzing brain data with state-of-the-art deep learning models. He is interested in how (human) neural data can be incorporated into natural language processing applications and, more widely, into other domains of artificial intelligence.

August 12, 2022:

Decentralized Mean Field Games 
Sriram Ganapathi Subramanian, University of Waterloo

https://www.youtube.com/watch?v=_SoE7nlV3KU&list=PLKlhhkvvU8-bFpoIPFTCVsYmqFVpt7OTz&index=17 

Abstract:
Multi-agent reinforcement learning algorithms have not been widely adopted in large-scale environments with many agents, as they often scale poorly with the number of agents. Using mean field theory to aggregate agents has been proposed as a solution to this problem by prior works. However, almost all previous works in this area make a strong assumption of requiring a centralized learning system where all the agents in the environment obtain global observations and/or are effectively indistinguishable from each other (i.e., learn the same policy in the limit). In this talk, I will provide a method that relaxes this assumption about requiring centralized learning protocols and propose a new mean field system known as Decentralized Mean Field Games, where each agent learns in a decentralized fashion based on its local observations, and can be quite different from others. Further, I will provide a theoretical solution concept and establish a fixed point guarantee for a Q-learning based iterative algorithm in this system. A practical consequence of our approach is that we can address a 'chicken-and-egg' problem in empirical mean field reinforcement learning algorithms. Notably, it is possible to design efficient (function approximation based) Q-learning and actor-critic algorithms that use the decentralized mean field learning approach. Empirically, these algorithms give stronger performance than common baselines in this area. In this setting, agents do not need to be clones of each other and learn in a fully decentralized fashion. Hence, for the first time, the application of mean field learning methods can be extended to fully competitive environments, large-scale continuous action space environments, and other environments with heterogeneous agents. Importantly, I will also present an application of the mean field method in a ride-sharing problem using a real-world dataset.
I will propose a decentralized solution to this problem, which is more practical than the centralized training approaches considered by prior research efforts. 
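As a toy illustration of the decentralized setting described above, the sketch below runs a tabular Q-learning update in which each agent conditions only on its own local state and a discretised empirical mean of the other agents' actions. The environment dynamics, reward, and discretisation here are invented for illustration and are not the algorithm from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS, N_STATES, N_ACTIONS = 10, 4, 3
ALPHA, GAMMA = 0.1, 0.9

# One independent Q-table per agent, indexed by (local state, own action,
# discretised mean action of the other agents) -- a toy stand-in for a
# decentralized mean-field learner.
Q = np.zeros((N_AGENTS, N_STATES, N_ACTIONS, N_ACTIONS))

def mean_action_bucket(actions, i):
    # Discretise the empirical mean action of all agents except agent i.
    others = np.delete(actions, i)
    return int(round(others.mean()))

states = rng.integers(N_STATES, size=N_AGENTS)
for _ in range(500):
    actions = rng.integers(N_ACTIONS, size=N_AGENTS)  # random exploration
    mu = [mean_action_bucket(actions, i) for i in range(N_AGENTS)]
    next_states = rng.integers(N_STATES, size=N_AGENTS)
    # Toy reward: each agent prefers to match the mean action of the others.
    rewards = -np.abs(actions - np.array(mu))
    for i in range(N_AGENTS):
        td_target = rewards[i] + GAMMA * Q[i, next_states[i], :, mu[i]].max()
        Q[i, states[i], actions[i], mu[i]] += ALPHA * (
            td_target - Q[i, states[i], actions[i], mu[i]])
    states = next_states
```

Each agent's update uses only quantities it could observe locally (its own state and the aggregate mean action), which is the key relaxation over centralized mean-field training.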


Presenter Bio:
Sriram is a PhD candidate in the department of Electrical and Computer Engineering at the University of Waterloo. He is also a postgraduate affiliate at the Vector Institute, Toronto. His primary research interest is in the area of multi-agent systems. Particularly, he is interested in the issues of scale, non-stationarity, communication, and sample complexity experienced by multi-agent learning algorithms. His research is motivated by the field of computational sustainability. His long-term research vision is to make multi-agent learning algorithms applicable to a variety of large-scale real-world problems and to bridge the widening gap between the theoretical understanding and empirical advances of multi-agent reinforcement learning.  

August 5, 2022:
How AI can improve human cooperation through suggesting follow-up action in modelled newscasts: The Human Cooperation Venture, countering Eliezer Yudkowsky's AGI Ruin and List of Lethalities 
Kim Solez, University of Alberta

https://www.youtube.com/watch?v=aGdgKq-9V68&list=PLKlhhkvvU8-bFpoIPFTCVsYmqFVpt7OTz&index=16 

Abstract:
The background for this presentation is described in this YouTube video: https://youtu.be/9m5kWBayClU (the August 5th presentation substitutes for the September 24th event described in the video). The draft PowerPoint is at https://www.slideshare.net/ksolez/kim-solez-how-ai-can-improve-human-cooperation-through-suggesting-followup-action-for-modelled-newscasts3pptx . Recently, concern has been expressed about the long-run safety of machine learning and artificial intelligence, notably in Eliezer Yudkowsky's widely quoted "AGI Ruin: A List of Lethalities" (https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities). The first of the 40+ bolded sections of the piece is the most significant because it is equally true of an AI utopia: "AGI will not be upper-bounded by human ability or human learning speed. Things much smarter than humans would be able to learn from less evidence than humans require." Indeed, AI has already taught humans more beautiful moves in the game of Go than we would ever have discovered ourselves. By the same token, AI can teach us ways of cooperating with each other that are superior to humans' own innate ability to cooperate successfully. Therefore, increasing use of machine learning is not necessarily the beginning of a slippery slope toward humanity's demise; it could be exactly the opposite, a transition toward a world better than anything we ever imagined. We can determine which of these two contrasting futures happens, and AI can assist with that! An AI may also be more foresighted and have a longer temporal horizon, both of which promote cooperation.


Presenter Bio:
Kim Solez, MD is Professor of Pathology in the Faculty of Medicine and Dentistry at the University of Alberta, and Chair of the Regenerative Medicine Community of Practice in the American Society of Transplantation. He is an American pathologist and co-founder of the Banff Classification, the first standardized international classification for renal allograft biopsies. He is also the founder of the Banff Foundation for Allograft Pathology.

Kim Solez obtained his M.D. with AOA honours from the University of Rochester School of Medicine and Dentistry, and trained in pathology at Johns Hopkins Medical Institutions in Baltimore, Maryland, where he was mentored in renal pathology by Robert Heptinstall. He joined the faculty at Johns Hopkins and in 1987 became chairman of the Department of Pathology at the University of Alberta in Edmonton, Canada. In 1991, he established the Banff Classification, the first standardized, international classification for renal allograft biopsies, with Johns Hopkins pathologist Lorraine Racusen. The Banff Classification, updated at regular intervals, continues to "set standards worldwide for how biopsies from kidney and other solid organ transplants are interpreted". He is the author of over 230 journal articles. He was awarded:

·       National Kidney Foundation International Distinguished Medal, 2009

·       University of Alberta Faculty of Medicine and Dentistry Tier 1 Clinical Mentoring Award, 2016

·       Catalan Society for Transplantation Gold Medal, 2017

·       American Society of Transplantation Fellowship (FAST designation), 2020

In 2011, Kim Solez pioneered a unique graduate-level medical course with strong AI content at the University of Alberta, LABMP 590: Technology and the Future of Medicine. Computing Science professors Rich Sutton, Osmar Zaiane, Patrick Pilarski, and Russ Greiner teach in the course. Doctor Kim's The Future and All That Jazz, a musical and poetic spinoff of the LABMP 590 course with singer Mallory Chipman, had its first product release on May 15, 2022: Heart Drive – My Ex Was Made of Flesh (https://www.youtube.com/watch?v=QturFFBynhA). Rich Sutton attended a 2016 Future and All That Jazz event on the same day as the great AlphaGo success! Dr. Solez and Dr. Sutton also attended the Strathearn Fall Festival together in 2015, where Kim Solez performed AI-related poetry (https://www.youtube.com/watch?v=jTVO74FW7dc).

July 29, 2022:
How to Avoid Fooling Ourselves in Deep RL research
Rishabh Agarwal, Google

https://www.youtube.com/watch?v=Yfy0zW7H8Uw&list=PLKlhhkvvU8-bFpoIPFTCVsYmqFVpt7OTz&index=16

Abstract:
I’ll talk about work published at NeurIPS’21, which received an outstanding paper award, where we find that statistical issues have a large influence on reported results on widely-used RL benchmarks. To help researchers do good and reliable science, I’ll discuss how to reliably evaluate and report performance on reinforcement learning (and ML) benchmarks, especially when using only a handful of runs. See agarwl.github.io/rliable for details. 
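For context, the recommendations in that paper (implemented in the authors' `rliable` library) center on robust aggregate metrics such as the interquartile mean (IQM) with stratified bootstrap confidence intervals. A minimal numpy sketch of the idea — not the library's actual API:

```python
import numpy as np

def iqm(scores):
    # Interquartile mean: the mean of the middle 50% of scores, a more
    # robust aggregate than the plain mean (outlier-sensitive) or the
    # median (discards most of the data).
    s = np.sort(scores, axis=None)
    n = len(s)
    return s[n // 4 : n - n // 4].mean()

def stratified_bootstrap_ci(score_matrix, n_boot=2000, alpha=0.05, seed=0):
    # score_matrix: (n_runs, n_tasks). Resample runs with replacement
    # independently per task, recompute the IQM each time, and report a
    # percentile confidence interval -- usable even with a handful of runs.
    rng = np.random.default_rng(seed)
    n_runs, n_tasks = score_matrix.shape
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(n_runs, size=(n_runs, n_tasks))
        resampled = np.take_along_axis(score_matrix, idx, axis=0)
        stats.append(iqm(resampled))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return iqm(score_matrix), (lo, hi)
```

Reporting the interval alongside the point estimate makes it visible when two algorithms' scores are statistically indistinguishable given the number of runs.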

Presenter Bio:
Rishabh Agarwal is a research scientist on the Google Brain team in Montréal. Previously, he was an AI Resident on Geoff Hinton's team at Google Toronto. His research interests mainly revolve around deep reinforcement learning (RL), often with the goal of making RL methods suitable for real-world problems.

July 22, 2022:
Lexical Semantics: Why Theory Matters
Bradley Hauer, Ph.D. Candidate, University of Alberta

https://www.youtube.com/watch?v=oNXkseddpuQ&list=PLKlhhkvvU8-bFpoIPFTCVsYmqFVpt7OTz&index=15

Abstract:
Computational lexical semantics refers to the application of knowledge about word meaning to natural language processing. In this talk, we demonstrate that tasks and resources in lexical semantics benefit from theoretical analysis. First, we argue that three tasks commonly used to evaluate semantically-aware methods and models are equivalent, enabling data and methods developed for one task to be applied to the others. Second, we explore the linguistic phenomena underlying multilingual semantic resources, and posit that lexicalized concepts are universal, and thus can be annotated cross-linguistically in parallel corpora. Third, we demonstrate the utility of this theoretical analysis by showing that it can be applied to automatically generate multilingual data for word sense disambiguation, a key semantic task. Taken together, our work makes the case that theoretical models that humans can understand, interpret, and test are vital to semantics research.

Presenter Bio:
Bradley Hauer is a PhD candidate at the University of Alberta. He has published more than 20 papers in refereed venues, earning the NAACL 2013 Best Student Paper Award, and a nomination for the IEEE ICSC 2019 Best Paper Award. His work covers a variety of subjects in natural language processing, including multilingual semantics, classical decipherment, and cognate identification. His present research is motivated by a strong interest in theoretical models and explainable methods.

July 15, 2022:
Sparse Training in Supervised, Unsupervised, and Deep Reinforcement Learning
Decebal Constantin Mocanu, University of Twente
Elena Mocanu, University of Twente

https://www.youtube.com/watch?v=chyt-7P8oLw&list=PLKlhhkvvU8-bFpoIPFTCVsYmqFVpt7OTz&index=14

Abstract:
Part I: Sparse training in supervised and unsupervised deep learning (Decebal). The talk starts with a quick overview of my research line. It then introduces one of the many challenges along this line that prevent us from having truly scalable artificial neural networks at both levels, cloud and edge computing: dense connectivity. Next, an emerging state-of-the-art solution is presented: sparse-to-sparse training with static and dynamic sparsity. The discussion starts from the first works on complex Boltzmann machines [Mocanu et al., Machine Learning 2016] and sparse evolutionary training [Mocanu et al., Nature Communications 2018] in typical single-task (un)supervised learning, and gradually introduces newer approaches in the more challenging context of continual learning. Besides the fundamental theoretical novelty, some practical aspects, such as truly sparse implementations and deep learning energy efficiency, are briefly considered.

 

Part II: Sparse training in deep reinforcement learning (Elena). A fundamental task for artificial intelligence is learning. Up to now, everything was discussed in the context of supervised and unsupervised learning. Inspired by human learning, the reinforcement learning paradigm has high potential for autonomous agents, although it suffers from scalability issues [Mocanu et al., AAMAS 2021]. Further on, we introduce dynamic sparse training in deep reinforcement learning [Sokar et al., IJCAI 2022] and pave the ground for scalable deep reinforcement learning. We describe some very recent progress in the field that could be used to foster the generalization performance of sparsely trained RL agents over their densely trained counterparts while considerably reducing their computational and memory requirements in both training and inference.
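The sparse evolutionary training line of work mentioned above alternates standard weight updates with a topology update that prunes the weakest connections and regrows new ones at random. A minimal numpy sketch of that prune-and-regrow step (the layer shape, initialization scale, and fraction ζ below are illustrative, not values from the papers):

```python
import numpy as np

rng = np.random.default_rng(0)

def prune_and_regrow(weights, mask, zeta=0.3):
    """One SET-style topology update: drop the fraction `zeta` of active
    connections with the smallest magnitude, then regrow the same number
    of connections at random currently-inactive positions, keeping the
    total number of parameters constant."""
    active = np.flatnonzero(mask)
    n_drop = int(zeta * len(active))
    # Drop the weakest active connections.
    drop = active[np.argsort(np.abs(weights.ravel()[active]))[:n_drop]]
    mask.ravel()[drop] = False
    weights.ravel()[drop] = 0.0
    # Regrow at random inactive positions with small fresh weights.
    inactive = np.flatnonzero(~mask.ravel())
    grow = rng.choice(inactive, size=n_drop, replace=False)
    mask.ravel()[grow] = True
    weights.ravel()[grow] = rng.normal(scale=0.01, size=n_drop)
    return weights, mask
```

Because the connection count never changes, the network is sparse throughout training ("sparse-to-sparse"), rather than being densely trained and pruned afterwards.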

Presenter Bio:
Decebal Constantin Mocanu is an Assistant Professor in Artificial Intelligence and Machine Learning within the DMB group, EEMCS faculty at the University of Twente, and a Guest Assistant Professor within the DM group, M&CS department at TU Eindhoven. Currently, he is on a research visit at the University of Alberta. He is an alumnus of the TU Eindhoven Young Academy of Engineering. In 2017, Decebal received his PhD degree from TU Eindhoven. During his doctoral studies, he undertook three research visits: at the University of Pennsylvania (2014), Julius Maximilians University of Würzburg (2015), and the University of Texas at Austin (2016). In the long term, Decebal is interested in studying the synergy between artificial intelligence, neuroscience, and network science for the benefit of science and society.

 

Elena Mocanu is an Assistant Professor within the Department of Computer Science at the University of Twente, the Netherlands. Currently, she is visiting the group of Matthew Taylor at the University of Alberta. She received her PhD in machine learning from Eindhoven University of Technology, in 2017. During her PhD, she visited the University of Texas at Austin, where she worked with Michael Webber and Peter Stone on machine learning, decision making, and autonomous systems through the means of sparse neural networks. As a mathematician with a big passion for neural networks, her current research is focused on understanding neural networks and how their learning capabilities can be improved.

July 8, 2022:
Developing a Mental Health Virtual Assistant (Chatbot) for Healthcare Workers
Ali Zamani, University of Alberta

https://www.youtube.com/watch?v=g3Ojdaq7Y3o&list=PLKlhhkvvU8-bFpoIPFTCVsYmqFVpt7OTz&index=12

Abstract:
Conversational agents (CAs) such as Amazon Alexa, Google Assistant, and Apple Siri have become popular in recent years. Conversational agents are computer systems designed to have natural conversations with human users, either for casual chatting or to provide the user with relevant information for a specific task. In this project, in partnership with the Mood Disorders Society of Canada (MDSC), the Asia-Pacific Economic Cooperation (APEC) Digital Hub for Mental Health, the University of Alberta, the University of Saskatchewan, Dalhousie University, and the University of Alberta's AI4Society Signature Area, we developed a mental health conversational agent (MIRA) aiming to assist healthcare workers and their families in finding relevant mental health information and local services in the Canadian provinces of Alberta and Nova Scotia. You can find the chatbot via mymira.ca.

Presenter Bio:
I am an M.Sc. student in computing science at the University of Alberta under the supervision of Osmar R. Zaiane. I am also an ML engineer, data scientist, and NLP researcher, and a member of Amii (Alberta Machine Intelligence Institute), with 3+ years of research and industrial experience in linguistics, sentiment analysis, intent detection, entity extraction, and text classification. Through my M.Sc. thesis, I developed and implemented a mental health chatbot (mymira.ca) using ML and NLP techniques to support healthcare workers affected by mental health issues, collaborating closely with healthcare workers and computer science students. I also have advanced knowledge of software engineering, with 3+ years of experience developing websites.

June 24, 2022:
Bringing Alberta’s Tech Companies to You – Technology Alberta’s FIRST Jobs Program
Gail Powley, Technology Alberta

*co-hosted with Technology Alberta*

Abstract:
Technology Alberta works with leaders across industry, government, and academia to grow Alberta's tech sector, which is very actively hiring. Drawing on the insights of AI/ML graduate students, professors, and entrepreneurial tech companies, the Technology Alberta FIRST (First Industry Research Science Technology) Jobs program was created in 2020 to help university students and new graduates launch their professional careers by providing entry-level part-time jobs with industry. Over 200 companies have participated, providing work experience in areas such as product development, AI/ML, data science, data analytics, software development, and more. Student participants have reported outstanding results, such as expanding their networks to include 50 local tech company presidents, subsequent part-time and full-time job offers, and interesting work experience at companies developing software products that address cyber-bullying, crisis detection, clean-tech, gaming, and more. Sponsored by the Government of Alberta (Advanced Education), the Technology Alberta FIRST Jobs program will be open to all Alberta post-secondary students in July 2022 – so be ready to apply, learn more about the over 3,000 entrepreneurial tech companies in the province, and be part of Alberta's tech community!

Presenter Bio:
Gail has over 30 years of experience working for companies such as Procter & Gamble, Matrikon (now Honeywell), and Willowglen Systems – as a software engineer, product manager, tech company executive, and leader of technology associations. Gail is an award-winning community builder, recognized for supporting diversity in the workforce, inclusive workplace policies, and volunteer efforts in the local tech sector. In her role with Technology Alberta, she has worked with a large team of volunteers and community builders to create programs that open opportunities for Alberta students in local companies, as well as tech leadership programs that help grow the tech sector overall.

June 17, 2022:
Foundations of Hindsight Rational Learning for Sequential Decision-Making
Dustin Morrill, Amii

https://www.youtube.com/watch?v=hzURHdMxf-Y&list=PLKlhhkvvU8-bFpoIPFTCVsYmqFVpt7OTz&index=11

Abstract:
Dustin's thesis develops foundations for dependable, scalable reinforcement learning algorithms with strong connections to game theory. A key contribution is a rationality objective for reinforcement learning that is grounded in the learner's experience and connected with the rationality concepts of optimality and equilibrium. This notion of "hindsight rationality" is based on regret, a well-known concept for evaluating a sequence of decisions against unilateral deviations, and it demands resiliency to uncertainty, environmental changes, and adversarial pressures. In this talk, he will describe how particular natural sets of deviations can be constructed specifically for sequential decision-making settings to overcome computational challenges. Dustin shows how the strategic strength of special low-complexity deviation sets can be elevated with observable sequential rationality. He then presents a unifying algorithm, extensive-form regret minimization (EFR), which achieves observable sequential hindsight rationality for a broad and natural class of deviations. EFR often performs better in practice when it uses stronger deviation types, and it inherits the extensibility of the counterfactual regret minimization (CFR) algorithm. This talk outlines how his thesis provides the conceptual, theoretical, and algorithmic bases for practical research directions toward the advancement of both single- and multi-agent reinforcement learning.
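The regret-based machinery underlying CFR and EFR builds on simple regret-minimizing learners. As background, here is a minimal sketch of regret matching against a fixed payoff vector — a standard textbook construction, not code from the thesis:

```python
import numpy as np

def regret_matching(cum_regret):
    # Regret matching: play each action with probability proportional to
    # its positive cumulative regret; fall back to uniform when no action
    # has positive regret.
    pos = np.maximum(cum_regret, 0.0)
    total = pos.sum()
    if total > 0:
        return pos / total
    return np.full(len(cum_regret), 1.0 / len(cum_regret))

def average_strategy(payoffs, rounds=2000):
    # Repeatedly play against a fixed payoff vector; the time-averaged
    # strategy concentrates on the best action, illustrating how regret
    # minimization recovers (hindsight) rational play.
    payoffs = np.asarray(payoffs, dtype=float)
    cum_regret = np.zeros_like(payoffs)
    avg = np.zeros_like(payoffs)
    for _ in range(rounds):
        strategy = regret_matching(cum_regret)
        avg += strategy
        expected = strategy @ payoffs
        cum_regret += payoffs - expected  # regret of each pure deviation
    return avg / rounds

avg = average_strategy([1.0, 0.0, 0.5])
```

CFR-style algorithms run a learner like this at every information set of a game tree, with counterfactual values standing in for the payoff vector; EFR generalizes the deviation sets against which regret is measured.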

Presenter Bio:
Dustin is a research scientist at Sony AI and a Ph.D. candidate at the University of Alberta and the Alberta Machine Intelligence Institute (Amii), co-supervised by Professor Michael Bowling and Professor Amy Greenwald of Brown University. He works on multi-agent reinforcement learning and scalable, dependable learning algorithms. He is currently working on the GT Sophy project at Sony AI, extending and improving algorithms that learn to outpace expert e-sports racers in a realistic racing simulator. He is also a coauthor of DeepStack, an expert-level player of heads-up, no-limit, Texas hold'em poker, and he created a public match interface for Cepheus, a near solution to heads-up, limit, Texas hold'em. He completed a B.Sc. and M.Sc. in computing science at the University of Alberta, also supervised by Michael Bowling. As an undergraduate, he worked with the Computer Poker Research Group at the University of Alberta to create an open-source web interface to play against poker bots and to develop the 1st-place 3-player Kuhn poker entry in the 2014 Annual Computer Poker Competition.

June 10, 2022:
Learning Models that Predict Objective, Actionable Labels
Russ Greiner, University of Alberta

https://www.youtube.com/watch?v=2sseUO_TyAw&list=PLKlhhkvvU8-bFpoIPFTCVsYmqFVpt7OTz&index=10

Abstract:
Many medical researchers want a tool that “does what a top medical clinician does, but does it better”. This presentation explores this goal. This requires first defining what “better” means, leading to the idea of outcomes that are “objective” and then to ones that are actionable, with a meaningful evaluation measure. We will discuss some of the subtle issues in this exploration – what does “objective” mean, the role of the (perhaps personalized) evaluation function, multi-step actions, counterfactual issues, distributional evaluations, etc.  Collectively, this analysis argues we should learn models whose outcome labels are objective and actionable, as that will lead to tools that are useful and cost-effective.

Presenter Bio:
Russ Greiner worked in both academic and industrial research before settling at the University of Alberta, where he is now a Professor in Computing Science and the founding Scientific Director of the Alberta Machine Intelligence Institute. He has been Program/Conference Chair for various major conferences, and has served on the editorial boards of a number of other journals. He was elected a Fellow of the AAAI, has been awarded a McCalla Professorship and a Killam Annual Professorship; and in 2021, received the CAIAC Lifetime Achievement Award and became a CIFAR AI Chair. In 2022, the Telus World of Science museum honored him with a panel, and he received the (UofA) Precision Health Innovator Award.  For his mentoring, he received a 2020 FGSR Great Supervisor Award.  He has published over 300 refereed papers, most in the areas of machine learning and recently medical informatics, including 5 that have been awarded Best Paper prizes. The main foci of his current work are (1) bio- and medical- informatics; (2) learning and using effective probabilistic models and (3) formal foundations of learnability.

June 3, 2022:
Learning to Accelerate by the Methods of Step-size Planning
Hengshuai Yao, University of Alberta

https://www.youtube.com/watch?v=KbgZ5_R1Sb0&list=PLKlhhkvvU8-bFpoIPFTCVsYmqFVpt7OTz&index=9

Abstract:
Gradient descent is slow to converge for ill-conditioned and non-convex problems. An important technique for acceleration is step-size adaptation. In this talk, I'll review decades of efforts to attack this problem. Algorithms covered include the Polyak step-size and IDBD from classical optimization and the first wave of neural networks, and Adam, hypergradient descent, L4, and LossGrad from recent deep learning. We will also review the connection between step-size adaptation and meta-learning. We will discuss the projection power of a diagonal-matrix step-size, and show that using negative step-sizes can lead to faster convergence even for deterministic gradient descent. In the end, we will discuss the possibility of applying the idea of Dyna-style planning to step-size adaptation, for which a new algorithm solves the famous Rosenbrock function in under 500 gradient evaluations with zero error, while gradient descent needs 10,000+ evaluations to reach an accuracy of 10^{-3}.
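As a concrete example of step-size adaptation, here is a minimal sketch of hypergradient descent on the Rosenbrock function. The hyperparameters below are illustrative, and this is not the Dyna-style planning algorithm from the talk:

```python
import numpy as np

def rosenbrock(p):
    # The classic ill-conditioned test function, minimized at (1, 1).
    x, y = p
    return (1.0 - x) ** 2 + 100.0 * (y - x ** 2) ** 2

def rosenbrock_grad(p):
    x, y = p
    return np.array([
        -2.0 * (1.0 - x) - 400.0 * x * (y - x ** 2),
        200.0 * (y - x ** 2),
    ])

def hypergradient_descent(p0, alpha0=1e-4, beta=1e-10, steps=5000):
    # Hypergradient descent: nudge the step size alpha in the direction
    # that would have reduced the previous step's loss, estimated by the
    # dot product of consecutive gradients (positive when successive
    # steps agree, so alpha grows; negative when they oscillate).
    p = np.array(p0, dtype=float)
    alpha = alpha0
    g_prev = np.zeros_like(p)
    for _ in range(steps):
        g = rosenbrock_grad(p)
        alpha += beta * (g @ g_prev)  # online step-size adaptation
        p -= alpha * g
        g_prev = g
    return p

p_final = hypergradient_descent([-1.2, 1.0])
```

The single scalar beta here controls how aggressively the step size adapts; the diagonal-matrix variants discussed in the talk adapt one step size per parameter instead.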

Presenter Bio:
Hengshuai Yao's research interest is model-based reinforcement learning, with a recent focus on gradient descent and especially step-size adaptation. His work establishes a unique connection between these two seemingly unrelated topics.

May 27, 2022:
AI4Society: Exploring Challenges and Opportunities for AI Across Disciplines  
Eleni Stroulia,  University of Alberta

https://www.youtube.com/watch?v=jg8s-i7aaio&list=PLKlhhkvvU8-bFpoIPFTCVsYmqFVpt7OTz&index=8

Abstract:
In December 2019, the UofA launched AI4Society as one of its five Signature Areas, conceived to take a holistic approach to the study of AI: designing new data science, machine learning, and artificial intelligence algorithms, and developing appropriate computational platforms to implement them in real-world use cases, with awareness of the ethical concerns around data collection and accountable, fair analysis. Since then, AI4Society has seeded interdisciplinary collaborations in a number of areas, such as manufacturing and construction, health, energy and clean technologies, business and finance, and education. In this presentation, I will talk about some of these initiatives and share some ways in which the community at large can be involved.

Presenter Bio:
Dr. Eleni Stroulia is a Professor in the Department of Computing Science at the University of Alberta. From 2011 to 2016, she held the NSERC/AITF Industrial Research Chair on Service Systems Management, with IBM. Her research focuses on addressing industry-driven problems, adopting AI and machine-learning methods to improve or automate tasks. Her flagship project in the area of health care is the Smart Condo, in which she investigates the use of technology to support people with chronic conditions living independently longer and to educate health-science students to provide better care for these clients. In 2011, the Smart-Condo team received the UofA Teaching Unit Award. She has played leadership roles in the GRAND and AGE-WELL Networks of Centres of Excellence. In 2018 she received a McCalla Professorship, and in 2019 she was recognized with a Killam Award for Excellence in Mentoring. She has supervised more than 60 graduate students and PDFs, who have gone forward to stellar academic and industrial careers. Since 2020, she has been the Director of the University of Alberta's AI4Society Signature Area. Since 2021, she has been serving as the Acting Vice Dean of the Faculty of Science.

May 20, 2022:
Efficient Lifelong Machine Learning in Deep Neural Networks
Tyler Hayes, Rochester Institute of Technology

https://www.youtube.com/watch?v=zUwZ1vMn7Qw&list=PLKlhhkvvU8-bFpoIPFTCVsYmqFVpt7OTz&index=7

Abstract:
Humans continually learn and adapt to new knowledge and environments throughout their lifetimes. Rarely does learning new information cause humans to catastrophically forget previous knowledge. While deep neural networks (DNNs) now rival human performance on several supervised machine perception tasks, when updated on changing data distributions, they catastrophically forget previous knowledge. Enabling DNNs to learn new information over time opens the door for new applications such as self-driving cars that adapt to seasonal changes or smartphones that adapt to changing user preferences. In this talk, we propose new methods and experimental paradigms for efficiently training continual DNNs without forgetting. We then apply these methods to several visual perception tasks.
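One widely used baseline for training without forgetting is rehearsal: keep a small fixed-size memory of past examples and mix them into every new training batch. A minimal sketch using reservoir sampling (a generic baseline for illustration, not a method proposed in the talk):

```python
import random

class ReservoirBuffer:
    """Fixed-size rehearsal memory for continual learning. Reservoir
    sampling keeps a uniform random sample over the entire stream seen
    so far, so earlier tasks stay represented even after the data
    distribution shifts."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.items = []
        self.n_seen = 0
        self.rng = random.Random(seed)

    def add(self, item):
        # Each stream item ends up in the buffer with probability
        # capacity / n_seen, which is exactly a uniform sample.
        self.n_seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)
        else:
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.items[j] = item

    def sample(self, k):
        # Draw replayed old examples to mix into a new training batch.
        return self.rng.sample(self.items, min(k, len(self.items)))
```

During training one would interleave `buffer.sample(k)` examples with each incoming batch; the efficiency question the talk addresses is how to get this kind of forgetting resistance with far less memory and compute.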

Presenter Bio:
Tyler Hayes recently defended her PhD in the Chester F. Carlson Center for Imaging Science at the Rochester Institute of Technology (RIT) in Rochester, NY. During her PhD, she worked with Dr. Christopher Kanan to advance online continual learning. Her current research interests include lifelong machine learning, computer vision, and computational mathematics. Previously, she earned a BS in Applied Mathematics from RIT in 2014 and an MS in Applied and Computational Mathematics from RIT in 2017. She has published in a wide range of venues in AI, including CVPR, ICRA, AAAI, ECCV, BMVC, and more. Her website can be found here: https://tyler-hayes.github.io/

May 13, 2022:
Machine Learning to Predict Survival and Treatment Outcomes for Cancer Patients
Ruchika Verma,  Alberta Machine Intelligence Institute

https://www.youtube.com/watch?v=mjjlf6vSFEQ&list=PLKlhhkvvU8-bFpoIPFTCVsYmqFVpt7OTz&index=5

Abstract:
Manual assessment of medical images is challenging due to high intra- and inter-observer variability. With improvements in computer vision techniques and hardware, it is now possible to quantitatively assess subtle visual features in histopathology and diagnostic images that are usually difficult to evaluate manually. Tumor and nuclei segmentation is one of the key modules in histopathological image analysis that could facilitate downstream analysis of tissue samples, for assessing not only cancer grades or stages but also for predicting tumor recurrence and treatment effectiveness, and for quantifying intra-tumor heterogeneity. Identifying different types of nuclei, such as epithelial cells, neutrophils, lymphocytes, and macrophages, could yield information about the host immune response that could advance our understanding of the mechanisms governing treatment resistance and adaptive immunity in cancers of various organs. This talk will give an overview of state-of-the-art machine learning algorithms for nuclei segmentation and classification from H&E stained tissue images, while providing insights into the process of creating one of the largest nuclei segmentation datasets and organizing two international competitions on this theme. I will also discuss a few projects on applying machine learning algorithms for (1) identification of the co-existence of multiple molecular subtypes of breast cancer in a patient, (2) radiomics-based treatment outcome prediction in glioblastoma patients, and (3) personalized survival prediction from pan-cancer whole transcriptome data.

Presenter Bio:
Ruchika is an associate machine learning scientist at the Alberta Machine Intelligence Institute (Amii). Her research interests include machine learning and computer vision for digital healthcare and personalized medicine. She has applied machine learning algorithms to address various challenges in evidence-based personalized oncology, such as tumor detection, generalized nuclei segmentation, and survival and treatment outcome prediction for cancer patients. She recently completed her PhD in Biomedical Engineering at Case Western Reserve University, Cleveland, Ohio, USA, and received the Doctoral Excellence Award for her PhD thesis. She co-organized two international competitions on computational pathology focused on nuclei segmentation and classification. She also did academic internships at UCSF and UAlberta. Previously, she completed her master's and undergraduate studies in Electronics and Communication Engineering with a specialization in digital signal processing and machine learning. She served as an instructor for 2 years at NIT Meghalaya, an institute of national importance funded by the Government of India, and was a recipient of several prestigious awards, including a Commonwealth Scholarship (2017) from the UK Department for International Development (DFID).

May 6, 2022:
Engineering and Modeling at Trust Science
Mat Lavoie & Lenora Thomas, Trust Science

https://www.youtube.com/watch?v=koBeei2T350&list=PLKlhhkvvU8-bFpoIPFTCVsYmqFVpt7OTz&index=4

Abstract:
Trust Science® is a FinTech SaaS company committed to using cutting-edge AI and ML technology to advance alternative credit underwriting for a fairer, more inclusive financial future. FinTech for Good. We are a talented team of self-starters who thrive in a dynamic environment and are passionate about making the world more financially inclusive, one credit score at a time.

The presentation will discuss:

• Types of Modeling:
  • Types of Models
  • Example

• Engineering at Trust Science:
  • Engineering Teams
  • A Day in Engineering
  • AI Job Types

Presenter Bios:
Mat Lavoie - VP Data Engineering - Mat is a University of Alberta Computer Science graduate who has worked at a number of Edmonton-based companies, growing from coding into executive roles, including Founder and CTO. On the AI/ML side, Mat has worked on predicting sporting event outcomes and on K-12 learning systems. As VP of Data Engineering at Trust Science, Mat is responsible for modeling, data science, initial analysis, and data sourcing.

Lenora Thomas - VP of Customer Operations - Lenora is an accomplished professional with over 19 years of experience in executive positions in high technology and software, with strategic leadership roles in customer success, support, and operations. As VP of Customer Operations at Trust Science, Lenora thrives on the challenges that come with planning, organizing, implementing, and supporting customers as they embrace advanced solutions that fundamentally improve their businesses' workflow and profitability.

April 29, 2022:
Data-Driven Emergence of Convolutional Structure in Neural Networks
Alessandro Ingrosso, Abdus Salam International Centre for Theoretical Physics

https://www.youtube.com/watch?v=UBzDVy8a0LM&list=PLKlhhkvvU8-bFpoIPFTCVsYmqFVpt7OTz&index=6

Abstract:
Exploiting invariances in the inputs is crucial for constructing efficient representations and accurate predictions in neural circuits. In neuroscience, translation invariance is at the heart of models of the visual system, while convolutional neural networks designed to exploit translation invariance triggered the first wave of deep learning successes. While the hallmark of convolutions, namely localised receptive fields that tile the input space, can be implemented with fully-connected neural networks, learning convolutions directly from inputs in a fully-connected network has so far proven elusive. In this talk, I will show how initially fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs, resulting in localised, space-tiling receptive fields. Both translation invariance and non-trivial higher-order statistics are needed to learn convolutions from scratch. I will provide an analytical and numerical characterisation of the pattern-formation mechanism responsible for this phenomenon in a simple model, which results in an unexpected link between receptive field formation and the tensor decomposition of higher-order input correlations.

Presenter Bio:
Alessandro Ingrosso is a Senior Postdoctoral Fellow at The Abdus Salam International Centre for Theoretical Physics in Trieste, Italy. His current research focuses on computational neuroscience and machine learning, using methods from statistical physics of disordered systems. Alessandro studied cognitive psychology and physics. During his PhD, under the supervision of Riccardo Zecchina, he co-developed an analytical and computational framework for the study of generalization in neural networks based on local-entropy, an effective measure of flatness of the loss landscape. He moved to the U.S.A. for a postdoc at the Center for Theoretical Neuroscience, Columbia University, where he worked with Larry Abbott on computational and theoretical aspects of dynamics and learning in biologically plausible recurrent neural networks.

April 22, 2022:
Efficient and Robust Methods for Computing Trust in Multi-Agent Systems
Elham Parhizkar, University of Alberta

https://www.youtube.com/watch?v=rvc7k208ssw&list=PLKlhhkvvU8-bFpoIPFTCVsYmqFVpt7OTz&index=2

Abstract:
Trust and reputation systems constitute an active branch of research in multi-agent systems. In various application domains, agents interact with one another to collect information, goods, or services that help them complete a given task. For such interactions to be largely successful, agents try to estimate how trustworthy other individual agents are. To evaluate the trustworthiness of an agent in a multi-agent system, one often combines two types of trust information: direct trust information derived from one's own interactions with that agent, and indirect trust information based on advice from other agents.

In this work, we first focus on the trust established through indirect trust information. This is a non-trivial problem, since advice may not always be reliable; it may come from a deceptive agent whose goal is to mislead the truster. We propose a new and easy-to-implement method for computing indirect trust, based on a simple prediction from expert advice strategy as is often used in online learning. This method either competes with or outperforms all tested systems in most of the simulated settings while scaling substantially better. We also provide the first systematic study on when it is beneficial to combine the two types of trust as opposed to relying on only one of them. Our large-scale experimental study shows that strong methods for computing indirect trust make direct trust redundant in a surprisingly wide variety of scenarios. Further, a new method for the combination of the two trust types is proposed that, in the remaining scenarios, outperforms the ones known from the literature. Moreover, we propose a method based on the Page-Hinkley statistics to handle the dynamic behaviour of agents in a multi-agent system.
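The Page-Hinkley statistic mentioned above is a classical sequential change-detection test. A generic sketch of how it flags a shift in the mean of a stream of observations (parameter names and values here are illustrative, not taken from the talk) might look like:

```python
class PageHinkley:
    """Detects an upward drift in a stream's mean (generic sketch)."""
    def __init__(self, delta=0.005, threshold=5.0):
        self.delta = delta          # tolerated magnitude of change
        self.threshold = threshold  # alarm threshold (lambda)
        self.n = 0
        self.mean = 0.0
        self.cum = 0.0              # cumulative deviation m_t
        self.cum_min = 0.0          # running minimum M_t

    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n
        self.cum += x - self.mean - self.delta
        self.cum_min = min(self.cum_min, self.cum)
        return self.cum - self.cum_min > self.threshold  # True => change

# An agent's observed behaviour shifts halfway through the stream
detector = PageHinkley()
stream = [0.5] * 50 + [0.9] * 50
alarms = [i for i, x in enumerate(stream) if detector.update(x)]
```

In the trust setting, the stream would be the observed reliability of another agent's advice, and an alarm would trigger a re-evaluation of that agent's trust score.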

Presenter Bio: Elham Parhizkar is a postdoctoral fellow at the University of Alberta, where she works under the supervision of Dr. Levi Lelis. She is currently focused on developing algorithms for synthesizing effective and interpretable strategies for solving problems and playing games.

She completed her Ph.D. at the University of Regina under the supervision of Dr. Sandra Zilles in 2021. Her Ph.D. research focused on establishing trust between agents in multi-agent systems.

April 15, 2022:
No Seminar

April 8, 2022:
Simple Agent, Complex Environment: Efficient Reinforcement Learning with Agent States
Shi Dong, Stanford University

https://www.youtube.com/watch?v=sn-sJppDlw4&list=PLKlhhkvvU8-bFpoIPFTCVsYmqFVpt7OTz&index=1

Abstract:
We design a simple reinforcement learning (RL) agent that implements an optimistic version of Q-learning and establish through regret analysis that this agent can operate with some level of competence in any environment. While we leverage concepts from the literature on provably efficient RL, we consider a general agent-environment interface and provide a novel agent design and analysis. This level of generality positions our results to inform the design of future agents for operation in complex real environments. We establish that, as time progresses, our agent performs competitively relative to policies that require longer times to evaluate. The time it takes to approach asymptotic performance is polynomial in the complexity of the agent's state representation and the time required to evaluate the best policy that the agent can represent. Notably, there is no dependence on the complexity of the environment. The ultimate per-period performance loss of the agent is bounded by a constant multiple of a measure of distortion introduced by the agent's state representation. This work is the first to establish that an algorithm approaches this asymptotic condition within a tractable time frame.
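To illustrate the optimism principle behind such agents, here is a generic tabular sketch of Q-learning with optimistic initial values; the two-state environment and all hyperparameters are invented for illustration and this is not the agent or the analysis from the talk:

```python
from collections import defaultdict

def optimistic_q_learning(env_step, n_actions, episodes=200,
                          horizon=50, alpha=0.1, gamma=0.9, r_max=1.0):
    """Tabular Q-learning with optimistic initial values (generic sketch).
    Every state-action value starts at the best achievable return, so even
    purely greedy action selection keeps trying untested actions."""
    q = defaultdict(lambda: r_max / (1 - gamma))   # optimistic initialization
    for _ in range(episodes):
        s = 0
        for _ in range(horizon):
            a = max(range(n_actions), key=lambda b: q[(s, b)])  # pure greedy
            s2, r = env_step(s, a)
            target = r + gamma * max(q[(s2, b)] for b in range(n_actions))
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
    return q

# Toy two-state environment: action 1 in state 0 pays off, all else does not.
def env_step(s, a):
    if s == 0 and a == 1:
        return 1, 1.0
    return 0, 0.0

q = optimistic_q_learning(env_step, n_actions=2)
```

Because every value starts at `r_max / (1 - gamma)`, an untried action always looks at least as good as a tried one, which drives systematic exploration without an explicit exploration schedule.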
Presenter Bio:

Shi Dong is currently a Ph.D. candidate in the Department of Electrical Engineering at Stanford University, where he is advised by Prof. Benjamin Van Roy.  He is interested in using theoretical tools to understand the essential elements in practical reinforcement learning (RL) agent design, and to help bring the benefits of RL to real life.  He received his bachelor’s degree from Tsinghua University, and master’s degree from the Department of Statistics at Stanford University.  His industrial experiences include assisting ByteDance in improving their news and video recommendation system, and research internships at Google, DeepMind, and Microsoft.  One of his works was awarded winner in the 2021 INFORMS George Nicholson Student Paper Competition.

April 1, 2022:
Conversational AI at Intuit Edmonton
Greg Coulombe & Horace Chan, Intuit Edmonton

*co-hosted with Technology Alberta*

https://www.youtube.com/watch?v=-Yv1uaFoua8&list=PLKlhhkvvU8-bFpoIPFTCVsYmqFVpt7OTz&index=3

Abstract: Intuit has invested significantly in our Conversational Experiences platform, giving us the ability to build intelligent conversational agents that can help customers answer their most pressing questions while using our products. This talk will focus on an overview of our conversational platform and three specific AI-enhanced experiences that help our customers get a more personalized, relevant experience while using our products. We will talk about platform architecture and the models used to provide personalized content, automated question answering for long-tail questions, and automated identification of customers based on limited data signals.


Presenter Bio: Greg Coulombe is the Director of Development for the Intuit Futures organization and is based in the Edmonton office. Greg is a long-time Intuit team member with experience working in all areas of the business, including full-stack development on the TurboTax and QuickBooks products, as well as leadership experience across several aspects of Intuit’s platform. Greg’s areas of interest include AI and conversational assistants, geospatial data and maps, image processing and data extraction, and user-centric software design.


Horace Chan is the Software Development Manager for the Conversational Experience team, based in the Edmonton office. Horace has grown his skills and career at the University of Alberta and Intuit, with experience in highly scaled web development that leverages AI/ML to deliver customer benefit and business impact. Horace's areas of interest include applied AI/ML, conversational experiences, and software architecture.

March 25, 2022:
Machine Learning at Electronic Arts

Bill Gordon, Electronic Arts Orlando

Alex Lucas, BioWare Studios Edmonton

*co-hosted with Technology Alberta*

https://www.youtube.com/watch?v=V_Oyl5cx4Mc

Abstract: This seminar will highlight many of the interesting applications of Machine Learning in use and being explored at Electronic Arts, a leading gaming publisher. Just as machine learning is transforming many industries, the gaming industry is finding exciting ways to address many of the unique challenges that game software development presents, and EA is making great investments into the space. This seminar will also provide insight into EA’s commitment to attracting talent and should answer many questions about how one might join the organization as an AI/ML developer or scientist.

Presenter Bios:

William Gordon (Bill) is a senior software engineer with Electronic Arts (EA), working in AI and rendering. He currently works on a DTS research and applications team conducting practical machine learning work to advance gameplay AI and game validation for all EA games. Bill is currently working on ML projects that include object detection, object tracking, image classification, image segmentation, imitation learning (IL), reinforcement learning (RL), and natural language processing (NLP). Before joining EA, Bill worked at Disney and at mobile start-ups in a variety of low-level embedded software and mobile app development roles. Bill has a B.S.E.E. from Purdue University. He is a Disney Inventor and has patents pending with EA. In his free time, Bill loves spending time at the beach with his wife and kids in sunny Florida.


Alex Lucas is the Director of Quality Verification for BioWare Studios. Although a Calgary native, he’s been with BioWare in Edmonton for the past 18 years, first as a software engineer and later as both a technical and department director. Alex has for many years been an advocate for leveraging relationships with academia and research (most notably with the University of Alberta and the NRCC) and has led many partnership engagements in the ML space. He has a BSc in Computer Science from the University of Calgary and several patents with EA. He is a new, first-time dad, and very tired.


March 18, 2022:
GPEX, A Framework For Interpreting Artificial Neural Networks
Amir Hossein Hosseini Akbarnejad, University of Alberta

https://www.youtube.com/watch?v=08zjKrrGosU

Abstract: Machine learning researchers have long noted a trade-off between interpretability and prediction performance. On the one hand, traditional models are often interpretable to humans, but they cannot achieve high prediction performance. At the opposite end of the spectrum, deep models can achieve state-of-the-art performance in many tasks, but their predictions are known to be uninterpretable to humans. We present a framework that narrows the gap between these two groups of methods. Given an artificial neural network (ANN), our method finds a Gaussian process (GP) whose predictions almost match those of the ANN. As GPs are highly interpretable, we use the trained GP to explain the ANN's decisions. The explanations provide intriguing insights into the ANN's decisions. We examine some of the known theoretical conditions under which an ANN is interpretable by GPs. Some of those theoretical conditions are too restrictive for modern architectures. However, we hypothesize that only a subset of those conditions is sufficient. We implement our framework as a publicly available tool called GPEX: www.github.com/amirakbarnejad/gpex
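The core idea of fitting a GP whose predictions track a trained network can be sketched in a few lines of NumPy. Here `ann` is merely a stand-in for a real trained network, and the kernel choice is illustrative; GPEX itself is considerably more involved:

```python
import numpy as np

# Stand-in for a trained ANN's scalar output (illustration only;
# GPEX distils an actual trained network).
def ann(x):
    return np.tanh(3 * x)

def rbf(a, b, length=0.3):
    # Squared-exponential kernel between two 1-D input sets
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

# Fit an exact GP regressor to the network's outputs on training inputs
X = np.linspace(-1.0, 1.0, 40)
y = ann(X)
K = rbf(X, X) + 1e-6 * np.eye(len(X))   # small jitter for numerical stability
alpha = np.linalg.solve(K, y)

# The GP surrogate should closely reproduce the ANN on unseen inputs
X_test = np.linspace(-0.9, 0.9, 100)
gp_pred = rbf(X_test, X) @ alpha
gap = float(np.max(np.abs(gp_pred - ann(X_test))))
```

Once the surrogate matches the network, the GP's kernel weights reveal which training examples most influence each prediction, which is the kind of explanation the abstract refers to.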

Presenter Bio: Amir Akbarnejad is a PhD student at the University of Alberta working under the supervision of Nilanjan Ray and Gilbert Bigras. His research is in machine learning and computer vision, with a focus on histopathology data.


March 11, 2022:
Towards High-fidelity 3D Modeling and Animation
Xinxin Zuo, University of Alberta

No recording available

Abstract: We take lots of pictures every day with our mobile phones, where images are 2D arrays storing color values. But we live in a 3D world, and the ability to reason about 3D properties is the basic capability underlying tasks such as navigation, object manipulation, and scene understanding. In this talk, she will present some of her recent work on designing low-cost and convenient 3D modeling systems that build high-quality 3D models, such as human avatars, by exploiting machine learning techniques. Stepping forward from the modeling problem, she will also share some interesting work on model animation.

Presenter Bio: Xinxin Zuo is currently a Postdoctoral Fellow at the Vision and Learning Lab, University of Alberta. She received her Ph.D. degree from the Department of Computer Science, University of Kentucky, in 2019. Before that, she received B.Eng. and M.Eng. degrees from the School of Computer Science, Northwestern Polytechnical University. Her research interests include machine learning, computer vision, and computer graphics, in particular 3D vision. She has published 20+ papers in related areas, including top journals such as TPAMI, IJCV, and TMM and top conferences such as CVPR, ICCV, and ECCV.


March 4, 2022:
SIMLR, Machine Learning inside an SIR model for COVID-19 forecasting
Roberto Vega, University of Alberta

https://www.youtube.com/watch?v=8Dk-O4OeEzc&list=PLKlhhkvvU8-ZYAernGzP2ZKMz1RILmXd6&index=6

Abstract: Accurate forecasts of the number of newly infected people during an epidemic are critical for making effective and timely decisions. This talk addresses this challenge using the SIMLR model, which incorporates machine learning (ML) into the epidemiological SIR model. For each region, SIMLR tracks changes in the policies implemented at the government level, which it uses to estimate the time-varying parameters of an SIR model for forecasting the number of new infections one to four weeks in advance. It also forecasts the probability of changes in those government policies at each of these future times, which is essential for the longer-range forecasts. We applied SIMLR to data from Canada and the United States and show that its mean absolute percentage error is as good as that of state-of-the-art forecasting models, with the added advantage of being an interpretable model. We expect that this approach will be useful not only for forecasting COVID-19 infections but also for predicting the evolution of other infectious diseases.
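For readers unfamiliar with the underlying compartmental model, a minimal discrete-time SIR rollout with a time-varying transmission rate looks roughly like this. All numbers are invented, and SIMLR's actual contribution is to estimate the time-varying parameters from policy signals with ML rather than fixing them by hand:

```python
def sir_forecast(s0, i0, r0, betas, gamma=0.1):
    """Discrete-time SIR rollout with a time-varying transmission rate
    beta_t (illustrative sketch only)."""
    n = s0 + i0 + r0                  # total population is conserved
    s, i, r = float(s0), float(i0), float(r0)
    new_cases = []
    for beta in betas:
        inf = beta * s * i / n        # newly infected this step
        rec = gamma * i               # newly recovered this step
        s, i, r = s - inf, i + inf - rec, r + rec
        new_cases.append(inf)
    return new_cases

# A policy tightening halfway through lowers beta and bends the curve
weekly = sir_forecast(990_000, 10_000, 0, betas=[0.3] * 4 + [0.05] * 4)
```

The forecast grows while `beta` is high and drops sharply once it falls, which is why modelling policy-driven changes in `beta` matters for multi-week forecasts.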

Presenter Bio: Roberto Vega is a PhD candidate at the University of Alberta working under the supervision of Russ Greiner. His research focuses on how to combine machine learning with medical expert knowledge to learn accurate predictive models. He also collaborates with the local startup MEDO.ai, where he works alongside their AI team to automatically analyze ultrasound images for the detection of health problems.


Feb 25, 2022:
Towards Adaptive Model-Based Reinforcement Learning
Yi Wan, University of Alberta

https://www.youtube.com/watch?v=L8BgNLzhPjE&list=PLKlhhkvvU8-ZYAernGzP2ZKMz1RILmXd6&index=3

Abstract: In recent years, a growing number of deep model-based reinforcement learning (RL) methods have been introduced. The interest in deep model-based RL is not surprising, given its many potential benefits, such as higher sample efficiency and the potential for fast adaptation to changes in the environment. However, we demonstrate, using an improved version of the recently introduced Local Change Adaptation (LoCA) setup, that well-known model-based methods such as PlaNet and DreamerV2 perform poorly in their ability to adapt to local environmental changes. Combined with prior work that made a similar observation about another popular model-based method, MuZero, a trend emerges suggesting that current deep model-based methods have serious limitations. We dive deeper into the causes of this poor performance by identifying elements that hurt adaptive behavior and linking these to underlying techniques frequently used in deep model-based RL. We empirically validate these insights in the case of linear function approximation by demonstrating that a modified version of linear Dyna achieves effective adaptation to local changes. Furthermore, we provide detailed insights into the challenges of building an adaptive non-linear model-based method by experimenting with a non-linear version of Dyna.
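Dyna, referenced above, interleaves learning from real transitions with planning updates drawn from a learned model. A minimal tabular Dyna-Q sketch (a generic textbook variant with an invented toy environment, not the linear or non-linear versions studied in the talk) is:

```python
import random

def dyna_q(step_fn, n_states, n_actions, episodes=50, horizon=30,
           planning_steps=10, alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    """Tabular Dyna-Q (generic sketch): every real transition is stored in a
    deterministic model, and each real step is followed by extra 'planning'
    updates that replay modelled transitions."""
    rng = random.Random(seed)
    q = [[0.0] * n_actions for _ in range(n_states)]
    model = {}                                    # (s, a) -> (r, s')
    for _ in range(episodes):
        s = 0
        for _ in range(horizon):
            if rng.random() < eps:                # epsilon-greedy behaviour
                a = rng.randrange(n_actions)
            else:                                 # greedy, random tie-break
                a = max(range(n_actions),
                        key=lambda b: (q[s][b], rng.random()))
            r, s2 = step_fn(s, a)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            model[(s, a)] = (r, s2)
            for _ in range(planning_steps):       # planning from the model
                (ps, pa), (pr, ps2) = rng.choice(list(model.items()))
                q[ps][pa] += alpha * (pr + gamma * max(q[ps2]) - q[ps][pa])
            s = s2
    return q

# Toy corridor: moving right (action 1) three times from state 0 earns +1.
def step_fn(s, a):
    if a == 1 and s == 2:
        return 1.0, 0                 # reward, then reset to the start
    return 0.0, min(s + 1, 2) if a == 1 else max(s - 1, 0)

q = dyna_q(step_fn, n_states=3, n_actions=2)
```

After a local change to the environment, the stored model keeps replaying stale transitions, which is exactly the adaptation failure mode the talk investigates.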

Presenter Bio:
Yi Wan is a fifth-year Ph.D. candidate in Computing Science at the University of Alberta, focusing on reinforcement learning, which he believes is the most promising path to artificial general intelligence. His Ph.D. supervisor is Professor Rich Sutton. His long-term research goal is to build simple, general, and scalable learning and planning algorithms for reinforcement learning problems. He is particularly interested in designing these algorithms 1) for the average-reward problem setting, 2) with function approximation, and 3) with temporal abstractions. Previously, he earned a bachelor's degree in Electrical and Computer Engineering (ECE) from Shanghai Jiao Tong University (SJTU), where he worked in the SJTU Speech Lab, advised by Professor Kai Yu. After that, he obtained a master's degree, also in ECE, from the University of Michigan, where he worked in the Intelligent Robotics Lab, advised by Professor Ben Kuipers. During his master's and Ph.D. studies, he interned as a researcher at Mila, Huawei, and TuSimple, and as an engineer at Yitu.

Feb 18, 2022:
Machine Learning Early-Stage Projects – Lessons Learned
John Murphy, Bio-Stream

No recording available

Abstract:
During Amii's ECO System project, companies wondering whether machine learning could be of value worked with supporting groups like StreamML to explore how machine learning might fit. Both StreamML's tools and a medical-device example from Bio-Stream Diagnostics Inc. are highlighted to share some lessons learned.

Presenter Bio: John Murphy, a graduate of the UofA, is CEO and co-founder of Bio-Stream Diagnostics Inc., a startup in the pathogen detection space that leverages Stream Technologies Inc., a machine learning company he also founded. John has over thirty years of technology commercialization experience, including founding and growing multiple Alberta-based startups, with a mix of successful exits, failures, and some too early to tell. He is active on several advisory boards, is Chairman of nanocluster Alberta, is a local angel investor, and was on the founding board of the A100 organization here in Alberta.

Feb 11, 2022:
The Increasing Role of Sensorimotor Experience in Artificial Intelligence
Rich Sutton, DeepMind; University of Alberta

https://www.youtube.com/watch?v=r6o05gOOtpg

Abstract: We receive information about the world through our sensors and influence the world through our effectors. Such low-level experiential data has gradually come to play a greater role in AI during its 70-year history. I see this as occurring in four steps, two of which are mostly past and two of which are in progress or yet to come. The first step was to view AI as the design of agents which interact with the world and thereby have sensorimotor experience; this viewpoint became prominent in the 1980s and 1990s. The second step was to view the goal of intelligence in terms of experience, as in the reward signal of optimal control and reinforcement learning. The reward formulation of goals is now widely used but rarely loved. Many would prefer to express goals in non-experiential terms, such as reaching a destination or benefiting humanity, but settle for reward because, as an experiential signal, reward is directly available to the agent without human assistance or interpretation. This is the pattern that we see in all four steps. Initially a non-experiential approach seems more intuitive, is preferred and tried, but ultimately proves a limitation on scaling; the experiential approach is more suited to learning and scaling with computational resources. The third step in the increasing role of experience in AI concerns the agent’s representation of the world’s state. Classically, the state of the world is represented in objective terms external to the agent, such as “the grass is wet” and “the car is ten meters in front of me”, or with probability distributions over world states such as in POMDPs and other Bayesian approaches. Alternatively, the state of the world can be represented experientially in terms of summaries of past experience (e.g., the last four Atari video frames input to DQN) or predictions of future experience (e.g., successor representations). The fourth step is potentially the biggest: world knowledge. 
Classically, world knowledge has always been expressed in terms far from experience, and this has limited its ability to be learned and maintained. Today we are seeing more calls for knowledge to be predictive and grounded in experience. After reviewing the history and prospects of the four steps, I propose a minimal architecture for an intelligent agent that is entirely grounded in experience.

Presenter Bio:
Richard S. Sutton is a distinguished research scientist at DeepMind, a professor in the Department of Computing Science at the University of Alberta, and a fellow of the Royal Society (UK), the Royal Society of Canada, the Association for the Advancement of Artificial Intelligence, the Alberta Machine Intelligence Institute (Amii), and CIFAR. Sutton received a PhD in computer science from the University of Massachusetts in 1984 and a BA in psychology from Stanford University in 1978. Prior to joining the University of Alberta in 2003, he worked in industry at AT&T Labs and GTE Labs, and in academia at the University of Massachusetts.  In Alberta, Sutton founded the Reinforcement Learning and Artificial Intelligence Lab, which now consists of ten principal investigators and about 100 people altogether. He joined DeepMind in 2017 to co-found their first satellite research lab, in Alberta. Sutton is co-author of the textbook Reinforcement Learning: An Introduction from MIT Press. His research interests center on the learning problems facing a decision-maker interacting with its environment, which he sees as central to intelligence. He has additional interests in animal learning psychology, in connectionist networks, and generally in systems that continually improve their representations and models of the world. His scientific publications have been cited more than 100,000 times. He is also a libertarian, a chess player, and a cancer survivor.

Jan 21, 2022:
(Near)-optimal Regret Bound for Differentially Private Thompson Sampling
Bingshan Hu, University of Alberta

https://www.youtube.com/watch?v=ddyFmyIPEHU&list=PLKlhhkvvU8-ZYAernGzP2ZKMz1RILmXd6&index=4

Abstract:
The multi-armed bandit problem is a classical sequential decision-making problem in which the goal is to accumulate as much reward as possible. In this learning model, only a limited amount of information is revealed in each round. This imperfect feedback places the learning algorithm in a dilemma between exploration (gaining information) and exploitation (accumulating reward). Thompson Sampling is a classical learning algorithm that strikes a good balance between exploration and exploitation, and it consistently shows very competitive empirical performance.
In standard non-private learning, the learning algorithm can always access the true revealed information to make future decisions. However, if the revealed information is about individuals, then to preserve privacy, the decisions made by the learning algorithm should not rely directly on the true revealed information. In this talk, I will present a Thompson Sampling-based algorithm, DP-TS, for private stochastic bandits. The regret upper bound for DP-TS matches the known regret lower bound up to an extra log log T factor.
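As background, standard (non-private) Beta-Bernoulli Thompson Sampling can be sketched as follows; DP-TS, the talk's contribution, additionally privatizes the statistics that drive these posteriors, and the details below are generic rather than from the paper:

```python
import random

def thompson_sampling(true_means, horizon=5000, seed=0):
    """Beta-Bernoulli Thompson Sampling for Bernoulli-reward arms
    (standard non-private version, generic sketch)."""
    rng = random.Random(seed)
    k = len(true_means)
    wins = [1] * k     # Beta(1, 1) uniform priors on each arm's mean
    losses = [1] * k
    pulls = [0] * k
    for _ in range(horizon):
        # Sample a plausible mean for each arm from its posterior ...
        samples = [rng.betavariate(wins[a], losses[a]) for a in range(k)]
        a = samples.index(max(samples))   # ... and play the most promising arm
        reward = 1 if rng.random() < true_means[a] else 0
        wins[a] += reward
        losses[a] += 1 - reward
        pulls[a] += 1
    return pulls

pulls = thompson_sampling([0.3, 0.5, 0.7])
```

Posterior sampling naturally balances the exploration-exploitation dilemma described above: uncertain arms occasionally produce high samples and get explored, while the empirically best arm is exploited most of the time.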

Presenter Bio: Bingshan Hu is an Amii Postdoctoral Fellow co-hosted by Prof. Nidhi Hegde from the University of Alberta and Prof. Mark Schmidt from the University of British Columbia. She completed her PhD at the University of Victoria under the supervision of Prof. Nishant Mehta in 2021. Her research lies on the theoretical side of machine learning, aiming to devise efficient and private online learning algorithms. She serves as a reviewer for conferences such as NeurIPS, ICML, and AISTATS, and was recognized as one of the top 10% of high-scoring reviewers at NeurIPS 2020.
Prior to pursuing her PhD studies, she worked in industry research labs as a wireless technology specialist. She has invented or co-invented around 20 patents, more than half of which have been granted by either the European or the US patent office. Besides the foundations of online learning, she is also interested in the use of online learning in novel applications in wireless networks.

Jan 14, 2022:
Leveraging AI to detect crisis events
Sky Sun & Adam St Arnaud, Samdesk

*co-hosted with Technology Alberta*

No recording available

Abstract: Samdesk is an Edmonton-based startup that uses AI to detect and monitor global crisis events in real time. Our system ingests a range of social media and traditional web sources, and the scope of problems that our AI tackles includes text classification, source trustworthiness estimation, input geo-localization, text similarity, and clustering. In this talk, we present how Samdesk helps our customers act smarter and faster, examine some of the main ML and AI systems within Samdesk, and discuss some practical lessons we have picked up about gathering and processing data and applying AI in production.

Presenter Bios:
Sky Sun finished her MA in Digital Humanities at the U of A and is working full time at samdesk as Data Intel Team Lead. Her role at samdesk is to enrich and manage data to support AI performance. She is passionate about taking a humanistic and interdisciplinary approach to solving technical problems.
Adam St Arnaud received his MSc in Computing Science at the U of A in 2017 under the supervision of Greg Kondrak. Adam is a Lead Machine Learning Engineer at samdesk, where he has been working since 2017.

June 24, 2022:
Bringing Alberta’s Tech Companies to You – Technology Alberta’s FIRST Jobs Program
Gail Powley, President Technology Alberta

https://www.youtube.com/watch?v=cJFlL6xcVWQ&list=PLKlhhkvvU8-bFpoIPFTCVsYmqFVpt7OTz&index=13

Abstract:
Technology Alberta works with leaders across industry, government, and academia to grow Alberta's tech sector, which is hiring very actively. By gathering the insights of AI/ML graduate students, professors, and entrepreneurial tech companies, the Technology Alberta FIRST (First Industry Research Science Technology) Jobs program was created in 2020 to help university students and new graduates launch their professional careers by providing entry-level part-time jobs with industry. Over 200 companies have participated, providing work experience in areas such as product development, AI/ML, data science, data analytics, and software development. Student participants have reported outstanding results, such as expanding their networks to include 50 local tech company presidents, subsequent part-time and full-time job offers, and interesting work experience with companies whose software products address cyber-bullying, crisis detection, clean tech, gaming, and more. Sponsored by the Government of Alberta through Advanced Education, the Technology Alberta FIRST Jobs Program will be open to all Alberta post-secondary students in July 2022 – so be ready to apply, learn more about the over 3,000 entrepreneurial tech companies in the province, and be part of Alberta's tech community!

Join us at this session to hear more about the program – and to offer any program ideas on what you would like to see. 

https://technologyalberta.com/?page_id=1440

AI/ML Tech Companies that have participated in the past include: AltaML, Areto Labs, Biostream, Chata.AI, HealthGauge, Honest Door, RunwithIT, StreamML – and many more.

Presenter Bio:
Gail has over 30 years of experience working for companies such as Procter & Gamble, Matrikon (now Honeywell), and Willowglen Systems – as a software engineer, product manager, tech company executive, and leader of technology associations. Gail is an award-winning community builder, recognized for supporting diversity in the workforce, inclusive workplace policies, and volunteer efforts in the local tech sector. In her role with Technology Alberta, she has worked with a large team of volunteers and community builders to create programs that open opportunities for Alberta students in local companies, as well as tech leadership programs that help grow the tech sector overall.