Past Seminars: 2019

December 20, 2019: Off-policy Reinforcement Learning via Duality Lens
- Bo Dai,
Google Brain

Abstract: In many real-world reinforcement learning applications, access to the underlying dynamic environment is limited to a fixed set of data that has already been collected; no information about the collection procedure is available, and no additional interaction with the environment is possible. Designing RL algorithms for such an off-policy setting is one of the keys to making RL practical. In this talk, we reveal an important duality between the Q-function and stationary state-action distributions, which leads to a series of off-policy policy estimators, including DualDICE and GenDICE, and a novel off-policy policy improvement algorithm, AlgaeDICE. In addition to providing theoretical guarantees, we present an empirical study of our algorithms applied to off-policy policy evaluation/improvement and find that their performance improves significantly compared to existing techniques.
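The form of such stationary-distribution-corrected estimators can be sketched in a few lines. In this toy Python example the correction ratios are simply assumed rather than learned, so it illustrates only the shape of the estimator, not DualDICE or GenDICE themselves:

```python
# Toy illustration: off-policy estimation via stationary distribution
# correction. Hypothetical logged rewards collected by an unknown
# behaviour policy, one per observed state-action pair.
rewards = [1.0, 0.0, 1.0, 1.0, 0.0, 1.0]

# w(s, a) = d_pi(s, a) / d_D(s, a): the ratio between the target policy's
# stationary state-action distribution and the data distribution.
# DualDICE/GenDICE learn these ratios from data; here they are assumed.
ratios = [1.5, 0.5, 1.2, 0.8, 0.5, 1.5]

# Corrected estimate of the target policy's average reward:
#   E_{d_D}[w(s, a) * r] = E_{d_pi}[r]
estimate = sum(w * r for w, r in zip(ratios, rewards)) / len(rewards)
print(round(estimate, 4))  # 0.8333
```

The entire difficulty of the DICE family lies in estimating the ratios from data; once they are known, off-policy evaluation reduces to this weighted average.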

Bio: Bo Dai is a research scientist at Google Brain. He is the recipient of best paper awards at AISTATS 2016 and the NIPS 2017 workshop on Machine Learning for Molecules and Materials. His research interest lies in developing principled (deep) machine learning methods using tools from optimization, especially for reinforcement learning and automatic algorithm design, as well as their various applications.

December 6, 2019: Building Meaningful Educational Testing Software through User Experience Design and AI
- Mike Priest, TenSpeed Technologies
*co-hosted w/ Technology Alberta*

Abstract: Technology Alberta is pleased to feature TenSpeed Technologies (10spd.com), a product-driven technology company with offices in Amsterdam and at the Advanced Technology Centre (ATC) in the Edmonton Research Park. One of their solutions focuses on high-stakes examinations: with the use of artificial intelligence, deeper insights are provided into candidate intelligence while tracking down potential examination fraud. TenSpeed's vision is to provide cognitive student profiles rather than traditional A/B/C scoring structures. With a real understanding of candidate potential, students can be guided into careers where they can excel and thrive.

Bio: Mike Priest is an entrepreneur and technologist who immigrated to Canada in 2008 from Liverpool, England. Mike founded TenSpeed in 2014 and has launched 18 software products across Europe and North America. Specializing in process and document-driven solutions, Mike's latest achievements were the acquisition of JumpSeat training software in 2017 and Trifork Learning in 2019. Mike runs the TenSpeed Team from their Edmonton offices.

November 29, 2019: Advancing medical image analysis with machine learning
- Kumaradevan Punithakumar,
University of Alberta

Abstract: Medical image analysis is an emerging field aiming to automate the processing of medical images from modalities such as magnetic resonance imaging, computed tomography and ultrasonography. Currently, the majority of the clinical image processing tasks are performed with tedious, time-consuming and error-prone manual processing. Machine learning has recently emerged as a promising alternative to solve many medical image assessment tasks. Dr. Punithakumar will present the progress to date on the development of machine learning methods for solving fundamental tasks such as segmentation, registration and classification as well as future directions to advance medical image interpretation and analysis.

Bio: Kumar Punithakumar is an Assistant Professor in the Department of Radiology and Diagnostic Imaging, University of Alberta and the Operational and Computational Director in the Servier Virtual Cardiac Center, Mazankowski Alberta Heart Institute. He received the B.Sc.Eng. degree (Hons.) from the University of Moratuwa and the M.A.Sc. and Ph.D. degrees in electrical and computer engineering from McMaster University. Prior to joining the University of Alberta, he was an imaging research scientist with GE Healthcare from 2008 to 2012. His research interests include machine learning, medical image analysis and visualization, information fusion, and object tracking.

November 22, 2019: Artificial intelligence to automate ultrasound medical image analysis:
The 21st century stethoscope
- Jacob Jaremko,
University of Alberta

Abstract: Because AI can now accept medical images directly as inputs to identify anatomy and pathology, many teams are working on AI analysis of CT, MRI and X-ray. Ultrasound images are more difficult to work with, but unlike CT/MRI, ultrasound probes are inexpensive and portable. Ultrasound could become the 21st century stethoscope, with handheld wireless probes generating images interpreted by AI. This can transform medical care. As a candidate for the CIFAR AI Chair, Dr. Jaremko will outline progress to date, his vision for the future of AI in ultrasound, and why we at UofA in Edmonton are particularly well equipped to lead in this field.

Bio: Jacob Jaremko is a Radiologist, Associate Professor and AHS Endowed Chair at UofA. He obtained an MD and a PhD in Biomedical Engineering at the University of Calgary, residency training in Diagnostic Radiology at UofA, and two clinical fellowships: Pediatric Radiology at the Royal Children's Hospital in Melbourne, Australia, and Musculoskeletal Radiology at Massachusetts General Hospital in Boston, USA. His background in deep learning began in 1999 with the use of genetic-algorithm neural networks for his PhD thesis. His main research interests are in the use of artificial intelligence to automate image analysis, particularly of ultrasound, and in the development and degeneration of bones and joints.

November 15, 2019: Provable Reinforcement Learning From Limited Data
- Mengdi Wang,
Princeton University

Abstract: Despite phenomenal empirical successes, many theoretical questions about RL are not yet fully understood. For example, how many observations are necessary and sufficient for learning a good policy? How can one learn to control using structural information with provable regret? In this talk, we discuss some recent progress on RL theory. (1) We study the statistical efficiency of RL, with and without structural information such as a linear feature representation, and show how to algorithmically learn the optimal policy with (nearly) information-theoretically optimal sample complexity. (2) The complexity of RL algorithms depends largely on the dimension of the state representation. Towards reducing the dimension of RL, we discuss a statistical state embedding method that automatically learns state features and aggregation structures from trajectory data, in order to embed the conditional transition distributions in a low-dimensional space.
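As a schematic analogue of the state-embedding idea (not the method from the talk), one can picture embedding states via a low-rank factorization of an empirical transition matrix. Here a plain SVD on a hypothetical 4-state example recovers a rank-2 structure in which states 0/1 and 2/3 aggregate:

```python
import numpy as np

# Hypothetical empirical transition matrix P[s, s'] estimated from
# trajectory data; rows sum to 1. States 0 and 1 (and likewise 2 and 3)
# transition identically, so P has rank 2.
P = np.array([
    [0.7, 0.3, 0.0, 0.0],
    [0.7, 0.3, 0.0, 0.0],
    [0.0, 0.0, 0.4, 0.6],
    [0.0, 0.0, 0.4, 0.6],
])

# Rank-2 factorization: each state gets a 2-dimensional feature vector,
# embedding its conditional transition distribution in low dimension.
U, s, Vt = np.linalg.svd(P)
k = 2
features = U[:, :k] * s[:k]      # state embeddings, shape (4, 2)
P_hat = features @ Vt[:k, :]     # reconstruction from the embedding

print(np.allclose(P, P_hat, atol=1e-8))  # True: rank 2 suffices
```

States with identical transition behaviour receive identical feature vectors, which is exactly the aggregation structure such a method is meant to discover from data.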

Bio: Mengdi Wang is a visiting research scientist who recently joined DeepMind. She is on sabbatical leave from Princeton University, where she is an associate professor at the Center for Statistics and Machine Learning, the Department of Operations Research and Financial Engineering, and the Department of Computer Science. Mengdi's current research focuses on theoretical foundations of reinforcement learning, developing accelerated algorithms, as well as applications in finance and medical research. Find more about her at http://mwang.princeton.edu/

November 8, 2019: Explanation in AI: From Machine Learning to Knowledge Representation & Reasoning and Beyond
- Freddy Lecue,
CortAIx

Abstract: The term XAI refers to a set of tools for explaining AI systems of any kind, beyond Machine Learning. Even though these tools aim at addressing explanation in the broader sense, they are not designed for all users, tasks, contexts and applications. This presentation will describe progress to date on XAI by reviewing its approaches, motivation, best practices, industrial applications, and limitations.

Bio: Dr. Freddy Lecue is the Chief Artificial Intelligence (AI) Scientist at CortAIx (Centre of Research & Technology in Artificial Intelligence eXpertise) at Thales in Montreal, Canada. He is also a research associate at INRIA, in the WIMMICS group, Sophia Antipolis, France. Before joining the new R&T lab of Thales dedicated to AI, he was AI R&D Lead at Accenture Labs in Ireland from 2016 to 2018. Prior to joining Accenture, he was a research scientist and lead investigator in large-scale reasoning systems at IBM Research from 2011 to 2016, a research fellow at The University of Manchester from 2008 to 2011, and a research engineer at Orange Labs from 2005 to 2008.

November 1, 2019: Solving the Global Health Issue of Pregnancy Induced Hypertension through the use of Wearable AI/ML technologies
- Imran Ahmed, Randy Duguay, & Bruce Matichuk,
Nutrition and Clinical Services; AiDANT Intelligent Technology; Health Gauge

Abstract: Pregnancy-induced hypertension (PIH), including preeclampsia and eclampsia, complicates 5-8% of all pregnancies and can lead to maternal, fetal and neonatal mortality and morbidity. Conventional prenatal care in low- and middle-income countries often delays or misses the diagnosis of PIH, leaving women vulnerable to its adverse consequences. Over the course of this year, the International Centre for Diarrhoeal Disease Research, Bangladesh (icddr,b), based in Dhaka, worked with Health Gauge, an Edmonton-based company focused on applications of AI/ML software for personal health monitoring and management. Together, icddr,b and Health Gauge have been testing the applicability and efficacy of simple, inexpensive ECG and PPG sensors in a wrist wearable, combined with machine learning tools to derive blood pressure and other biometrics, within a regular care program. This presentation will cover current experience and proposed next steps as the project moves into the next stage of research.

This is an initiative funded by the Bill & Melinda Gates Foundation.

Bio: Dr. Imran Ahmed is a medical doctor and clinical researcher, currently working in the Nutrition and Clinical Services Division of icddr,b. After he earned his MPH degree from James P Grant School of Public Health (JPGSPH), BRAC University, he joined JPGSPH as a senior research associate. While working at JPGSPH, he worked under the supervision of Dr. Malabika Sarker where he focused on neonatal health and developed a neonatal handbook which is now used across Bangladesh. In 2016, Dr Imran Ahmed's team received the prestigious grant Grand Challenge Exploration (GCE) funded by Bill and Melinda Gates Foundation to address the early diagnosis of hypertensive disorders in pregnancy using continuous measurement techniques.

Randy Duguay, M.Eng., P.Eng., and Bruce Matichuk, M.Sc. (Computer Science), are Edmonton-based entrepreneurs who have been working together over the past 6 years to establish early-stage companies focused on the development of applied AI/ML-based applications using different types of sensors and IoT devices. Together they have helped establish two AI/ML product-oriented companies, Health Gauge and AiDANT Intelligent Technology, working with graduates from the University of Alberta.

October 25, 2019: Combining Light and Machine Learning to provide insightful predictions
- John Murphy,
Stream Technologies

Abstract: Machine learning models, or Algorithm applications (Aapps), are designed to analyze pixels and spectral features combined with shape information to identify a particular target. The target could be a disease or fungus on a plant, the level of protein in a sample of seeds, nutrients in soil, a deformed object, cancer cells, or other things a business finds value in detecting. Stream Technologies has pioneered and specialized in developing multi-band convolutional neural nets that offer unprecedented accuracy over spatial analytics alone. Their experience in machine learning has grown into an offering (Stream.ML) that can enable the wider use of these technologies.

Bio: John Murphy, a graduate of the University of Alberta, is CEO and co-founder of Stream Technologies Inc., a startup in the photonics/analytics space. John has over 30 years of technology commercialization experience, including founding and growing multiple start-ups. He is active on several advisory boards, is Chairman of nanocluster Alberta, is a local angel investor, and was on the founding board of the A100 Organization. John works from Stream Technologies' offices at the Advanced Technology Centre (ATC) in the Edmonton Research Park on the south side of Edmonton. Through the collaboration and mentorship of John and others at the ATC and the UofA, over half of the tech companies at the ATC now incorporate AI/ML into their solutions.

October 18, 2019: Efficient Off-policy Estimation in Long-horizon Reinforcement Learning
- Lihong Li,
Google Brain

Abstract: In many real-world applications of reinforcement learning (RL) such as healthcare, dialogue systems and robotics, running a new policy on humans or robots can be costly or risky. This gives rise to the critical need for off-policy estimation, that is, estimating the average reward of a target policy given data collected by another policy in the past. In this talk, we will focus on recent works that directly estimate the importance ratio of states under the *stationary* distributions, made possible by formulating off-policy estimation as a proper optimization problem. Such approaches, unlike typical importance sampling algorithms, avoid the exponential blowup of variance in long-horizon RL problems and thus can be much more efficient.
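The exponential blowup of trajectory-level importance sampling can be seen with a toy calculation (not from the talk): if each per-step ratio pi(a|s)/mu(a|s) is 0.5 or 1.5 with equal probability (so its mean is 1), the trajectory weight is the product over the horizon, and its variance grows geometrically:

```python
# Toy calculation: variance of trajectory-level importance weights.
# Each per-step ratio has mean 1 and second moment
#   0.5 * 0.5^2 + 0.5 * 1.5^2 = 1.25,
# so for an H-step product of independent ratios,
#   Var = E[X^2]^H - (E[X]^H)^2 = 1.25^H - 1.
def trajectory_weight_variance(horizon):
    second_moment = 0.5 * 0.5 ** 2 + 0.5 * 1.5 ** 2  # = 1.25
    return second_moment ** horizon - 1.0

for horizon in (1, 10, 50, 100):
    print(horizon, trajectory_weight_variance(horizon))
```

Already at horizon 50 the variance exceeds 10^4. Stationary-distribution ratio methods sidestep this because the correction is applied once per state-action pair rather than multiplied along the trajectory.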

Bio: Lihong Li is a research scientist at Google Brain. Previously, he held research positions at Yahoo! Research and Microsoft Research. His main research interests are in reinforcement learning, including contextual bandits, and other related problems in AI. His work has found applications in recommendation, advertising, Web search and conversation systems. He has served as area chair or senior program committee member at major AI/ML conferences such as AAAI, ICLR, ICML, IJCAI and NeurIPS.

October 11, 2019: A Three-Head Neural Network Architecture for Monte Carlo tree search & Alpha Zero Style Learning
- Chao Gao,
University of Alberta

Abstract: Monte Carlo tree search (MCTS) has been extensively used for two-player games whose state space can be regarded as an AND/OR graph. The method can flexibly incorporate heuristic knowledge. In AlphaGo and its successors AlphaGo Zero and AlphaZero, game-specific knowledge is learned and expressed via a two-head network architecture: a policy-head output is used for move selection, and a value head provides position evaluation in MCTS.

We propose a three-head architecture that adds an action-value prediction head to the neural network. Using the game of Hex as a test domain, we empirically verify the merits of this architecture in two scenarios: (1) supervised learning and (2) AlphaZero-style reinforcement learning. In (1), we find that the action-value head can achieve similar prediction accuracy to the state-value head with minimal overhead. Using the additional action-value head in MCTS leads to improved playing strength. We also discuss advantages of the new algorithm for delayed node expansion and for augmenting training data using a minimax principle. In (2), using the additional action-value head for AlphaZero-style training and play also results in improved accuracy of both policy and evaluation compared to the standard approach.
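A minimal sketch of such a three-head architecture, with assumed toy dimensions and an untrained numpy forward pass (the real system uses a deep network trained on Hex positions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, hidden, n_moves = 8, 16, 9   # hypothetical toy sizes

# Shared trunk followed by three heads: policy p, state value v, and the
# proposed third head of action values q (one evaluation per move).
W_trunk = 0.1 * rng.normal(size=(n_features, hidden))
W_p = 0.1 * rng.normal(size=(hidden, n_moves))
W_v = 0.1 * rng.normal(size=(hidden, 1))
W_q = 0.1 * rng.normal(size=(hidden, n_moves))

def forward(state):
    h = np.tanh(state @ W_trunk)                    # shared representation
    logits = h @ W_p
    policy = np.exp(logits) / np.exp(logits).sum()  # move probabilities
    value = np.tanh(h @ W_v)                        # state evaluation in [-1, 1]
    q = np.tanh(h @ W_q)                            # per-move evaluations in [-1, 1]
    return policy, value, q

policy, value, q = forward(rng.normal(size=n_features))
print(policy.shape, value.shape, q.shape)  # (9,) (1,) (9,)
```

In MCTS the action-value head supplies an evaluation for each move of a node without expanding its children, which is what enables the delayed node expansion mentioned above.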

Bio: Chao Gao is a PhD student advised by Martin Mueller and Ryan Hayward. Since 2015, his main interest has been developing algorithms for solving and playing the game of Hex, with a particular focus on search algorithms with deep neural networks. He developed MoHex-CNN, which became the strongest 13x13 Hex player in 2017, and MoHex-3HNN, which won the 2018 Computer Olympiad competitions on both 13x13 and 11x11 Hex by defeating the strong competitor DeepEzo. During an internship at Borealis AI, in collaboration with researchers Pablo Hernandez-Leal and Bilal Kartal, he developed a player that won 2nd place in the learning category of Pommerman, a multi-agent game competition held at NeurIPS 2018. He currently works as a research intern at Huawei Edmonton.

October 4, 2019: Explain Yourself - A Semantic Stack for Artificial Intelligence
- Randy Goebel,
University of Alberta

Abstract: Artificial Intelligence is the pursuit of the science of intelligence. The journey includes everything from formal reasoning and high-performance game playing to natural language understanding and computer vision. Each AI experimental domain is littered along a spectrum of scientific explainability, all the way from high-performance but opaque predictive models to multi-scale causal models. While the current AI pandemic is preoccupied with human intelligence and primitive unexplainable learning methods, the science of AI requires what all other science requires: accurate, explainable causal models. The presentation introduces a sketch of a semantic stack model, which attempts to provide a framework for both scientific understanding and implementation of intelligent systems. A key idea is that intelligence should include an ability to model, predict, and explain application domains, which, for example, would transform purely performance-oriented systems into instructors as well.

Bio: Randy Goebel is currently professor of Computing Science in the Department of Computing Science at the University of Alberta, Associate Vice President (Research) and Associate Vice President (Academic), and Fellow and co-founder of the Alberta Machine Intelligence Institute (AMII). He received the B.Sc. (Computer Science), M.Sc. (Computing Science), and Ph.D. (Computer Science) from the Universities of Regina, Alberta, and British Columbia, respectively. Professor Goebel's theoretical work on abduction, hypothetical reasoning and belief revision is internationally well known, and his recent research is focused on the formalization of visualization and explainable artificial intelligence (XAI). He has been a professor or visiting professor at the University of Waterloo, University of Regina, University of Tokyo, Hokkaido University, Multimedia University (Malaysia), and the National Institute of Informatics, and a visiting researcher at NICTA (now Data61) in Australia and at DFKI and the VW Data:Lab in Germany. He has worked on optimization, algorithm complexity, systems biology, and natural language processing, including applications in legal reasoning and medical informatics.

September 27, 2019: Stabilizing and enhancing learning for deep complex and real neural networks
- Chiheb Trabelsi,
Element AI

Abstract: At present, the vast majority of building blocks, techniques, and architectures for training deep neural networks are based on real-valued computations and representations. However, representations based on complex numbers have started to receive increased attention. Despite their compelling properties, complex-valued deep neural networks have been neglected, due in part to the absence of the building blocks required to design and train this type of network. The lack of such a framework represents a noticeable gap in deep learning tooling.

We aim to fill this gap by providing new methods that go far beyond a simple theoretical generalization of real-valued neural networks. More specifically, we develop some key atomic components for complex-valued deep neural networks and apply them to convolutional feed-forward networks and convolutional LSTMs. Complex convolutions and new algorithms for complex batch-normalization and complex weight initialization strategies form the bedrock of the proposed framework.
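The complex convolution at the heart of such a framework reduces to four real convolutions via the product rule (a + ib)(c + id) = (ac - bd) + i(ad + bc). A minimal 1-D sketch, checked against Python's built-in complex arithmetic (the actual building blocks operate on 2-D feature maps):

```python
# Complex 1-D convolution built from real convolutions:
# (W_r + i W_i) * (x_r + i x_i)
#   = (W_r*x_r - W_i*x_i) + i(W_i*x_r + W_r*x_i)
def conv1d(w, x):
    """Valid-mode real 1-D convolution (correlation form, no kernel flip)."""
    n = len(x) - len(w) + 1
    return [sum(w[j] * x[i + j] for j in range(len(w))) for i in range(n)]

def complex_conv1d(w_r, w_i, x_r, x_i):
    real = [a - b for a, b in zip(conv1d(w_r, x_r), conv1d(w_i, x_i))]
    imag = [a + b for a, b in zip(conv1d(w_i, x_r), conv1d(w_r, x_i))]
    return real, imag

# Check against Python's built-in complex arithmetic on hypothetical data.
w_r, w_i = [1.0, 2.0], [0.5, -1.0]
x_r, x_i = [1.0, 0.0, 2.0, 3.0], [2.0, 1.0, 0.0, -1.0]
real, imag = complex_conv1d(w_r, w_i, x_r, x_i)

w = [complex(a, b) for a, b in zip(w_r, w_i)]
x = [complex(a, b) for a, b in zip(x_r, x_i)]
expected = [sum(w[j] * x[i + j] for j in range(len(w)))
            for i in range(len(real))]
print(all(abs(complex(r, m) - e) < 1e-12
          for r, m, e in zip(real, imag, expected)))  # True
```

Because the complex operation decomposes into real ones, it can be implemented on top of standard real-valued convolution layers; batch normalization and weight initialization, by contrast, need genuinely complex-aware reformulations.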

We also provide an extension that builds on the results obtained in the first article to provide a novel fully complex-valued pipeline for automatic signal retrieval and signal separation in the frequency domain. We call this model a Deep Complex Separator. It is a novel masking-based method founded on a complex-valued version of Feature-wise Linear Modulation (FiLM). This new masking method is the key approach underlying our proposed speech separation architecture. We also propose a new explicitly phase-aware loss, which is amplitude and shift-invariant, taking into account the complex-valued components of the spectrogram. We compare it to several other phase-aware losses operating in both time and frequency domains.

We also examine issues related to the nature of orthogonal matrices employed to train recurrent neural networks. Unitary transition matrices and orthogonal constraints have been shown to provide an appropriate solution to the vanishing and exploding gradients problem in recurrent neural networks. However, imposing hard constraints has been shown to be computationally expensive and unnecessary for some real-world tasks. In this part, we aim to assess the utility of different types of soft and hard orthogonality constraints by controlling the extent of the deviation of the corresponding transition weight matrix from the manifold of orthogonal transformations and measuring the generalization performance of the associated recurrent neural network.

Bio: Chiheb Trabelsi received his PhD in Computer Engineering at the Quebec Artificial Intelligence Institute (Mila), where he was supervised by Christopher Pal; he defended his thesis in July 2019. He is currently finishing a long-term internship at Element AI. During his PhD he focused mainly on learning hypercomplex-valued representations and on stabilizing the learning process for deep complex- and real-valued neural networks. Before that, he received an M.Sc. in Computer Science from the University of Montreal, where his research was on statistical machine translation into morphologically rich languages. Trabelsi obtained his B.Sc. in Computer Science and Management from the University of Tunis, where he was ranked 2nd among 257 students.

September 20, 2019: Fatou's Lemma for Varying Probabilities and Its Applications to Sequential Decision Making
- Eugene Feinberg,
Stony Brook University

Abstract: This is a high-level overview talk on infinite-horizon Markov decision processes (MDP) with the expected total discounted rewards and with finite state and action sets. The talk focuses on the following three topics: comparisons of major methods for computing optimal policies, applications to other objective functions, and problems with multiple criteria and constraints. The major methods for computing optimal policies are value iterations, policy iterations, and linear programming.

In spite of their simplicity, discounted MDPs are successfully used for studying more complicated criteria and for developing algorithms for their optimization. Such criteria include expected total undiscounted rewards, average rewards per unit time, and weighted discounted rewards. The first two criteria can be approximated by discounted rewards. In addition, under certain conditions, they can be reduced to discounted criteria. The weighted discounted rewards, which are linear combinations of finite numbers of total discounted rewards with different discount factors, can be reduced to discounted ones by enlarging the state space. Though optimal stationary policies may not exist for MDPs with weighted discounted rewards, the appropriate optimal policies can be computed by applying algorithms available for discounted and finite-horizon MDPs.
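Value iteration, the first of the methods mentioned, can be made concrete on a tiny hypothetical MDP. Because the Bellman update is a gamma-contraction, the sweep below converges geometrically to the optimal values, from which a greedy optimal policy is read off:

```python
# Value iteration for a toy 2-state, 2-action discounted MDP.
# All numbers are hypothetical, chosen only for illustration.
states, actions, gamma = [0, 1], [0, 1], 0.9

# P[s][a] = list of (next_state, probability); R[s][a] = expected reward.
P = {0: {0: [(0, 0.8), (1, 0.2)], 1: [(1, 1.0)]},
     1: {0: [(0, 1.0)],           1: [(1, 0.6), (0, 0.4)]}}
R = {0: {0: 1.0, 1: 0.0}, 1: {0: 2.0, 1: 0.5}}

V = {s: 0.0 for s in states}
for _ in range(500):  # contraction: error shrinks by a factor gamma per sweep
    V = {s: max(R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a])
                for a in actions)
         for s in states}

# Greedy policy with respect to the (near-)optimal values.
policy = {s: max(actions,
                 key=lambda a: R[s][a] + gamma * sum(p * V[t]
                                                     for t, p in P[s][a]))
          for s in states}
print({s: round(V[s], 3) for s in states}, policy)
```

Policy iteration and linear programming compute the same fixed point; they differ in how many Bellman evaluations each step costs and how many steps are needed.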

We shall also discuss constrained discounted MDPs that deal with problems with multiple criteria. In particular, the famous Hamiltonian cycle problem can be modeled by using constrained discounted MDPs.

Bio: Dr. Eugene A. Feinberg is a Distinguished Professor in the Department of Applied Mathematics and Statistics at Stony Brook University. He is an expert on applied probability, stochastic models of operations research, Markov decision processes, and industrial applications of operations research and statistics. He has published more than 150 papers and edited the Handbook of Markov Decision Processes. His research has been supported by NSF, DOE, DOD, NYSTAR (New York State Office of Science, Technology, and Academic Research), NYSERDA (New York State Energy Research and Development Authority) and by industry. He is a Fellow of INFORMS (The Institute for Operations Research and the Management Sciences) and has received several awards, including the 2012 IEEE Charles Hirsh Award for developing and implementing smart grid technologies, a 2012 IBM Faculty Award, and the 2000 Industrial Associates Award from Northrop Grumman. Dr. Feinberg is an Associate Editor for Mathematics of Operations Research, Stochastic Systems, and Applied Mathematics Letters, and an Area Editor for Operations Research Letters.

September 13, 2019: Technological Displacement and the Duty to Increase Real Incomes: from Left to Right
- Howard Nye, University of Alberta

Abstract: Many economists have argued convincingly that automated systems employing present-day, narrow artificial intelligence have caused massive technological displacement, which has led to stagnant real wages, fewer middle-income jobs, and increased economic inequality in developed countries like Canada and the United States. To address this problem, various authors have proposed measures to increase workers' real incomes, including the adoption of a universal basic income, increased public investment in education, increased minimum wages, increased worker control of firms, and investment in a Green New Deal that would provide substantial employment in transitioning to green buildings, agriculture, and energy. In this presentation I argue that both left-wing and right-wing positions in political philosophy, such as Rawls's Justice as Fairness and Nozick's Entitlement Theory, are committed to the conclusion that we should take political action to counteract the effects of technological displacement by undertaking such measures to increase workers' real incomes.

Bio: Howard Nye is an Associate Professor of Philosophy at the University of Alberta. He works primarily in the areas of normative ethics, practical ethics, and metaethics, and has related interests in political philosophy, the philosophy of mind, and decision theory. One line of Howard's current research concerns the ethics of consumption and ecological preservation, focusing on the argument that individual actors and institutions should reduce their contributions to harmful practices because their contributions have small chances of making very important differences. Another line of his research investigates what it takes for an entity to have desires and beliefs that represent the world in a sense that admits of genuine, underivative error. A third line of Howard's research investigates challenges to the common assumption that life is less of a morally important benefit to beings who lack the intellectual abilities of typical human adults.

September 11, 2019: Evolving Recurrent Neural Networks for Emergent Communication
- Joshua Sirota,
University of Alberta

Abstract: Emergent communication is a framework for machine language acquisition that has recently been utilized to train deep neural networks to develop shared languages from scratch and use these languages to communicate and cooperate. Previous work on emergent communication has utilized gradient-based learning. Recent advances in gradient-free evolutionary computation, though, provide an alternative approach for training deep neural networks which could be beneficial for emergent communication. Certain evolutionary algorithms have been shown to be robust to misleading gradients, which can present a problem in cooperative communication tasks. Additionally, some evolutionary algorithms have been shown to train quickly and require only CPUs, rather than the GPUs needed for gradient-based training.

This thesis addresses the question of whether or not a gradient-free evolutionary approach can be used as a training methodology for emergent communication amongst deep neural networks. The evolutionary approach that we use consists of a genetic algorithm to search for both the weights and architectures of these networks. We adapt evolutionary techniques which have previously been used to evolve individual agents so as to co-evolve pairs of agents which develop languages to play a repeated referential game. We empirically demonstrate that agents trained solely with evolution perform well above a random chance baseline, although our performance is worse than that previously achieved with gradient-based reinforcement learning. We show that evolving the architecture of these agents can improve their ability to perform cooperative communication-based tasks when compared to utilization of a fixed, hand-crafted architecture. The main contribution of this thesis is to show that an evolutionary approach can be used to train agents to communicate and suggests that these techniques could be useful for future research on cooperative multi-agent problems involving deep neural networks.
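The genetic-algorithm ingredients described (a population, truncation selection of an elite, mutation of elite individuals) can be sketched on a stand-in objective; the toy fitness below is not the referential game from the thesis, just a placeholder target vector:

```python
import random

random.seed(1)

# Minimal genetic algorithm over a weight vector, as a stand-in for the
# gradient-free evolution of network weights described above.
TARGET = [0.5, -1.0, 2.0, 0.0]  # hypothetical "ideal" weights

def fitness(weights):
    # Negative squared error: higher is better.
    return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

def mutate(weights, sigma=0.1):
    # Gaussian perturbation of every weight.
    return [w + random.gauss(0.0, sigma) for w in weights]

# Random initial population of 50 candidate weight vectors.
population = [[random.uniform(-2, 2) for _ in TARGET] for _ in range(50)]

for _ in range(200):
    population.sort(key=fitness, reverse=True)
    elite = population[:10]                              # truncation selection
    population = elite + [mutate(random.choice(elite))   # elites survive intact
                          for _ in range(40)]

best = max(population, key=fitness)
print(round(-fitness(best), 4))  # squared error of the best individual
```

In the thesis setting the genome additionally encodes architecture choices, and fitness is the pair's success rate at the referential game, but the select-mutate loop has the same shape.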

Bio: Joshua Sirota is a master's student at the University of Alberta working on communication and artificial general intelligence with Dr. Vadim Bulitko and Dr. Matthew Brown.

September 6, 2019: Development of the Linac-MR for continuous real-time MR Guided Radiotherapy
- Gino Fallone,
University of Alberta; Cross Cancer Institute

Abstract: The development and clinical potential of the Alberta linac-MR, the Aurora-RT™, is presented. Having been the first to merge an MRI with a linac, the laboratory built a further prototype and is installing a third, to be used clinically after regulatory clearance. The significantly improved soft-tissue visualization of live cine MR imaging while irradiating the tumour, even one moving due to breathing or other processes, offers the potential of significantly improving the detection of the tumour and the precise delivery of the therapeutic dose to it while avoiding surrounding healthy tissues. The Aurora-RT features an isocentrically mounted linac that rotates with a biplanar MR system, with the radiation beam's central axis parallel to the MR system's magnetic B0 field. Various AI-based tumour auto-contouring methods for real-time MR-guided RT currently under development will be presented with respect to reconstruction speed, various data acquisitions and performance. Also discussed are data-acquisition speed increases achieved with compressed sensing and principal component analyses. Novel direct-acquisition beam's-eye-view MR images during on-line delivery are also introduced and tested. Validations of Monte Carlo depth doses and ion-chamber angular dependences in parallel and perpendicular linac-to-B0 configurations are presented. Extremely accurate deterministic dose calculations within magnetic fields are presented for parallel and perpendicular configurations; these are verified against GEANT4 but computed tremendously faster. The Cross Cancer Institute is preparing clinical studies to show the potential of the Aurora-RT in improving outcomes for solid tumours in general, and even in decreasing the number of treatment sessions for some tumours.

Bio: Fallone is Professor and Director of Medical Physics, Dept of Oncology (University of Alberta), and Director of Medical Physics (Cross Cancer Institute). He previously held faculty positions at McGill University and at the University of Texas M.D. Anderson Cancer Center (Houston). Most recently, he successfully developed the first operational integrated linac-MR system (www.linac-MR.ca) and the first whole-body pre-commercial linac-MR system, and is presently completing the commercial system. The linac-MR takes continuous MRI images to guide, in real time, the linac radiation beam to the tumour while avoiding healthy tissues. This capability would result in decreased side effects and improved clinical outcomes for all cancer tumours treated by radiotherapy (RT). Fallone is senior author on over 70 peer-reviewed publications on the physics and engineering of the linac-MR integration. Over his career, Fallone has co-authored over 325 peer-reviewed research articles, 330 published abstracts, 315 posters, 272 conference presentations, 130 invited conference talks, and over 15 patent groups, and has directly supervised 80 graduate theses and 25 medical physics residents. He has received numerous international awards, the most notable being a Knighthood of the Order of Merit of Italy for his contributions to cancer research, especially MR-guided radiotherapy.

August 30, 2019: Brain-Chip interfacing: The future of Modern Neuroscience
- Naweed Syed,
Alberta Children’s Hospital Research Institute (University of Calgary)

Abstract: All brain functions are controlled by networks of neurons that are interconnected by complex circuits of synaptic connections. Perturbation of this connectivity during early development results in neurodevelopmental disorders such as Autism Spectrum Disorder, whereas in the adult brain, injury, trauma, and neurodegenerative disorders render the brain dysfunctional. Because natural replacement of damaged brain tissue seldom occurs, recovering brain function through regeneration alone has met with limited success. Our lab has thus opted to seek engineering solutions, developing various bionic hybrids that directly couple semiconductor chips with brain cells. This approach has allowed us to interface brain cells and brain slices directly with silicon chips and to monitor the activities of large neuronal ensembles over extended time periods. In addition to exploring the diagnostic potential of brain-chip interfacing for various brain ailments, this approach now paves the way for developing brain-controlled prosthetic devices and brain-machine interfacing. The future potential of this approach for artificial intelligence, machine learning, robotics, etc. will also be explored.

Bio: Dr. Naweed Syed is Professor and Scientific Director of the Alberta Children’s Hospital Research Institute, Cumming School of Medicine, University of Calgary. He was also the Postdoctoral Program Director, Office of the Vice President (Research) from 2012 to 2016, special advisor to the Vice President Research, Chief Scientist at the Creative Destruction Lab (CDL-Global), and is the Peak Scholar at the University of Calgary. Dr. Syed has been the recipient of many international and national awards including: Alfred P. Sloan Fellowship (USA), Parker B. Francis Fellowship (USA), Alberta Heritage Foundation for Medical Research Scholar, Senior Scholar and Scientist Awards, Canadian Institutes of Health Research (CIHR) Investigator Award and the Fellowship of the Royal College of Physicians of Edinburgh. Dr. Syed is also the recipient of the Canada-150 Medal from the Senate of Canada. Dr. Syed's team was the first to develop a bionic hybrid that enabled direct dialogue between brain cells and a silicon chip. This study was highlighted in Time Magazine, on the Discovery Channel, in the Globe and Mail, etc. Dr. Syed has published extensively in peer-reviewed scientific journals including Nature, Science, Neuron and the Journal of Neuroscience. He holds multiple research grants from the Canadian Institutes of Health Research and the Natural Sciences and Engineering Research Council of Canada, serves on several national and international grant panels, and is also a member of several scientific advisory boards.

August 23, 2019: Monte-Carlo Tree Search
- Solomon Eyal Shimony,
Ben Gurion University

Abstract: Monte-Carlo Tree Search (MCTS) is a popular search technique used for games with huge search spaces, especially when lacking good heuristic evaluation functions.

Most MCTS algorithms have a node-selection mechanism (UCT) derived from the UCB1 formula, even though it is well known that the scenario addressed by the UCB1 scheme is cumulative regret minimization, which is actually irrelevant to game-tree search.

Instead, we define batch value of perfect information (BVPI) in game trees as a generalization of value of computation as proposed by Russell and Wefald, and use it for selecting nodes to sample in MCTS. We show that computing the BVPI is NP-hard, but it can be approximated in polynomial time. In addition, we propose methods that intelligently find sets of fringe nodes with high BVPI, and quickly select nodes to sample from these sets. We apply our new BVPI methods to partial game trees, both in a stand-alone set of tests, and as a component of a full MCTS algorithm. Empirical results show that our BVPI methods outperform existing node-selection methods for MCTS in different scenarios.
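For readers unfamiliar with the UCB1 rule that UCT adapts, it can be sketched in a few lines. This is a minimal illustration of my own (not the speaker's code); the exploration constant and the tree bookkeeping are simplifying assumptions:

```python
import math

def ucb1_score(child_value_sum, child_visits, parent_visits, c=math.sqrt(2)):
    """UCB1 score: exploitation (mean value) plus an exploration bonus."""
    if child_visits == 0:
        return float("inf")  # unvisited children are always tried first
    mean = child_value_sum / child_visits
    bonus = c * math.sqrt(math.log(parent_visits) / child_visits)
    return mean + bonus

def select_child(children):
    """UCT selection step: return the index of the child maximizing UCB1.

    `children` is a list of (value_sum, visits) tuples; the parent's visit
    count is taken as the sum of its children's visits.
    """
    parent_visits = sum(v for _, v in children) or 1
    scores = [ucb1_score(s, v, parent_visits) for s, v in children]
    return scores.index(max(scores))
```

The talk's point is precisely that this bonus term is tuned to minimize cumulative regret across repeated pulls, whereas in game-tree search only the final move choice matters, which motivates value-of-information criteria such as BVPI instead.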

(This is an expanded version of a UAI-2017 paper, joint work with Shachaf Shperberg and Ariel Felner)

Bio: Professor Solomon Eyal Shimony (BSc in EE, Technion, 1982) works on both theoretical aspects and applications of artificial intelligence. He holds a Master's in computer graphics from BGU (1986) and a Ph.D. in artificial intelligence (Computer Science Department, Brown University, 1991) with a dissertation on (probabilistic) abductive reasoning. His current research on probabilistic reasoning includes decision-making under uncertainty, meta-reasoning, and flexible computation, and applications thereof in robotics, manufacturing inspection, and AI in games. He has been with the Computer Science Department at BGU since 1991, and has been serving as an associate editor for IEEE-SMC-B (now IEEE-CYB) since 2001.

August 9, 2019: Network archeology: On revealing the past of random trees
- Gabor Lugosi,
Pompeu Fabra University

Abstract: Networks are often naturally modeled by random processes in which nodes of the network are added one by one, according to some random rule. Uniform and preferential attachment trees are among the simplest examples of such dynamically growing networks. The statistical problems we address in this talk concern discovering the past of the network when a present-day snapshot is observed. Such problems are sometimes termed "network archeology". We present a few results showing that, even in gigantic networks, a lot of information is preserved from the very early days.
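As a toy illustration of the two growth models named above (my own sketch, not the speaker's), both attachment rules fit in a few lines:

```python
import random

def grow_tree(n, preferential=False, seed=0):
    """Grow a random tree on nodes 0..n-1, returning a child -> parent map.

    Uniform attachment: each new node picks its parent uniformly at random
    among existing nodes. Preferential attachment: the parent is picked
    (roughly) proportionally to its current degree, implemented by sampling
    a uniform entry of a list in which each node appears once per incident
    edge, plus once at birth.
    """
    rng = random.Random(seed)
    parents = {0: None}
    endpoints = [0]
    for node in range(1, n):
        if preferential:
            parent = rng.choice(endpoints)
        else:
            parent = rng.randrange(node)
        parents[node] = parent
        endpoints.extend([node, parent])
    return parents
```

The archeology question is then the inverse one: given only the final tree (with node labels hidden), infer which nodes arrived earliest, e.g. by ranking candidate roots.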

Bio: Gabor Lugosi is ICREA Research Professor at Pompeu Fabra University and Barcelona GSE Research Professor. Professor Lugosi has mostly worked on problems in probability, mathematical statistics, the mathematics of learning theory, information theory, and game theory. His research has been motivated by applications in telecommunications and computer science and also by game-theoretic learning. Recently he has mostly worked on high-dimensional problems in statistics, random graphs, "on-line" learning and sequential optimization, and inequalities in probability theory. He is Associate Editor of the journals Probability Theory and Related Fields and Annals of Applied Probability.

July 26, 2019: From Explainable AI to Human‐Centered AI
- Andreas Holzinger,
Medical University Graz

Abstract: In the medical domain, expectations of automatic AI systems are high, particularly in disciplines requiring prognostic models (oncology) and/or decision support (radiology, pathology). Due to the rising ethical, social, and legal issues governed by the European Union, the field of explainable AI is becoming extremely important. The problem of explainability is as old as AI itself, and classic rule-based approaches have been comprehensible "glass-box" approaches. Nevertheless, their weakness was in dealing with non-linearities and the intrinsic uncertainties of medical data. The progress of probabilistic machine learning and the availability of big data and computational power have made AI successful today, and in certain medical tasks deep learning even exceeds human performance. However, such approaches are considered "black-box" models, and even if we understand the underlying mathematical principles of such models, they still lack explicit declarative knowledge. Consequently, in the future we need context-adaptive procedures, i.e. systems that construct contextual explanatory models for classes of real-world phenomena. One possible step is linking probabilistic learning methods with large knowledge representations (ontologies), thus allowing us to understand how a machine decision has been reached. Our aim is to make machine decisions re-traceable, interpretable, and comprehensible, and to explain why a certain machine decision has been reached, because the "why" is often more important than the mere classification result. Re-traceability and interpretability on demand shall foster reliability and trust, ensuring that the human remains in control, so as to augment human intelligence with artificial intelligence and vice versa.

Bio: Andreas Holzinger leads the Human-Centered AI Lab (Holzinger Group) at the Medical University Graz, and since 2016 he has been Visiting Professor for machine learning in health informatics at Vienna University of Technology. Andreas was Visiting Professor for Machine Learning & Knowledge Extraction in Verona, at RWTH Aachen, at University College London, and at Middlesex University London. He serves as a consultant for the Canadian, US, UK, Swiss, French, Italian, and Dutch governments, for the German Excellence Initiative, and as a national expert in the European Commission. He is on the advisory board of the Artificial Intelligence Strategy “AI Made in Germany 2030” of the German Federal Government and of the “Artificial Intelligence Mission Austria 2030”. Andreas Holzinger promotes a synergistic approach to Human-Centred Artificial Intelligence (HC-AI) and has pioneered interactive machine learning (iML) with the human-in-the-loop. His goal is to augment human intelligence with artificial intelligence to help solve problems in health informatics. Andreas obtained a Ph.D. in Cognitive Science from Graz University in 1998 and a second Ph.D. in Computer Science from TU Graz in 2003. He serves as Austrian Representative for AI in IFIP TC 12, is an organizer of the IFIP Cross-Domain Conference “Machine Learning & Knowledge Extraction” (CD-MAKE), and is a member of IFIP WG 12.9 Computational Intelligence, the ACM, IEEE, GI, the Austrian Computer Society, and the Association for the Advancement of Artificial Intelligence (AAAI). More information: https://www.aholzinger.at

July 19, 2019: Psychopathic Killer Robots! A reasoned examination of A.I. ethics
- Nick Novelli,
University of Edinburgh

Abstract: In science fiction, it's common to see portrayals of artificial intelligences (A.I.) that are deserving of being treated morally, just as humans are. Is such a thing really possible? In moral theory, having certain experiences, such as pleasure, pain, desires, and emotions, is frequently taken to be the criterion for possessing moral standing. But whether an A.I. can feel a certain way is not directly observable. This is a significant practical concern now that A.I. are becoming better and better at acting in ways that tempt people to ascribe such states to them, but without any clear evidence that these states are actually present. As more advanced A.I. become part of our daily lives, we will need a principled way to tell whether these A.I. actually deserve to be treated as subjects of moral concern.

The Turing Test, which measures an A.I.'s conversational ability, is believed by some to be the best test of an A.I. achieving moral personhood. However, A.I. chatbots have been achieving greater and greater success at the Turing Test, but without giving us any reason to think that they have made any progress towards possessing the capacity to have felt experiences. Furthermore, many entities that do have that capacity, such as pre-linguistic children and non-human animals, have no ability to succeed at a Turing Test. The classic Turing Test is thus at once too easy and too hard to be a test of moral standing.

I will provide some more suitable candidates for behaviours that could not be easily produced in an unfeeling A.I. without desires and emotions, and based on this develop some ideas for a better test of A.I. moral standing.

Bio: Nick Novelli is a doctoral candidate at the University of Edinburgh in Scotland, where he is the deputy director of the Graduate Mind and Cognition Group. He has previously served as president of the University of Manitoba Ethics Centre Student Association. He is the co-founder of the R3T3: Rethinking, Reworking and Revolutionising The Turing Test conference, and has recently edited a special issue of IEEE Technology and Society Magazine on the impacts of technology on governance, politics and democracy. In addition to his work on A.I. moral capacities and moral standing, he has interests in technological ethics generally, in philosophy of music including A.I. musical creativity, and in cyborg-feminist post-genderism.

July 12, 2019: Worst AI project you’ve ever seen
- Alexandre Gosselin,
Tech2.Build

Abstract: I wish we had a clue how to best leverage AI for home builders and trades. What I do have, however, is the will, funding, and countless opportunities to engage via an Edmonton PropTech company, Tech2.Build. Be ready: I'll be performing the most uninspiring example of AI you've ever seen (a talking bear that attempts to sell you a home), and I'll show you where we are going next (which is ripe with AI opportunity). It will be up to you to tell me where and what I should do next when it comes to AI, if anything at all ;)

Bio: Alexandre Gosselin is the Chief Technology Officer of Tech2.Build, an Edmonton PropTech startup pursuing next-level software for home builders and trades by 2021. He forged this company from the tech solutions developed within Rohit Group of Companies (a real-estate company from Edmonton), whose efforts Tech2.Build is currently commercializing. Alex believes that true innovation requires personal risk and is motivated by the sublime feeling of accomplishment in completing complex solutions. He brings 7+ years of startup business experience and 10 years in IT leadership roles, and has brought 32 software development projects to successful completion. Alex holds a Bachelor of Science with a Major in Software Engineering from the University of Victoria.

June 28, 2019: Reinforcement Learning for Real Life
- Yuxi Li,
attain.ai

Abstract: Reinforcement Learning (RL) has achieved success in many domains such as AlphaGo, StarCraft, and recommendation with contextual bandits. What are the other real-life applications? What are the challenges and opportunities? In this talk, I will summarize the talks and themes of the ICML 2019 workshop Reinforcement Learning for Real Life that we organized. Our workshop brought together researchers and practitioners from industry and academia interested in applying RL to real-life scenarios. It was one of the most attended workshops at ICML 2019, with more than 550 people filling the room, and was highlighted in a VentureBeat report (https://bit.ly/31IszqZ). I will also provide my viewpoint on, and predictions for, future trends in RL applications. Link to the workshop website: https://sites.google.com/view/RL4RealLife.

Bio: Yuxi Li is the founder of attain.ai. He has more than ten years of experience in reinforcement learning, machine learning, and AI. He was a Co-Chair of the ICML 2019 Reinforcement Learning for Real Life Workshop, and published the influential and highly cited article Deep Reinforcement Learning: An Overview on arXiv. He was a Program Committee Member of AAAI 2019, and a co-organizer of the AI Frontiers Conference (aifrontiers.com) in Silicon Valley in 2017 and 2018. Yuxi received his PhD in computer science from the University of Alberta.

June 21, 2019: Beyond "Sequence-to-Sequence":
Graphical Modeling and Convolutions for Text Matching and Generation
- Di Niu,
University of Alberta

Abstract: Text matching is the problem of identifying the relationship between two text objects. Text generation is the problem of generating a text object given an input passage or sentence. Both text matching and generation are key to many NLP tasks, such as question answering (QA), query-document matching in search, machine reading comprehension (MRC), and chatbots. With the rise of deep neural networks, sequential modeling via recurrent neural networks and LSTMs has been dominant in text encoding. In this talk, I will describe our recent contributions to structural modeling of natural language sentences and documents, especially their graphical modeling, and the use of graph convolutional networks (GCNs) to embed text. By introducing proper graphical modeling for each specific task, we observe remarkable and sometimes even dramatic performance gains on a wide range of tasks, including long-document relationship identification, query-document matching, sentence-pair matching, question generation in machine reading comprehension, and search query generation and recommendation in search engines. Results of these studies have been published in ACL, KDD, WWW, CIKM, and other venues over the past two years.

Bio: Dr. Di Niu is an Associate Professor in the Department of Electrical and Computer Engineering at the University of Alberta. His research interests include NLP and text modeling, knowledge base construction, optimization methods for efficient and private distributed machine learning, statistical learning theory, and networks. He received the B.Eng. from Sun Yat-sen University, Guangzhou, China in 2005, and the MSc and PhD degrees from the University of Toronto in 2009 and 2013, respectively. He has coauthored more than 60 papers in top venues, including many top conferences in AI and data mining. He was the winner of the Extraordinary Award (No. 1 out of all 18 grant holders worldwide) of the CCF-Tencent Rhino Bird Open Grant 2016 for his work on news document understanding. His innovations in deep text matching and generation and in knowledge base construction for search engines have been adopted by Tencent, one of the largest IT organizations in China and in the world.

June 14, 2019: An Information Theoretic View on Learning of Artificial Neural Networks
- Rudolf Mathar,
RWTH Aachen University

Abstract: Multilayer Artificial Neural Networks (ANNs) have achieved amazing success in a variety of cognitive tasks like pattern recognition, speech analysis, autonomous driving, and gaming. Despite intensive research across different disciplines, a satisfactory theory that deeply explains the functionality and performance of ANNs still seems to be in its infancy. In this presentation, we model and analyse the learning process of ANNs using the concepts of entropy and mutual information. After introducing a stochastic model, we observe that training of ANNs is a succession of two phases: first, characteristics of unsupervised learning prevail, and thereafter those of supervised learning. Extensive numerical tests support this hypothesis. Theoretical upper and lower bounds for the test error probability are derived, which allow for assessing the progress of training of ANNs. We furthermore develop a model for the case that the training expert is error-prone. Practical experiments demonstrate that the corresponding bounds are attained in later phases of the training process.
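The information-theoretic quantities the abstract relies on can be estimated empirically from samples. The following generic sketch (my own, not the speaker's method) computes mutual information I(X;Y) in bits from paired discrete observations, e.g. quantized layer activations paired with labels:

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Empirical mutual information I(X;Y) in bits from (x, y) samples."""
    n = len(pairs)
    joint = Counter(pairs)            # counts of (x, y) pairs
    marg_x = Counter(x for x, _ in pairs)
    marg_y = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), c in joint.items():
        # p(x,y) / (p(x) p(y)) simplifies to c * n / (count_x * count_y)
        mi += (c / n) * math.log2(c * n / (marg_x[x] * marg_y[y]))
    return mi
```

For perfectly correlated binary variables this returns 1 bit; for independent ones it returns 0, so tracking the quantity over training epochs gives a crude window into the two-phase behaviour the talk describes.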

Bio: Rudolf Mathar received the Ph.D. degree from RWTH Aachen University in 1981. He held lecturer positions with Augsburg University and the European Business School. In 1989, he joined the Faculty of Natural Sciences, RWTH Aachen University, and in 1999 he held the International IBM Chair in computer science at Brussels Free University. In 2004, he was appointed Head of the Institute for Theoretical Information Technology in the Faculty of Electrical Engineering and Information Technology, RWTH Aachen University. Since 1994, he has held visiting professorships at The University of Melbourne, Canterbury University (Christchurch), and Johns Hopkins University (Baltimore). In 2002, he received the prestigious Vodafone Innovation Award, and in 2010 he was elected a member of the NRW Academy of Sciences and Arts. He is a co-founder of three spin-off enterprises. From 2011 to 2014, he served as Dean of the Faculty of Electrical Engineering and Information Technology, and in 2012 he was elected Speaker of the Board of Deans, RWTH Aachen University. From 2014 to 2018, he served as Vice-rector for research and structure at RWTH Aachen University. His research interests include information theory and mobile communication systems, particularly optimization, resource allocation, and access control. Recently he has started projects in the areas of compressed sensing, data science, and machine learning.

June 7, 2019: Deep RL and Auxiliary Tasks for Pommerman
- Pablo Hernandez and Bilal Kartal,
Borealis AI

Abstract: Pommerman is a difficult multi-agent game that was the subject of a NeurIPS-18 contest (also at NeurIPS-19). This talk begins by discussing why the game is so challenging, and by describing the award-winning agent that Chao Gao developed during his internship at Borealis AI. We then talk about three extensions to the deep reinforcement learning agent using auxiliary tasks. In particular, we show improvements to our A3C agent by 1) leveraging a weak UCT agent to provide demonstrations, 2) predicting when an episode will end, and 3) using explicit opponent modelling.

Bio: Pablo Hernandez Leal is a researcher at Borealis AI in Edmonton, in the group led by Matt Taylor. Previously, Pablo was a postdoctoral researcher at CWI, the national research institute for mathematics and computer science of the Netherlands. Pablo is interested in multiagent systems, reinforcement learning, and game theory.

May 31, 2019: Clinisys - Building intelligent solutions to predict patient’s health, while strengthening Edmonton's tech entrepreneurial community through collaboration
- Mehadi Sayed,
Clinisys

Abstract: Clinisys EMR Inc. has a dedicated team of professionals in healthcare, IT, and business. A developer of a variety of secure, scalable, and user-friendly e-healthcare solutions for the healthcare industry, Clinisys finds patterns within electronic records to help identify or predict a patient's future health. Clinisys now operates in four provinces and is expanding worldwide, with Mehadi building his company by drawing from Edmonton's highly educated talent pool and strengthening the tech entrepreneurial community by collaborating with fellow entrepreneurs, government, industry, and academia.

Bio: Mehadi Sayed is the President and CEO of Edmonton-based Clinisys EMR Inc. Mehadi founded Clinisys in 2011 with a vision to provide intelligent data solutions for the healthcare industry, and the company has expanded into various speciality verticals within the healthcare domain including data analytics, Electronic Medical Record systems and development of sensor-based medical devices. Recently, in a landmark achievement, Clinisys announced a multi-level partnership with Microsoft.

May 24, 2019: Is Separately Modeling Sub-Populations Beneficial for Sequential Decision-Making?
- Ilbin Lee,
University of Alberta

Abstract: In recent applications of Markov decision process (MDP), transition probabilities and rewards are often estimated from large-scale sequential data. In cases where sequences are obtained by simulating a single system, one can safely assume that all sequences follow the same model. However, in health applications, sequential data is collected from a population where each sequence corresponds to a person. Thus, there may be sub-populations that exhibit heterogeneous transition patterns. For example, in a large cohort with a certain disease, there may be patients whose disease status progresses faster than other patients. For such a group, estimating a separate transition probability matrix and applying the corresponding optimal treatment plan can improve their outcomes. In this work, I formally define the benefit of modeling heterogeneity and derive a probabilistic bound on the benefit. The theoretical bound gives us intuition that as the transition models of sub-populations become more “similar” to each other, modeling heterogeneity becomes less beneficial. I present empirical analysis illustrating the theoretical results and show that we need big enough samples to identify the benefit of modeling heterogeneity. I also suggest a method to estimate the benefit of modeling heterogeneity based on bootstrapping and empirically illustrate it.
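To make the pooled-versus-separate estimation question concrete, here is a minimal sketch (illustrative only; the function, the two-state disease model, and the toy sequences are my own, not the speaker's):

```python
def estimate_transition_matrix(sequences, n_states):
    """Maximum-likelihood estimate of P[s][s'] from observed state sequences."""
    counts = [[0.0] * n_states for _ in range(n_states)]
    for seq in sequences:
        for s, s_next in zip(seq, seq[1:]):
            counts[s][s_next] += 1
    for row in counts:                      # normalize each row to probabilities
        total = sum(row)
        if total > 0:
            for j in range(n_states):
                row[j] /= total
    return counts

# Two hypothetical sub-populations with different dynamics
# (state 0 = mild disease, state 1 = severe):
fast = [[0, 1, 1], [0, 1, 1]]   # progresses quickly to the severe state
slow = [[0, 0, 0], [0, 0, 1]]   # mostly stays mild
pooled = estimate_transition_matrix(fast + slow, n_states=2)
per_group = estimate_transition_matrix(fast, n_states=2)
# pooled[0][1] blends the two groups (0.5), while per_group[0][1] is 1.0,
# so a policy optimized on the pooled model can mis-serve the fast group.
```

This is exactly the trade-off the bound in the talk quantifies: as the sub-population matrices become more similar, the pooled and per-group estimates converge and the benefit of modeling heterogeneity shrinks.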

Bio: Ilbin Lee is an assistant professor in operations management at the University of Alberta School of Business. He was a postdoctoral fellow in the School of Industrial and Systems Engineering at Georgia Institute of Technology. He obtained his PhD in Industrial and Operations Engineering at the University of Michigan in 2015. His research interests include sequential decision-making based on data and prediction, computational optimization, and wildfire and healthcare applications.

May 17, 2019: Theoretical and Empirical Advancements in Computational Lexical Semantics
- Bradley Hauer,
University of Alberta

Abstract: This presentation will summarize two recent studies investigating important problems in computational lexical semantics: most frequent sense detection, and homonymy classification. In the first part, I discuss the use of two sources of semantic information: the most frequently co-occurring words (companions), and the most frequent translation. I present two novel methods that incorporate these concepts and advance the state of the art. In the second part, I propose four hypotheses that characterize the unique behavior of homonyms in the context of translations, discourses, collocations, and sense clusters. The results of the experiments using a new annotated homonym resource provide strong empirical evidence for the hypotheses. This study represents a step towards a computational method for distinguishing between homonymy and polysemy, and constructing a definitive inventory of coarse-grained senses.

Bio: Bradley Hauer is a doctoral student at the University of Alberta Department of Computing Science, working with Greg Kondrak. He has published papers on a wide variety of topics, including generating phonetic spellings of words, identifying translations of words from unstructured text, and analyzing a centuries-old manuscript written in an unknown script. His current research is focused on foundational issues of lexical semantics. His most recent paper was nominated for the IEEE ICSC 2019 Best Paper Award.

May 10, 2019: New Ideas for Any-angle Pathfinding
- Daniel Harabor,
Monash University

Abstract:

Bio:

May 3, 2019: Human Brain Mapping and Decoding
- Muhammad Tony Yousefnezhad,
University of Alberta

Abstract: One of the most significant challenges of our century is comprehending how the human brain works. As an interdisciplinary field of study, computational neuroscience can break neural codes by employing concepts from mathematics, physics, psychology, psychiatry, and machine learning. In this talk, we focus on developing modern machine learning approaches for analyzing neural activities. We first present the current challenges in decoding the human brain and then introduce novel techniques that can analyze cognitive tasks at different levels, such as functional alignment, feature selection, and multi-voxel pattern analysis. Further, we will briefly introduce easy fMRI, an open-source toolbox for brain mapping and decoding tasks.

Bio: Tony (aka Muhammad Yousefnezhad) is a postdoctoral fellow at the University of Alberta under the supervision of Prof. Russ Greiner (Department of Computing Science) and Prof. Andrew Greenshaw (Department of Psychiatry). He received his Ph.D. from the Department of Computer Science and Technology at Nanjing University of Aeronautics and Astronautics (China) in 2018. His primary research interests lie at the intersection of machine learning and computational neuroscience, where he is creating techniques for decoding patterns of the human brain by exploiting distinctive biomarkers, i.e., fMRI, EEG, MEG, etc. He has published several related papers in top conferences and journals, such as NIPS, AAAI, SDM, and IEEE Transactions on Cybernetics. He is the founder of the easy fMRI project, an open-source toolbox for analyzing task-based fMRI datasets.

April 26, 2019: Reachability-Based Robotic Safety and Reinforcement Learning
- Mo Chen,
Simon Fraser University

Abstract: Autonomous systems are becoming pervasive in everyday life, and many of these systems are safety-critical and complex. To provide safety guarantees, formal verification methods such as reachability analysis are needed. However, verification is computationally intractable for complex systems. In the first part of the seminar, I present recent techniques that leverage system structure to make reachability analysis tractable. To perform complex tasks with complex systems, reinforcement learning (RL) has demonstrated great potential. However, performing RL on robotic systems is especially challenging due to the poor sample complexity and generalizability of RL. In the second part of the seminar, I discuss recent advances in effectively incorporating prior knowledge about robotic systems to greatly improve sample complexity.

Bio: Mo Chen is an Assistant Professor in the School of Computing Science at Simon Fraser University, Burnaby, BC, Canada, where he directs the Multi-Agent Robotic Systems Lab. He completed his PhD in the Electrical Engineering and Computer Sciences Department at the University of California, Berkeley with Claire Tomlin in 2017, and received his BASc in Engineering Physics from the University of British Columbia in 2011. From 2017 to 2018, Mo was a postdoctoral researcher in the Aeronautics and Astronautics Department in Stanford University with Marco Pavone. His research interests include multi-agent systems, safety-critical systems, reinforcement learning, and human-robot interactions.

April 25, 2019: Nearest Neighbour Search and Generative Modelling
- Ke Li

Abstract:

Bio:

April 24, 2019: Zero-Shot Learning: Generalized Information Adaptation across Classes
- Yuhong Guo,
Carleton University

Abstract:

Bio:

April 23, 2019: Reinforcement Learning and Physical Robots
- Rupam Mahmood,
University of Alberta

Abstract:

Bio:

April 5, 2019: Rule of Law, Rule of Code? Fair Decision-Making Processes in Law and Computer Science
- Jennifer Raso,
University of Alberta

Abstract:

As you read this, algorithmic systems are evaluating who enters Canada, who receives disability benefits, and who is targeted for police scrutiny. These are legal decisions: they require immigration, welfare, and criminal laws to be interpreted and applied to individual cases. As such, they are ruled by “administrative law,” which specifies how officials ought to proceed as they make such decisions. Administrative law assumes that front-line officials reach outcomes independently. Today, however, such decisions are co-created by humans and a widening array of assistive technologies, from case management software to risk prediction matrices. Technology has long influenced how law functions at “street level,” especially for marginalized people. Yet, the newest tools more directly affect legal outcomes. Their objective appearance encourages front-line officials to favour them over other evidence, and officials may have few realistic opportunities to correct software-generated outcomes. In this presentation, I will share some of my research into how humans and technological systems co-produce decisions, and the new questions this arrangement raises for interdisciplinary research in law, sociology, and computer science. In doing so, I will review how developments in algorithmic decision-making systems have been critically evaluated on bias and transparency issues. I will then show how such systems also raise fundamental administrative law issues, specifically when it comes to the processes that are required for an outcome to be considered “fair” in law. This talk will close by offering preliminary reflections on how the legal model of fair processes might resemble, and how it might diverge from, notions of appropriate or fair process in computer science.

Bio:

Dr. Jennifer Raso is an Assistant Professor at the University of Alberta’s Faculty of Law investigating the relationship between discretion, algorithmic systems, and administrative law. She is particularly intrigued by how humans/non-humans collaborate and diverge as they produce institutional decisions, and the consequences of this hybrid arrangement for procedural fairness and substantive justice. This work builds on Dr. Raso’s doctoral research, which included a qualitative socio-legal study of how municipal caseworkers locate and use discretion to deliver the notoriously rule-bound Ontario Works program. An award-winning scholar, her research has been funded by the Social Sciences and Humanities Research Council (Canada) and the Endeavour Fellowships Program (Australia), and recognized by the Canadian Law and Society Association (best article prize, 2018) and the University of Cambridge (Richard Hart Prize, 2016). In 2017-18, she was a postdoctoral fellow at the Allens Hub for Technology, Law and Innovation at UNSW Law School and a visiting researcher at Yale Law School’s Information Society Project. Before pursuing graduate studies, Dr. Raso litigated social welfare, administrative, and human rights matters with the City of Toronto's Legal Services Division. Her scholarship appears in the Canadian Journal of Law & Society, the Journal of Law & Equality, and PoLAR: Political and Legal Anthropology Review.