Past Seminars: 2020

All available AI Seminar recordings are posted on the Amii YouTube channel.
They can also be accessed by clicking on the individual presentation titles below.

December 4, 2020: Do Simpler Models Exist and How Can We Find Them?
-
Cynthia Rudin, Duke University

Abstract: While the trend in machine learning has been towards more complex hypothesis spaces, it is not clear that this extra complexity is always necessary or helpful for many domains. In particular, models and their predictions are often made easier to understand by adding interpretability constraints. These constraints shrink the hypothesis space; that is, they make the model simpler. Statistical learning theory suggests that generalization may be improved as a result as well. However, adding extra constraints can make optimization (exponentially) harder. For instance, it is much easier in practice to create an accurate neural network than an accurate and sparse decision tree. We address the following question: Can we show that a simple-but-accurate machine learning model might exist for our problem, before actually finding it? If the answer is promising, it would then be worthwhile to solve the harder constrained optimization problem to find such a model. In this talk, I present an easy calculation to check for the possibility of a simpler model. This calculation indicates that simpler-but-accurate models do exist in practice more often than you might think. Time-permitting, I will then briefly overview our progress towards the challenging problem of finding optimal sparse decision trees.

Bio: Cynthia Rudin is a professor of computer science, electrical and computer engineering, and statistical science at Duke University. Previously, Prof. Rudin held positions at MIT, Columbia, and NYU. Her degrees are from the University at Buffalo and Princeton University. She is a three-time winner of the INFORMS Innovative Applications in Analytics Award. She has served on committees for INFORMS, the National Academies, the American Statistical Association, DARPA, the NIJ, and AAAI. She is a fellow of both the American Statistical Association and Institute of Mathematical Statistics. She was a Thomas Langford Lecturer at Duke University for 2019-2020.

November 27, 2020: An operator view of policy gradient methods
-
Marlos C. Machado, Google Brain

Abstract: We cast policy gradient methods as the repeated application of two operators: a policy improvement operator 𝙸, which maps any policy π to a better one 𝙸π, and a projection operator 𝙿, which finds the best approximation of 𝙸π in the set of realizable policies. We use this framework to introduce operator-based versions of traditional policy gradient methods such as REINFORCE and PPO, which leads to a better understanding of their original counterparts. We also use the understanding we develop of the role of 𝙸 and 𝙿 to propose a new global lower bound of the expected return. This new perspective allows us to further bridge the gap between policy-based and value-based methods, showing how REINFORCE and the Bellman optimality operator, for example, can be seen as two sides of the same coin.
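
As a toy illustration of the decomposition (assumptions mine, not the paper's operators): below, 𝙸 reweights a tabular policy by exponentiated action values, and 𝙿 projects onto the realizable set, which is the identity here because tabular policies are closed under 𝙸. Repeated application of 𝙿∘𝙸 then drives the policy toward the best action.

    import numpy as np

    # Hypothetical illustration of the operator view: improvement operator I
    # reweights a policy by exponentiated action values; projection P maps the
    # result back to the realizable set (trivial for tabular policies).

    def improvement_I(pi, q, eta=0.1):
        w = pi * np.exp(eta * q)
        return w / w.sum()

    def projection_P(pi_improved):
        return pi_improved  # identity here; nontrivial for parametric classes

    q = np.array([1.0, 0.5, -0.2])   # assumed-known action values
    pi = np.ones(3) / 3              # uniform initial policy
    for _ in range(500):
        pi = projection_P(improvement_I(pi, q))
    print(pi)                        # mass concentrates on the best action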

Bio: Marlos C. Machado is a research scientist at Google Brain, Montreal. Marlos received his Ph.D. from the Department of Computing Science at the University of Alberta. His research interests lie broadly in artificial intelligence, with a particular focus on reinforcement learning, including topics like representation learning, generalization, exploration, and temporal abstraction.

November 20, 2020: On the complexity of learning good policies with and without rewards
-
Emilie Kaufmann, University of Lille, France

Abstract: This talk will revolve around two performance criteria that have been studied in the context of episodic reinforcement learning: an old one, Best Policy Identification (BPI) [Fiechter, 1994], and a new one, Reward Free Exploration (RFE) [Jin et al., 2020]. We will see that a variant of the very first BPI algorithm can actually be used for the more challenging reward-free exploration problem. This Reward-Free UCRL algorithm, which adaptively explores the MDP and adaptively decides when to stop exploration, requires fewer exploration episodes than state-of-the-art algorithms. We will then present alternative algorithms for the BPI objective and discuss the relative complexity of BPI and RFE.

Bio: Emilie Kaufmann is a CNRS researcher in the CRIStAL laboratory at the University of Lille. She is also a member of the Inria Scool team (formerly SequeL), whose expertise is in sequential decision making. She has worked extensively on the stochastic multi-armed bandit problem, in particular towards getting a better understanding of the difference between reward maximization and pure-exploration problems. She has also recently worked on exploration for reinforcement learning.

November 13, 2020: Compute & Data at Your Fingertips
-
Masaf Dawood, Sindhu Adini, Chloe Tottem, SpringML & Google Cloud

Abstract: SpringML and Google Cloud focus on helping researchers with digital transformation and innovation, reducing costs, and expediting and supporting their daily work. They would like to share with the audience the power and benefits of a highly scalable, flexible, and composable research platform that not only allows you to access data and run models from anywhere, but also lets you scale compute power up and down as needed for complex analysis. This seminar will show how, with a central platform for all your data, the data harmonization process can be automated; how you can share tools, data, and research findings with your group, collaborators, or the broader research community; and how you can combine data from various sources to complete several tasks easily and efficiently. We build the research platform and connect data, integration, analytics, and machine learning into one homogeneous system. With GCP's suite of services, this platform is not a pipe dream: it can be designed and developed rapidly, in weeks rather than months. We look forward to discussing this practical solution with you and how to leverage it to accelerate your research.

Presenter Bios:
Masaf Dawood: Masaf is the Director of Cloud Services for SpringML in Canada. He is a technology leader with over 20 years of progressive experience in technology strategy and management, with a focus on large-scale industrial infrastructure, and has a proven track record of digital service delivery and program management in industries as diverse as high technology, manufacturing, automotive, utilities, and the public sector. He has industrialized complex technologies to accelerate clients' digital transformation journeys from legacy to the leading edge, and by building key relationships and identifying and integrating leading-edge technologies, he has delivered service excellence to end customers. He is recognized as an internal expert on multiple technologies and is active within professional and executive (CIO-level) communities in Canada and the USA.

Sindhu Adini: Sindhu Adini is the Director of Cloud Services at SpringML, with the primary goal of delivering customer-focused solutions and expansion strategy in the healthcare and life sciences space. She has worked across different domains and technologies and has delivered solutions on AWS and Google Cloud, including precision medicine platforms and data commons platforms, both focused on increasing and improving efficiencies for research organizations.

Chloe Tottem: Chloe is the Higher Education Account Manager at Google Cloud, with a primary focus on helping higher education institutions in Western Canada truly benefit from our tools and services for education and research. Coming from an OpenStack background, Chloe believes in a hybrid-cloud approach and democratizing access to technology, tools, and data to improve and accelerate research outcomes.

August 28, 2020: Representation and General Value Functions
-
Craig Sherstan, Sony AI

Abstract: Research in artificial general intelligence aims to create agents that can learn from their own experience to solve arbitrary tasks in complex and dynamic settings. To do so effectively and efficiently, such an agent must be able to predict how its environment will change both dependently and independently of its own actions. General value functions (GVFs) are one approach to representing such relationships. A single GVF poses a predictive question defined by three components: a behavior (policy), a prediction timescale, and a prediction target (cumulant). Estimated answers to these questions can be learned efficiently from the agent’s own experience using temporal-difference learning methods. The agent’s collection of GVF questions and corresponding answers can be viewed as forming a predictive model of the agent’s interaction with its environment. Ultimately, such a model may enable an agent to understand its environment and make decisions therein.
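
To make the three-part question concrete, here is a minimal sketch (assumptions mine, not code from the talk) of an on-policy TD(0) update for a single GVF with linear function approximation, where the cumulant and gamma define the prediction target and timescale:

    import numpy as np

    # Minimal on-policy TD(0) update for one GVF with linear features. The GVF
    # question is defined by the behavior policy that generates the data, the
    # timescale gamma, and the cumulant signal.

    def gvf_td0_update(w, x, x_next, cumulant, gamma, alpha=0.1):
        # w estimates the GVF answer as w @ x
        td_error = cumulant + gamma * (w @ x_next) - (w @ x)
        return w + alpha * td_error * x

    # toy usage: predict the discounted sum of a hypothetical sensor reading
    rng = np.random.default_rng(0)
    w = np.zeros(4)
    x = rng.random(4)
    for _ in range(1000):
        x_next = rng.random(4)
        w = gvf_td0_update(w, x, x_next, cumulant=x_next.sum(), gamma=0.9)
        x = x_next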

Bio: Craig Sherstan is currently a Research Scientist at Sony AI and is defending his PhD dissertation in Computing Science at the University of Alberta. He is supervised by Patrick Pilarski as part of the Bionic Limbs for Improved Natural Control (BLINC) Lab and the Reinforcement Learning and Artificial Intelligence Lab (RLAI), and is a Vanier Scholar. Craig's research has focused on developing agents which can continually and incrementally construct predictive representations of themselves and their world.

July 31, 2020: Leveraging Translations for Word Sense Disambiguation
-
Yixing Luan, University of Alberta

Abstract: Word sense disambiguation (WSD) is one of the core tasks in Natural Language Processing and its objective is to identify the correct sense of a content word in context. Although WSD is a monolingual task, it has been conjectured that multilingual information, e.g. translations, can be helpful. However, existing WSD systems rarely consider multilingual information, and no effective method has been proposed for improving WSD with machine translation. In this work, we propose methods of leveraging translations from multiple languages as a constraint to boost the accuracy of existing WSD systems. To this end, we also develop a novel knowledge-based word alignment algorithm, which outperforms an existing word alignment tool in our intrinsic and extrinsic evaluations. Since our approach is language-independent, we perform WSD experiments on standard benchmark datasets representing several languages. The results demonstrate that our methods can consistently improve the performance of various WSD systems, and obtain state-of-the-art results in both English and multilingual WSD.
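
One hedged way to picture the constraint (an illustrative sketch of mine, not the paper's algorithm): filter a base system's candidate senses to those whose known lexicalizations in other languages match the observed translations, falling back to the unconstrained ranking if nothing survives.

    # Hypothetical translations-as-constraint reranking. The data structures
    # and names are illustrative, not from the paper.

    def rerank_with_translations(candidates, observed_translations, sense_lex):
        """candidates: list of (sense_id, base_score);
        sense_lex: sense_id -> set of possible translations."""
        def compatible(sense_id):
            return any(t in sense_lex.get(sense_id, set())
                       for t in observed_translations)
        filtered = [c for c in candidates if compatible(c[0])]
        pool = filtered or candidates      # fall back if constraint removes all
        return max(pool, key=lambda c: c[1])[0]

    senses = [("bank%river", 0.4), ("bank%finance", 0.6)]
    lex = {"bank%river": {"rive", "Ufer"}, "bank%finance": {"banque", "Bank"}}
    print(rerank_with_translations(senses, {"rive"}, lex))  # -> bank%river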

Bio: Yixing Luan is a thesis-based M.Sc. candidate in Computing Science at the University of Alberta, supervised by Dr. Greg Kondrak. His research interest includes artificial intelligence and natural language processing with a focus on lexical semantics. He completed his bachelor's degree at Hokkaido University, Japan.

July 10, 2020: Computational Pathology – From Nuclei Segmentation to Precision Oncology
-
Neeraj Kumar, University of Alberta

Abstract: With improvements in computer vision techniques and hardware, some of the problems of manual assessment of histology images, such as inter- and intra-observer variability, inability to assess subtle visual features, and the time taken to examine whole slides, are being alleviated by computational pathology. A key module in several computational pathology pipelines is the one that segments nuclei. Accurate nuclei segmentation could facilitate downstream analysis of tissue samples, not only for assessing cancer grades or stages but also for predicting tumor recurrence and treatment effectiveness and for quantifying intra-tumor heterogeneity. Identifying different types of nuclei, such as epithelial, neutrophils, lymphocytes, macrophages, etc., could yield information about the host immune response that could advance our understanding of the mechanisms governing treatment resistance and adaptive immunity in cancers of various organs. This talk will give an overview of state-of-the-art machine learning algorithms for nuclei segmentation and classification from H&E-stained tissue images, while providing insights into the process of creating one of the largest nuclei segmentation datasets and organizing two international competitions on this theme. I will also discuss a few nuclei segmentation use cases, including automatic staging of colorectal tumors, prostate cancer recurrence prediction, and intra-tumor heterogeneity quantification.

Bio: Neeraj Kumar recently joined the Greiner lab at the University of Alberta as a postdoctoral fellow. His research interests include computational pathology, medical image processing, machine learning for healthcare and medicine, and bioinformatics. Previously, he was a research associate at the Center for Computational Imaging and Personalized Diagnostics at Case Western Reserve University, Cleveland. He completed his Ph.D. in Electronics and Electrical Engineering at the Indian Institute of Technology Guwahati in 2017. After his Ph.D., he joined the cancer education and career development program of the Department of Pathology, University of Illinois at Chicago, as a research fellow, addressing clinically and biologically relevant cancer research questions by capitalizing on his expertise in image processing and machine learning. He was also a visiting researcher at the Institute of Research in Communications and Cybernetics at Ecole Centrale Nantes, France, and at the Beckman Institute of the University of Illinois at Urbana-Champaign. He is a recipient of numerous prestigious awards, including an R25 trainee award from the NCI (NIH, USA), the Erasmus Mundus Heritage Fellowship, a Microsoft Research India Ph.D. Fellowship, and the best poster award at the University of Illinois Cancer Center's annual research symposium.

June 19, 2020: Making an Impact? A Tale of Two Projects
-
Kevin Leyton-Brown, University of British Columbia

Abstract: How can AI researchers leverage their specialized knowledge to make a social impact? The notion is beguiling but the reality is complicated. This talk contrasts two strategies that are often employed, loosely described as "write a paper" and "be an entrepreneur", drawing on lessons from two very different projects in electronic market design. The first project focused on developing new theoretical ideas for incentivizing local food pantries to honestly report demand to a centralized food bank. The second project was more practical; it aimed to design an electronic market for agricultural commodities in Uganda that could operate over low-end SMS phones. After discussing technical innovations, lessons learned, and lingering disappointments from both projects, the talk will conclude with some overall thoughts about strategies researchers might employ in pursuit of successful AI for Social Impact projects and how these can be taught in our courses.

Bio: See bio at https://www.cs.ubc.ca/~kevinlb/bio.html

June 12, 2020: Grounding natural language to 3D
-
Angel Chang, Simon Fraser University

Abstract: In popular imagination, household robots that we can instruct to "bring me my red mug from the kitchen" or ask "where are my glasses?" are common. For a robot to execute such an instruction or answer such a question, it needs to parse and interpret natural language, understand the 3D environment it is in (e.g. what objects exist and how they are described), navigate to locate the target object, and then formulate an appropriate response. While there has been previous work on the language-to-vision grounding problem in the 2D domain, there is much less work on methods operating with 3D representations such as those required by the scenarios in these examples. As a first step in this direction, we introduce the new task of 3D object localization in scenes using natural language descriptions. As input, we assume a point cloud of a scanned 3D scene along with a freeform text description of a specified target object. Through crowdsourcing, we collect a dataset of natural language descriptions of objects in the ScanNet dataset and create a benchmark with several baseline methods for this challenging task of predicting the 3D bounding box of a referred object based on a natural language description. I will conclude by briefly summarizing various other ongoing projects in the area of grounding natural language to 3D interactive environments.
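
For a sense of the task's interface, here is a toy retrieval-style baseline of my own construction (not one of the benchmark's methods): encode the description and each candidate box into a shared space and return the highest-scoring box. Both encoders below are crude stand-ins.

    import numpy as np

    def encode_text(description):          # toy stand-in for a text encoder
        v = np.array([len(description), description.count(" ") + 1,
                      1.0, 1.0, 1.0, 1.0], dtype=float)
        return v / np.linalg.norm(v)

    def encode_box(box):                   # box = (cx, cy, cz, dx, dy, dz)
        v = np.array(box, dtype=float)     # toy stand-in for a 3D encoder
        return v / np.linalg.norm(v)

    def localize(boxes, description):
        t = encode_text(description)
        scores = [encode_box(b) @ t for b in boxes]
        return boxes[int(np.argmax(scores))]

    boxes = [(1.0, 0.5, 0.4, 0.3, 0.3, 0.5), (3.2, 2.0, 0.8, 1.5, 0.9, 0.7)]
    print(localize(boxes, "the red mug on the kitchen counter"))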

Bio: Dr. Angel Xuan Chang is an Assistant Professor of Computer Science at Simon Fraser University. Dr. Chang’s research focuses on the intersection of natural language understanding, computer graphics, and AI. Her research connects language to 3D representations of shapes and scenes and addresses grounding of language for embodied agents in indoor environments. She has worked on methods for synthesizing 3D scenes and shapes, 3D scene understanding, and helped to create various datasets for 3D deep learning (ShapeNet, ScanNet, Matterport3D). She received her Ph.D. in Computer Science from Stanford, under the supervision of Chris Manning. Dr. Chang received the SGP 2018 Dataset Award for her work on the ShapeNet dataset. She is a recipient of the TUM-IAS Hans Fischer Fellowship and a Canada CIFAR AI Chair.

May 22, 2020: Faster Algorithms for Deep Learning?
-
Mark Schmidt, University of British Columbia

Abstract: The last 10 years have seen a revolution in stochastic gradient methods, with variance-reduced methods like SAG/SVRG provably achieving faster convergence rates than all previous methods. These methods give dramatic speedups in a variety of applications, but have had virtually no impact on the practice of training deep models. We hypothesize that this is due to the over-parameterized nature of modern deep learning models, where the models are so powerful that they could fit every training example with zero error (at least theoretically). Such over-parameterization nullifies the benefits of variance-reduced methods, because in some sense it leads to "easier" optimization problems. In this work, we present algorithms specifically designed for over-parameterized models. This leads to methods that provably achieve Nesterov acceleration, methods that automatically tune the step-size as they learn, and methods that achieve superlinear convergence with second-order information.
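
One of the step-size-tuning ideas alluded to can be sketched as SGD with a backtracking Armijo line search on each sampled example (a minimal sketch under my assumptions, with a least-squares loss for concreteness; not necessarily the talk's exact method). Interpolation, fitting every training example exactly, is what makes restarting from a large step size sound.

    import numpy as np

    def sgd_armijo(w, X, y, steps=2000, eta0=1.0, c=0.5, beta=0.8):
        n = len(y)
        for _ in range(steps):
            i = np.random.randint(n)
            r = X[i] @ w - y[i]
            loss, grad = 0.5 * r * r, r * X[i]
            g2 = grad @ grad
            eta = eta0
            # backtrack until the Armijo sufficient-decrease condition holds
            while 0.5 * (X[i] @ (w - eta * grad) - y[i]) ** 2 > loss - c * eta * g2:
                eta *= beta
            w = w - eta * grad
        return w

    rng = np.random.default_rng(1)
    X = rng.standard_normal((100, 5))
    y = X @ rng.standard_normal(5)      # realizable data: interpolation holds
    w = sgd_armijo(np.zeros(5), X, y)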

Bio: Mark Schmidt is an associate professor in the Department of Computer Science at the University of British Columbia. His research focuses on machine learning and numerical optimization. He is a Canada Research Chair, an Alfred P. Sloan Fellow, and a Canada CIFAR AI Chair with the Alberta Machine Intelligence Institute (Amii), and was awarded the most recent SIAM/MOS Lagrange Prize in Continuous Optimization with Nicolas Le Roux and Francis Bach.

April 24, 2020: Scaling Probabilistically Safe Learning to Robotics
-
Scott Niekum, UT Austin

Abstract: Before learning robots can be deployed in the real world, it is critical that probabilistic guarantees can be made about the safety and performance of such systems. In recent years, safe reinforcement learning algorithms have enjoyed success in application areas with high-quality models and plentiful data, but robotics remains a challenging domain for scaling up such approaches. Furthermore, very little work has been done on the even more difficult problem of safe imitation learning, in which the demonstrator's reward function is not known. This talk focuses on new developments in three key areas for scaling safe learning to robotics: (1) a theory of safe imitation learning; (2) scalable reward inference in the absence of models; (3) efficient off-policy policy evaluation. The proposed algorithms offer a blend of safety and practicality, making a significant step towards safe robot learning with modest amounts of real-world data.
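
As background for the third thread, here is a minimal sketch (assumptions mine, not the talk's estimators) of ordinary per-trajectory importance sampling, the simplest off-policy policy evaluation method: reweight each trajectory's return by the ratio of its probability under the evaluation and behavior policies.

    import numpy as np

    def is_estimate(trajectories, pi_e, pi_b, gamma=0.99):
        """trajectories: list of [(state, action, reward), ...];
        pi_e, pi_b: functions (action, state) -> probability."""
        vals = []
        for traj in trajectories:
            rho, ret, g = 1.0, 0.0, 1.0
            for s, a, r in traj:
                rho *= pi_e(a, s) / pi_b(a, s)   # importance weight
                ret += g * r                     # discounted return
                g *= gamma
            vals.append(rho * ret)
        return float(np.mean(vals))

    # toy usage: two actions, state-independent policies
    pi_e = lambda a, s: 0.8 if a == 0 else 0.2
    pi_b = lambda a, s: 0.5
    trajs = [[(0, 0, 1.0), (0, 1, 0.0)], [(0, 0, 1.0), (0, 0, 1.0)]]
    print(is_estimate(trajs, pi_e, pi_b, gamma=0.9))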

Bio: Scott Niekum is an Assistant Professor and the director of the Personal Autonomous Robotics Lab (PeARL) in the Department of Computer Science at UT Austin. He is also a core faculty member in the interdepartmental robotics group at UT. Prior to joining UT Austin, Scott was a postdoctoral research fellow at the Carnegie Mellon Robotics Institute and received his Ph.D. from the Department of Computer Science at the University of Massachusetts Amherst. His research interests include imitation learning, reinforcement learning, and robotic manipulation. Scott is a recipient of the 2018 NSF CAREER Award and 2019 AFOSR Young Investigator Award.

April 17, 2020: Search-Based Unsupervised Text Generation
-
Lili Mou, University of Alberta

Abstract: In this talk, I will present three recent papers of mine accepted at ACL'20 on search-based unsupervised text generation. Unlike traditional sequence-to-sequence models, we tackle text generation in an unsupervised way without parallel training corpora. This is accomplished by, first, heuristically defining a search objective, involving language fluency, semantic similarity, etc., and then, performing discrete search in the word space. Under this general framework, I will present three applications in text generation, namely, paraphrasing, summarization, and text simplification. I will also discuss drawbacks and potential future work in this direction.
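
A minimal sketch of the general framework (with toy scorers standing in for a language model and a semantic-similarity model, both my assumptions): score candidates by fluency times similarity to the source and hill-climb over word-level edits; deleting words crudely mimics the summarization setting.

    import random

    def fluency(words):                     # toy: shorter is "more fluent"
        return 1.0 / (1 + len(words))

    def similarity(words, source):          # toy: content-word overlap
        content = lambda ws: {w for w in ws if len(w) > 3}
        src = content(source)
        return len(content(words) & src) / max(1, len(src))

    def score(words, source):
        return fluency(words) * similarity(words, source)

    def propose_edit(words):                # toy edit: delete a random word
        if len(words) <= 1:
            return list(words)
        i = random.randrange(len(words))
        return words[:i] + words[i + 1:]

    def hill_climb(source, iters=200, seed=0):
        random.seed(seed)
        current, best = list(source), score(source, source)
        for _ in range(iters):
            cand = propose_edit(current)
            s = score(cand, source)
            if s > best:                    # keep only improving edits
                current, best = cand, s
        return current

    print(" ".join(hill_climb("this is a very simple toy example".split())))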

Bio: Dr. Lili Mou is an Assistant Professor at the Department of Computing Science, University of Alberta. Lili received his BS and PhD degrees in 2012 and 2017, respectively, from the School of EECS, Peking University. After that, he worked as a postdoctoral fellow at the University of Waterloo. His research interests include deep learning applied to natural language processing as well as programming language processing. He has publications at top-tier conferences and journals, including AAAI, ACL, CIKM, COLING, EMNLP, ICASSP, ICLR, ICML, IJCAI, INTERSPEECH, NAACL-HLT, and TACL (in alphabetic order). He also has tutorials at EMNLP-IJCNLP'19 and ACL'20.

April 3, 2020: How will AI change legal informatics?
-
Randy Goebel, University of Alberta

Abstract: While artificial intelligence holds the promise of providing value in almost every discipline, the two areas of most immediate value are medicine/health and law. In this presentation, we summarize some of the motivations and misconceptions of AI applied to law, and provide a glimpse of the work of the XAI lab in navigating paths to creating value from AI across the full spectrum of the judicial process, from access to justice to support for judgement decisions.

Bio: Randy Goebel is currently a professor in the Department of Computing Science at the University of Alberta, Associate Vice President (Research) and Associate Vice President (Academic), and Fellow and co-founder of the Alberta Machine Intelligence Institute (Amii). He received his B.Sc. in Computer Science, M.Sc. in Computing Science, and Ph.D. in Computer Science from the universities of Regina, Alberta, and British Columbia, respectively. Professor Goebel's theoretical work on abduction, hypothetical reasoning, and belief revision is internationally well known, and his recent research is focused on the formalization of visualization and explainable artificial intelligence (XAI). He has been a professor or visiting professor at the University of Waterloo, University of Regina, University of Tokyo, Hokkaido University, Multimedia University (Malaysia), and the National Institute of Informatics, and a visiting researcher at NICTA (now Data61) in Australia, and at DFKI and the VW Data:Lab in Germany. He has worked on optimization, algorithm complexity, systems biology, and natural language processing, including applications in legal reasoning and medical informatics.

March 13, 2020: Advances in Probabilistic Generative Models
-
Mahdi Karami, University of Alberta

Abstract: In this seminar, I will present new contributions in deep probabilistic generative models. Normalizing flows can be used to construct high-quality generative probabilistic models, but training and sample generation require repeated evaluation of Jacobian determinants and function inverses. To make such computations feasible, current approaches employ highly constrained architectures that produce diagonal, triangular, or low-rank Jacobian matrices. As an alternative, we investigate a set of novel normalizing flows based on circular and symmetric convolutions. We show that these transforms admit efficient Jacobian determinant computation and inverse mapping (deconvolution) in O(N log N) time. Additionally, element-wise multiplication, widely used in normalizing flow architectures, can be combined with these transforms to increase modeling flexibility. We further propose an analytic approach to designing nonlinear element-wise bijectors that induce special properties in the intermediate layers, by implicitly introducing specific regularizers in the loss. We show that these transforms allow more effective normalizing flow models to be developed for generative image models.
In the second part, we propose a deep multi-view generative model that is composed of a linear probabilistic multi-view layer in the latent space, in conjunction with deep generative networks as observation models. The variations of each view are captured by a shared latent representation and a set of view-specific factors. The shared latent representation lies in a low-dimensional subspace that describes most of the variability (essence) in the multi-view data. To approximate the posterior distribution of the latent probabilistic multi-view layer, a variational inference approach is adopted that results in a scalable algorithm for training deep generative multi-view neural networks. Our empirical studies confirm that the proposed deep generative multi-view model can efficiently integrate the relationship between multiple views to alleviate the difficulty of learning.
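
As a numeric illustration of the first part (my sketch, not the paper's code): the Jacobian of a circular convolution is circulant, so its log-determinant and its inverse (deconvolution) both come from the FFT of the kernel in O(N log N) time.

    import numpy as np

    def circ_conv(x, w):
        # circular convolution via the FFT; Jacobian is circulant in w
        return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(w, len(x))))

    def circ_logdet(w, n):
        # log|det J| = sum of log-magnitudes of the kernel's FFT
        return float(np.sum(np.log(np.abs(np.fft.fft(w, n)))))

    def circ_deconv(y, w):
        # inverse mapping: divide in the frequency domain
        return np.real(np.fft.ifft(np.fft.fft(y) / np.fft.fft(w, len(y))))

    rng = np.random.default_rng(0)
    x = rng.standard_normal(8)
    w = np.array([1.0, 0.5, 0.25])          # kernel with nonvanishing FFT
    y = circ_conv(x, w)
    assert np.allclose(circ_deconv(y, w), x)
    # cross-check log|det J| against the dense circulant matrix
    C = np.stack([circ_conv(e, w) for e in np.eye(8)], axis=1)
    assert np.isclose(circ_logdet(w, 8), np.log(np.abs(np.linalg.det(C))))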

Bio: Mahdi Karami is a Ph.D. candidate working under the supervision of Dale Schuurmans. He has been working on probabilistic generative models during his Ph.D., and his work on Invertible Convolutional Flow was a spotlight paper at NeurIPS 2019. His main interests are deep generative models, representation learning, and time series analysis.

February 28, 2020: New analyses of sample-based reinforcement learning
-
Marc Bellemare, Google Brain

Abstract: This talk will present two of our recent results regarding sample-based reinforcement learning. The first half is about using distributional tools to analyze the behaviour of sample-based algorithms, such as TD(0) and Q-Learning, when the step-size is kept constant. We find that lifting the analysis to distributions results in significantly simpler proofs, and offers new insights into the learning process. The second half revisits representation learning in the linear function approximation setting to answer the question: are there representations (i.e., mappings from states to features) which are stable, in the sense that they lead to convergent RL even in the presence of the deadly triad? Surveying classic representation learning schemes (proto-value functions, Krylov bases, etc.), we find that not all schemes are created equal with respect to stability. From an auxiliary tasks perspective, we find that simply predicting immediate next features and expected future rewards may actually lead to stable representations, giving formal evidence of the usefulness of these common heuristics.
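
To make the object of study concrete, here is a toy sketch (mine, not from the talk) of tabular TD(0) with a constant step size on a two-state chain: the iterates never settle to a point but instead reach a stationary distribution, which is what a distributional analysis characterizes.

    import numpy as np

    rng = np.random.default_rng(0)
    P = np.array([[0.9, 0.1], [0.2, 0.8]])   # transition probabilities
    r = np.array([0.0, 1.0])                  # reward for being in each state
    gamma, alpha = 0.9, 0.1                   # constant step size: no decay
    V = np.zeros(2)
    s = 0
    tail = []
    for t in range(20000):
        s_next = rng.choice(2, p=P[s])
        V[s] += alpha * (r[s] + gamma * V[s_next] - V[s])
        if t >= 15000:
            tail.append(V.copy())              # record late iterates
        s = s_next
    tail = np.array(tail)
    print("iterate mean:", tail.mean(axis=0))
    print("iterate std: ", tail.std(axis=0))   # nonzero: V_t fluctuates forever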

Bio: Marc G. Bellemare leads the reinforcement learning efforts at Google Brain in Montreal and holds a Canada CIFAR AI Chair at the Quebec Artificial Intelligence Institute (Mila). He received his Ph.D. from the University of Alberta, where he developed the highly-successful Arcade Learning Environment with Michael Bowling and Joel Veness. He was a research scientist at DeepMind from 2013 to 2017, during which time he made major contributions to deep reinforcement learning, in particular pioneering the distributional method. Marc G. Bellemare is also a CIFAR Learning in Machines & Brains Fellow and an adjunct professor at McGill University.

February 21, 2020: Safe Testing
-
Rianne de Heide, Centrum Wiskunde & Informatica (CWI)

Abstract: We present a new theory of hypothesis testing. The main concept is the S-value, a notion of evidence which, unlike p-values, allows for effortlessly combining evidence from several tests, even in the common scenario where the decision to perform a new test depends on the previous test outcome: safe tests based on S-values generally preserve Type-I error guarantees under such "optional continuation". S-values exist for completely general testing problems with composite null and alternatives. Their prime interpretation is in terms of gambling or investing, each S-value corresponding to a particular investment. Surprisingly, optimal "GROW" S-values, which lead to fastest capital growth, are fully characterized by the joint information projection (JIPr) between the set of all Bayes marginal distributions on H0 and H1. Thus, optimal S-values also have an interpretation as Bayes factors, with priors given by the JIPr. We illustrate the theory using two classical testing scenarios: the one-sample t-test and the 2x2 contingency table. In the t-test setting, GROW S-values correspond to adopting the right Haar prior on the variance, as in Jeffreys' Bayesian t-test. However, unlike Jeffreys', the "default" safe t-test puts a discrete 2-point prior on the effect size, leading to better behavior in terms of statistical power. Sharing Fisherian, Neymanian and Jeffreys-Bayesian interpretations, S-values and safe tests may provide a methodology acceptable to adherents of all three schools.
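
For intuition, a minimal example (my toy setup, a simple null versus a simple alternative rather than the composite cases in the talk): the likelihood ratio is an S-value with expectation 1 under H0, so S-values from successive batches can be multiplied under optional continuation, and Markov's inequality gives the Type-I guarantee P_H0(S >= 1/alpha) <= alpha.

    import numpy as np

    def normal_pdf(x, mu, sigma=1.0):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    def s_value(batch, mu0=0.0, mu1=0.5):
        # likelihood ratio for simple H1 (mu1) vs simple H0 (mu0)
        return float(np.prod(normal_pdf(batch, mu1) / normal_pdf(batch, mu0)))

    rng = np.random.default_rng(0)
    S = 1.0
    for _ in range(100):                              # optional continuation
        S *= s_value(rng.normal(0.5, 1.0, size=10))   # data truly from H1
        if S >= 20:                    # reject H0 at level 1/20 = 0.05
            break
    print("accumulated S-value:", S)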

Bio: You can view Rianne de Heide's bio here: https://homepages.cwi.nl/~heide/

February 7, 2020: Generalizing over Timescale
-
Craig Sherstan, University of Alberta

Abstract: General value functions (GVFs) represent the world as a collection of predictive questions using the semantics of value functions. A fundamental open question with GVFs is determining what questions should be asked. One of the key parameters of a GVF is the prediction timescale. In this talk I will present an approach called Γ-nets, which allows a single function approximation network to be trained and queried for arbitrary fixed timescales, removing the need to determine a GVF's timescale ahead of time. This presentation will be an extended version of the talk I will be presenting at AAAI in New York next week.
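
A rough sketch of the idea (my simplification, not the paper's architecture): feed the timescale gamma to the function approximator as an extra input, so the same weights can be trained on, and queried for, any fixed timescale. A linear approximator stands in for the network here.

    import numpy as np

    def features(x, gamma):
        # append simple functions of gamma to the state features
        return np.concatenate([x, [gamma, gamma * gamma]])

    def predict(w, x, gamma):
        return w @ features(x, gamma)

    def td0_update(w, x, x_next, reward, gamma, alpha=0.01):
        phi = features(x, gamma)
        target = reward + gamma * predict(w, x_next, gamma)
        return w + alpha * (target - predict(w, x, gamma)) * phi

    # train one set of weights across several timescales at once
    rng = np.random.default_rng(0)
    w = np.zeros(5)
    x = rng.random(3)
    for _ in range(5000):
        x_next = rng.random(3)
        for g in (0.5, 0.9, 0.99):
            w = td0_update(w, x, x_next, reward=x_next.mean(), gamma=g)
        x = x_next
    print(predict(w, x, 0.5), predict(w, x, 0.99))  # longer horizon, larger value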

Bio: Craig Sherstan is currently a Research Scientist at Sony AI and is defending his PhD dissertation in Computing Science at the University of Alberta. He is supervised by Patrick Pilarski as part of the Bionic Limbs for Improved Natural Control (BLINC) Lab and the Reinforcement Learning and Artificial Intelligence Lab (RLAI), and is a Vanier Scholar. Craig's research has focused on developing agents which can continually and incrementally construct predictive representations of themselves and their world.

January 31, 2020: EEGEDU - An interactive brain playground and accessible experimental tool
-
Kyle Elliott Mathewson, University of Alberta

Abstract: Over the winter break, my two brothers and I developed a browser-based EEG educational tool at http://eegedu.com. This is an open-source project (github.com/kylemath/eegedu) written in JavaScript, which uses Web Bluetooth to connect to Muse EEG headsets using only an Android phone or Mac computer. Data can be streamed and visualized in a number of ways. A series of modules was developed as an educational tool for PSYCO 375 and other courses, to complement a new set of 70 Muse headsets the Faculty of Science has received for teaching. Data can also be recorded in a number of formats, providing an easy tool for experimental data collection. Brain-controlled animations and training machine learning classifiers are also part of the playground. Finally, a set of three practice experiments is introduced for classrooms. Our hope is that we can get feedback on this exciting new tool so we can further refine it and expand its utility.

Bio: Dr. Kyle Mathewson is an Associate Professor in the Department of Psychology at the University of Alberta. He is the director of the Attention, Perception, and Performance Lab (APPLab), and an affiliate with the Neuroscience and Mental Health Institute.

January 24, 2020: AI/ML and Big Data Applications in New Drug Discovery
-
Ratmir Derda, University of Alberta

Abstract: Founded in 2017, 48Hour Discovery Inc. is a University of Alberta-based company that aims to facilitate new drug discovery in the pharmaceutical and biotechnology industries with its advanced platform based on patented barcoded libraries, a streamlined workflow, and open-access data management. The 48Hour Discovery Cloud houses a database of user-curated content of >10,000 screens and provides easy-to-use search functions, data visualization, and analysis. Together with its team of laboratory and data scientists, 48Hour Discovery is continually building new technologies that make ligand discovery faster, easier, and more transparent.

Bio: Dr. Ratmir Derda is the CEO of 48Hour Discovery and an Associate Professor at the University of Alberta. He received his B.Sci. in Physics from the Moscow Institute of Physics and Technology in 2001 and his Ph.D. in Chemistry from the University of Wisconsin-Madison in 2008. From 2008 to 2011, he was a postdoctoral researcher at Harvard University. Dr. Derda has been recognized for his work in research, business, and mentorship, with recent awards from the University of Alberta, and was featured as a Top 40 Under 40 local leader in Edmonton.

January 17, 2020: IBEX: A framework to improve on A* and IDA*
-
Nathan Sturtevant, University of Alberta

Abstract: We tackle two long-standing problems related to re-expansions in heuristic search algorithms. For graph search, A* can require Ω(2^n) expansions, where n is the number of states within the final f bound. Existing algorithms that address this problem like B and B’ improve this bound to Ω(n^2). For tree search, IDA* can also require Ω(n^2) expansions. We describe a new algorithmic framework that iteratively controls an expansion budget and solution cost limit, giving rise to new graph and tree search algorithms for which the number of expansions is O(n log C*), where C* is the optimal solution cost. Our experiments show that the new algorithms are robust in scenarios where existing algorithms fail. In the case of tree search, our new algorithms have no overhead over IDA* in scenarios to which IDA* is well suited and can therefore be recommended as a general replacement for IDA*.
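
As a rough sketch of the framework's two controls (my simplification on a toy tree, not the IBEX pseudocode): an outer loop alternates between raising the cost limit, as in IDA*, and doubling a node-expansion budget whenever the inner cost-limited DFS runs out of expansions. The actual algorithms grow the cost limit more aggressively within the budget to achieve the O(n log C*) bound.

    import math

    def successors(path):
        # toy implicit tree: binary branching, depth 4, edge costs 1 and 2
        if len(path) >= 4:
            return []
        return [(path + (0,), 1), (path + (1,), 2)]

    def h(path):
        return 0                     # trivial admissible heuristic

    def is_goal(path):
        return path == (0, 1, 1)

    def dfs(node, g, limit, state):
        f = g + h(node)
        if f > limit:
            state["next_f"] = min(state["next_f"], f)   # smallest pruned f
            return None
        state["expanded"] += 1
        if state["expanded"] > state["budget"]:
            state["out_of_budget"] = True
            return None
        if is_goal(node):
            return g
        for child, cost in successors(node):
            sol = dfs(child, g + cost, limit, state)
            if sol is not None or state.get("out_of_budget"):
                return sol
        return None

    limit, budget = h(()), 2
    while True:
        state = {"expanded": 0, "budget": budget, "next_f": math.inf}
        sol = dfs((), 0, limit, state)
        if sol is not None:
            print("optimal cost:", sol)
            break
        if state.get("out_of_budget"):
            budget *= 2                  # budget exhausted: double it
        else:
            limit = state["next_f"]      # limit proven too small: raise it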

Bio: Nathan Sturtevant is a professor at the University of Alberta, an Amii Fellow, and a Canada CIFAR AI Chair. His research looks broadly at heuristic and combinatorial search problems, including both theoretical and applied approaches. This includes work for single and multiple agents, including specialized work involving machine learning, heuristic learning, bidirectional search, cooperative search, adversarial search, large-scale and parallel search, search for game design, abstraction and refinement, and inconsistent heuristics. His research has been implemented in commercial video games, and he continues to collaborate with practitioners in the games industry. Nathan received his PhD from UCLA and his BSc from UC Berkeley.