Workshop Schedule

The IJCAI’20 AI4AN Workshop will be held fully online on 7 January 2021. The full schedule is as follows.

AI4AN-workshop-schedule-.docx

Keynote Talk 1: Progress on Shallow and Deep Anomaly Detection

Abstract: This talk will discuss three contributions of the Oregon State Robust AI team. First, I will review our ongoing efforts to create a large benchmark collection for featurized data and compare the performance of "shallow" anomaly detection algorithms. Second, I'll discuss our interesting discovery that we can greatly improve the true anomaly detection rate by incrementally incorporating expert feedback. Finally, I'll discuss the challenges of deep anomaly detection and describe our oracle anomaly detection experiments on deep networks for object classification. These experiments show that the learned feature representation of deep networks contains much more information about anomalies than is being extracted by current anomaly scoring methods.
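To make the feedback idea concrete, here is a minimal Python sketch of the workflow only, not the Oregon State team's algorithm: an unsupervised detector (scikit-learn's IsolationForest as a stand-in) ranks points, a simulated expert labels the top-ranked unlabelled point each round, and the ranking is crudely re-weighted with that answer.

import numpy as np
from sklearn.ensemble import IsolationForest

# Illustration only: synthetic data with 20 planted anomalies and a simulated expert.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(980, 6)), rng.normal(loc=5.0, size=(20, 6))])
is_anomaly = np.r_[np.zeros(980, dtype=bool), np.ones(20, dtype=bool)]

scores = -IsolationForest(random_state=0).fit(X).score_samples(X)  # higher = more anomalous
weights = np.ones(len(X))
labelled = {}
for _ in range(10):                                  # 10 rounds of expert feedback
    ranking = np.argsort(-scores * weights)
    top = next(i for i in ranking if i not in labelled)
    labelled[top] = is_anomaly[top]                  # the expert's yes/no answer
    # Crude feedback rule: boost points similar to a confirmed anomaly,
    # damp points similar to a confirmed nominal.
    sim = np.exp(-np.linalg.norm(X - X[top], axis=1))
    weights *= (1.0 + sim) if labelled[top] else 1.0 / (1.0 + sim)

print("true anomalies found in 10 queries:", int(sum(labelled.values())))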

Speaker: Thomas G. Dietterich, Distinguished Professor Emeritus, Oregon State University

Biography: Dr. Dietterich (AB Oberlin College 1977; MS University of Illinois 1979; PhD Stanford University 1984) is Distinguished Professor and Director of Intelligent Systems in the School of Electrical Engineering and Computer Science at Oregon State University, where he joined the faculty in 1985. In 1987, he was named an NSF Presidential Young Investigator. In 1990, he published, with Dr. Jude Shavlik, the book Readings in Machine Learning, and he also served as Technical Program Co-Chair of the National Conference on Artificial Intelligence (AAAI-90). From 1992 to 1998 he was Executive Editor of the journal Machine Learning. The Association for the Advancement of Artificial Intelligence named him a Fellow in 1994, and the Association for Computing Machinery did the same in 2003. In 2000, he co-founded the free electronic Journal of Machine Learning Research, and he is currently a member of its Editorial Board. Since 2007, he has served as an arXiv moderator for Machine Learning. He was Technical Program Chair of the Neural Information Processing Systems (NIPS) conference in 2000 and General Chair in 2001. He is Past President of the International Machine Learning Society, a member of the IMLS Board, a member of the Advisory Board of the NIPS Foundation, and a former President of the Association for the Advancement of Artificial Intelligence.

Keynote Talk 2: Exploring Rare Categories on Graphs: Local vs. Global


Abstract: Rare categories refer to the under-represented minority classes in imbalanced data sets. They are prevalent across many high-impact applications in the security domain where the input data can be represented as graphs. In this talk, I will focus on two complementary strategies for exploring such rare categories -- local vs. global. With the local strategy, the goal is to explore a small neighborhood around a seed node from the rare category for identifying additional rare examples; with the global strategy, the goal is to explore the entire graph in order to identify rare-category-oriented representations. For each strategy, I will introduce recent techniques proposed from iSAIL Lab (https://isail-laboratory.github.io). Towards the end, I will also discuss potential future directions on this topic.
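As a toy illustration of the local strategy only (not a method from iSAIL Lab), the sketch below starts from one labelled rare "seed" node, restricts attention to its 2-hop neighbourhood, and ranks candidates by Jaccard similarity of neighbourhoods; the top-ranked nodes would be the next ones shown to a labelling oracle. It assumes the networkx package and uses a built-in toy graph as a stand-in.

import networkx as nx

G = nx.karate_club_graph()          # stand-in graph; any nx.Graph works
seed, k, budget = 0, 2, 5           # seed rare node, hop limit, query budget

# Restrict exploration to the k-hop neighbourhood of the seed.
neighbourhood = nx.single_source_shortest_path_length(G, seed, cutoff=k)
candidates = [v for v in neighbourhood if v != seed]

# Rank candidates by structural similarity to the seed (shared neighbours / Jaccard).
ranked = sorted(
    nx.jaccard_coefficient(G, [(seed, v) for v in candidates]),
    key=lambda triple: -triple[2],
)
print([v for _, v, _ in ranked[:budget]])   # nodes to query the oracle about next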


Speaker: Jingrui He, Associate Professor, University of Illinois at Urbana-Champaign

Biography: Jingrui He is an associate professor in the School of Information Sciences at the University of Illinois at Urbana-Champaign. She received her PhD in machine learning from Carnegie Mellon University in 2010. Her research focuses on heterogeneous machine learning, rare category analysis, active learning and semi-supervised learning, with applications in social network analysis, healthcare, and manufacturing processes. She is the recipient of the 2016 NSF CAREER Award, a three-time recipient of the IBM Faculty Award (2014, 2015, and 2018), and was selected for an IJCAI 2017 Early Career Spotlight. She has published more than 90 refereed articles and is the author of the book Analysis of Rare Categories (Springer-Verlag, 2011). Her papers have been selected as "Best of the Conference" by ICDM 2016, ICDM 2010, and SDM 2010. She has served on the senior program committee/program committee for Knowledge Discovery and Data Mining (KDD), the International Joint Conference on Artificial Intelligence (IJCAI), the Association for the Advancement of Artificial Intelligence (AAAI), the SIAM International Conference on Data Mining (SDM), and the International Conference on Machine Learning (ICML).

Keynote Talk 3: Automated Outlier Detection


Abstract: Given a specific outlier detection task with complicated data, the process of building an effective deep learning based system still relies heavily on human expertise and labor-intensive trial and error. While automated machine learning (AutoML) has shown promise in discovering effective deep architectures in various domains, such as image classification, object detection and semantic segmentation, contemporary AutoML methods are not suitable for outlier detection due to the lack of an intrinsic search space, an unstable search process, and low sample efficiency. In this talk, we will cover basic concepts, algorithms and an open-source system design for automated outlier detection. Specifically, we will first introduce AutoOD, which aims to search for an optimal neural network model within a predefined search space through curiosity-guided search and self-imitation learning. Then we will present TODS, an automated time series outlier detection system for research and industrial applications.
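The following sketch conveys only the underlying model-selection idea, not AutoOD's curiosity-guided search or the TODS API: a hypothetical search space of detectors and hyperparameters (scikit-learn's IsolationForest and OneClassSVM here) is scored on a small labelled validation set, and the best configuration is kept.

import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM
from sklearn.metrics import roc_auc_score

# Illustration only: synthetic data with validation labels (1 = outlier).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(950, 5)), rng.normal(loc=4.0, size=(50, 5))])
y = np.r_[np.zeros(950), np.ones(50)]

# Hypothetical search space: two detector families, a few settings each.
candidates = (
    [("iforest(n=%d)" % n, IsolationForest(n_estimators=n, random_state=0))
     for n in (50, 100, 200)]
    + [("ocsvm(nu=%.2f)" % nu, OneClassSVM(nu=nu, gamma="scale"))
       for nu in (0.05, 0.10, 0.20)]
)

def validation_auc(model):
    # score_samples: higher = more normal, so negate to obtain an outlier score.
    return roc_auc_score(y, -model.fit(X).score_samples(X))

name, model = max(candidates, key=lambda c: validation_auc(c[1]))
print("selected configuration:", name)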


Speaker: Xia Ben Hu, Associate Professor, Texas A&M University

Biography: Dr. Xia “Ben” Hu is an Associate Professor and Lynn '84 and Bill Crane '83 Faculty Fellow in the Department of Computer Science and Engineering at Texas A&M University. Dr. Hu has published over 100 papers in major academic venues, including NeurIPS, KDD, WWW, SIGIR, IJCAI, AAAI, etc. An open-source package developed by his group, AutoKeras, has become the most-used automated deep learning system on GitHub (with over 7,000 stars and 1,000 forks). His work on deep collaborative filtering, anomaly detection and knowledge graphs has been included in the TensorFlow package, the Apple production system and the Bing production system, respectively. His papers have received several Best Paper (Candidate) awards from venues such as WWW, WSDM and ICDM. He is the recipient of the NSF CAREER Award. His work has been cited more than 8,000 times, with an h-index of 41. He was conference General Co-Chair for WSDM 2020. More information can be found at http://faculty.cs.tamu.edu/xiahu/.

Keynote Talk 4: Why is my Entity Typical or Special? Approaches for Inlying and Outlying Aspects Mining


Abstract: When investigating an individual entity, we may wish to identify aspects in which it is usual or unusual compared to other entities. We refer to this as the inlying/outlying aspects mining problem. It is important for comparative analysis and for answering questions such as "How is this entity special?" or "How does it coincide with or differ from other entities?" Such information could be useful in a disease diagnosis setting (where the individual is a patient) or in an educational setting (where the individual is a student). We examine possible algorithmic approaches to this task and investigate the scalability and effectiveness of these different approaches.
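A one-dimensional toy version of the question "in which aspects is this individual special?" can be written in a few lines; the approaches discussed in the talk consider richer multi-attribute subspaces and density-based scores, so this is only the simplest instance of the idea (the attribute names and the query record below are made up).

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                     # a cohort of entities
features = ["age", "income", "visits", "score"]   # hypothetical attribute names
query = np.array([0.1, 3.5, -0.2, 0.0])           # the individual under study

# Rank each single-attribute "aspect" by how extreme the query is in it,
# using a simple z-score against the cohort.
z = np.abs(query - X.mean(axis=0)) / X.std(axis=0)
for name, score in sorted(zip(features, z), key=lambda p: -p[1]):
    print(f"{name}: |z| = {score:.2f}")           # 'income' should stand out here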


Speaker: James Bailey, Professor, The University of Melbourne

Biography: James Bailey is a Professor in the Melbourne School of Engineering at The University of Melbourne and Program Lead for Artificial Intelligence. He was previously an Australian Research Council Future Fellow and is a researcher in the field of machine learning and artificial intelligence, including interdisciplinary applications and operational frameworks.

His interests particularly relate to the assurance, certification and safety of systems based on machine learning and artificial intelligence. He contributes to the AI research community through roles such as membership of the editorial boards of the Journal of Artificial Intelligence Research, ACM Transactions on Data Science and IEEE Transactions on Big Data. He was co-Program Chair of the Australasian Joint Conference on Artificial Intelligence in 2019. He works on the deployment of AI systems in collaboration with a wide range of industry and government partners across the defence, energy and health sectors.

Keynote Talk 5: Isolation Distributional Kernel: A New Tool for Point and Group Anomaly Detection


Abstract: Isolation Distributional Kernel is a new way to measure the similarity between two distributions. Existing approaches based on kernel mean embedding, which convert a point kernel to a distributional kernel, have two key issues: the point kernel employed has a feature map with intractable dimensionality, and it is data independent. Isolation Distributional Kernel (IDK), which is based on a data-dependent point kernel, addresses both key issues. We demonstrate IDK’s efficacy and efficiency as a new tool for kernel-based anomaly detection for both point and group anomalies. Without explicit learning, using IDK alone outperforms the existing kernel-based point anomaly detector OCSVM and other kernel mean embedding methods that rely on the Gaussian kernel. For group anomaly detection, we introduce an IDK-based detector called IDK^2. It reformulates the problem of group anomaly detection in input space as a problem of point anomaly detection in Hilbert space, without the need for learning. IDK^2 runs orders of magnitude faster than the group anomaly detector OCSMM. We reveal for the first time that an effective kernel-based anomaly detector based on kernel mean embedding must employ a characteristic kernel which is data dependent.
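For readers who want a feel for the construction, the sketch below implements a simplified Voronoi-partition variant of an isolation-style feature map in NumPy and scores each point by its similarity to the kernel mean embedding of the whole dataset. It is a didactic approximation, not the authors' IDK implementation; the parameter names psi and t follow the usual isolation-kernel notation.

import numpy as np

def ik_feature_map(X, data, psi=16, t=100, rng=None):
    # Map each row of X to a t*psi binary vector: for each of t random
    # subsamples of size psi drawn from `data`, one-hot encode the index
    # of the nearest subsample point (its Voronoi cell).
    rng = np.random.default_rng(rng)
    feats = np.zeros((len(X), t * psi))
    for i in range(t):
        centers = data[rng.choice(len(data), size=psi, replace=False)]
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        nearest = d2.argmin(axis=1)                 # cell index per point
        feats[np.arange(len(X)), i * psi + nearest] = 1.0
    return feats / np.sqrt(t)                       # unit-norm rows

# Point anomaly score: similarity between a point's map and the mean map of
# the data (the kernel mean embedding); low similarity = more anomalous.
X = np.random.default_rng(0).normal(size=(500, 2))
X = np.vstack([X, [[8.0, 8.0]]])                    # one obvious outlier
phi = ik_feature_map(X, X, psi=16, t=100, rng=0)
scores = phi @ phi.mean(axis=0)
print(X[scores.argmin()])                           # typically the planted outlier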


Speaker: Kai Ming Ting, Professor, Nanjing University

Biography: After receiving his PhD from the University of Sydney, Australia, Kai Ming Ting worked at the University of Waikato (NZ), Deakin University, Monash University and Federation University in Australia. He joined Nanjing University in January 2020. His current research interests are in the areas of Isolation Kernel, Isolation Distributional Kernel, ensemble approaches, data mining and machine learning. He co-chaired the Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD) 2008. He has served as a senior program committee member for AAAI, ACM SIGKDD and PAKDD, and as a program committee member for a number of international conferences including ACM SIGKDD, IEEE ICDM, ICML and ECML. Research grants received include those from the National Natural Science Foundation of China, the US Air Force Office of Scientific Research (AFOSR/AOARD), the Australian Research Council, Toyota InfoTechnology Center and the Australian Institute of Sport. Awards received include the Runner-up Best Paper Award at IEEE ICDM 2008 and the Best Paper Award at PAKDD 2006. He is one of the creators of Isolation Forest, Isolation Kernel, mass estimation and mass-based similarity.

Keynote Talk 6: Deep and Shallow Anomaly Detection: One Class?


Abstract: Anomaly detection is the problem of identifying unusual observations in data. This problem is usually unsupervised and occurs in numerous applications such as industrial fault and damage detection, fraud detection in finance and insurance, intrusion detection in cybersecurity, medical diagnosis and disease detection, or scientific discovery. Many of these applications involve complex data, such as images, text, graphs, or biological sequences, that is continually growing in size. This has sparked great interest in developing deep learning approaches to anomaly detection and led to the introduction of a great variety of new methods.


In this talk, my aim is to provide a systematic and unifying view of anomaly detection methods. In particular, I will show surprising similarities between novel deep and classic 'shallow' methods, but also discern their differences. This discussion will include methods based on reconstruction, generative modeling, and one-class classification, where I will point out common underlying principles. Finally, I will conclude my talk by highlighting some exciting recent developments and potential paths for future research.
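As a small illustration of the deep/shallow correspondence mentioned above, the reconstruction principle can be reduced to its simplest "shallow" form: a linear autoencoder with a k-unit bottleneck recovers the same subspace as PCA, so PCA reconstruction error already gives a reconstruction-based anomaly score. This is a generic sketch on synthetic data, assuming scikit-learn, and is not taken from the talk.

import numpy as np
from sklearn.decomposition import PCA

# Illustration only: data near a 3-D subspace of R^10, plus 5 off-manifold anomalies.
rng = np.random.default_rng(0)
Z = rng.normal(size=(1000, 3))
W = rng.normal(size=(3, 10))
X = Z @ W + 0.1 * rng.normal(size=(1000, 10))
X[:5] = rng.normal(size=(5, 10)) * 3.0           # planted anomalies at indices 0..4

# PCA reconstruction error as the "shallow" analogue of deep reconstruction methods.
pca = PCA(n_components=3).fit(X)
X_hat = pca.inverse_transform(pca.transform(X))
scores = ((X - X_hat) ** 2).sum(axis=1)          # higher = more anomalous
print("top-5 anomaly indices:", np.argsort(scores)[-5:])   # typically 0..4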


Speaker: Marius Kloft, Professor, Technische Universität Kaiserslautern (presented on his behalf by his PhD student Lukas Ruff)

Biography: Since 2017 Marius Kloft has been a professor of computer science at TU Kaiserslautern, Germany. Previously, he was an adjunct faculty member of the University of Southern California (09/2018-03/2019), an assistant professor at HU Berlin (2014-2017) and a joint postdoctoral fellow (2012-2014) at the Courant Institute of Mathematical Sciences and Memorial Sloan-Kettering Cancer Center, New York, working with Mehryar Mohri, Corinna Cortes, and Gunnar Rätsch. From 2007 to 2011, he was a PhD student in the machine learning program of TU Berlin, headed by Klaus-Robert Müller. He was co-advised by Gilles Blanchard and Peter L. Bartlett, whose learning theory group at UC Berkeley he visited from 10/2009 to 10/2010. In 2006, he received a master's degree in mathematics from the University of Marburg with a thesis in algebraic geometry.

Marius Kloft is interested in theory and algorithms of statistical machine learning and its applications, especially in statistical genetics, mechanical engineering, and chemical process engineering. He has been working on, e.g., multiple kernel learning, transfer learning, anomaly detection, extreme classification, and adversarial learning. He co-organized workshops on these topics at NIPS 2010, 2013, 2014, 2017, ICML 2016, and Dagstuhl 2018. His dissertation on Lp-norm multiple kernel learning was nominated by TU Berlin for the Doctoral Dissertation Award of the German Chapter of the ACM (GI). In 2014 he received the Google Most Influential Papers 2013 Award and in 2015 the DFG Emmy-Noether Career Award.

Biography: Lukas Ruff is a third-year PhD student in the Machine Learning Group headed by Klaus-Robert Müller at TU Berlin. His research covers robust and trustworthy machine learning, with a specific focus on deep anomaly detection. Lukas received a B.Sc. degree in Mathematical Finance from the University of Konstanz in 2015 and a joint M.Sc. degree in Statistics from HU, TU and FU Berlin in 2017.