Tutorials

Tutorial 1 (Palm)

Sunday 1:00 PM - 2:30 PM



An Introduction to Neural Networks

David Bisant, Central Security Service 
Abstract: This tutorial will introduce modern neural networks and how they are applied to problems in artificial intelligence. It will cover basic terminology, history, application methods, and application case studies, along with modern topics such as deep learning. The material will be filtered and summarized for the novice. Further reading, software packages, and frameworks will also be discussed.
Bio: Dr. David Bisant received his B.S. from Colorado State University, his M.S. from the University of Maryland, and his PhD from George Washington University. He conducted his postdoctoral research at Stanford University in the laboratory of David Rumelhart. He has held positions at Medtronic, the US Department of Defense, Stanford University, and the Lab for Physical Sciences. He is currently a senior scientist at the Central Security Service. Dr. Bisant has over 25 years of experience applying neural networks to problems in biology, signals analysis, and cybersecurity.

Tutorial 2 (Bay)

Sunday 1:00 PM - 2:30 PM



Graph Neural Networks for Link Prediction

Alina Lazar, Youngstown State University 
Abstract: Graph Neural Networks (GNNs) are a class of deep learning methods designed to extract important information and make useful predictions from graph representations. Researchers have been working to adapt neural networks to operate on graph data for more than a decade. Most practical applications come from the areas of physics simulations, object detection, and recommendation systems. Given this broad range of applications, GNNs are one of the fastest-growing and most active research topics, attracting increasing attention not only from the machine learning and data science community but from the larger scientific community as well. The materials for this tutorial will be selected and organized for researchers with no prior knowledge of GNNs. Further reading, applications, and the most popular software packages and frameworks will also be discussed.
Bio: Dr. Alina Lazar received her PhD from Wayne State University. She is a Professor in the School of Computer Science, Information and Engineering Technology at Youngstown State University and a faculty affiliate at Lawrence Berkeley National Lab. Her research interests include machine learning and data science. Lately, she has been working on applying machine learning algorithms to large scientific datasets, networking, and software engineering. She is also interested in adapting learning algorithms to scale well and to handle large datasets, missing values, and noise. Dr. Lazar has taught database and machine learning courses at both the undergraduate and graduate levels. She enjoys working with talented undergraduate students on multidisciplinary research projects.

Tutorial 3 (Palm)

Sunday 3:00 PM - 4:30 PM


Topological Data Analysis in Natural Language Processing

Wlodek Zadrozny, UNC Charlotte 
Abstract: Topological Data Analysis (TDA) introduces methods that capture the underlying structure of shapes in data. Over the last two decades, TDA has mostly been examined in unsupervised machine learning tasks. It has often been considered an alternative to conventional algorithms because of its ability to deal with high-dimensional data in tasks including, but not limited to, clustering, dimensionality reduction, and descriptive modeling. This tutorial will focus on applications of topological data analysis to text data. After introducing the fundamentals, we will show ways in which topological information can be applied to example natural language processing (NLP) tasks, leading to new insights or improved accuracy. Examples include classification, sentence acceptability judgments, the structure of word embeddings, comparisons of writing styles, summarization, and others, such as fraud detection.
Bio: Dr. Wlodek Zadrozny joined the faculty of the University of North Carolina at Charlotte in 2013, after a 27-year career at the IBM T.J. Watson Research Center. Dr. Zadrozny is Professor of Computer Science and Professor of Data Science at UNC Charlotte. His research focuses on natural language understanding and its applications. At IBM, from 2008 to 2013, Dr. Zadrozny was a member of the Watson project, the Jeopardy!-playing machine, and subsequently a recipient of the 2013 AAAI Feigenbaum Prize for his contributions to the project. As a scientist at IBM Research, he led and contributed to a range of projects, including semantic search, natural language dialogue systems, and a value-net analysis of intangible assets. Dr. Zadrozny has published about a hundred refereed papers on various aspects of text processing and has been granted sixty patents.