Dr. Yuan Tian is an Associate Professor of Electrical and Computer Engineering and of Computer Science, and a member of the Institute of Law, Technology, and Public Policy, at the University of California, Los Angeles. Before joining UCLA, she was an Assistant Professor of Computer Science at the University of Virginia. She obtained her Ph.D. from Carnegie Mellon University in 2017 and interned at Microsoft Research, Facebook, and Samsung Research. Her research interests involve security and privacy and their interactions with computer systems, machine learning, and human-computer interaction. Her current research focuses on developing new technologies for protecting user privacy, particularly in the areas of the Internet of Things and machine learning. Her work has generated real-world impact: countermeasures and design changes have been integrated into popular platforms, and her findings have shaped the security recommendations of standards organizations. She is a recipient of the Okawa Foundation Award (2022), the Google Research Scholar Award (2021), a Facebook Faculty Award (2021), the NSF CAREER Award (2020), an NSF CRII Award (2019), and an Amazon AI Faculty Fellowship (2019). Her research has appeared in top-tier security and systems venues, and her projects have been covered by media outlets such as IEEE Spectrum, Forbes, Fortune, Wired, and The Telegraph.
Introduction to Trustworthy AI
Abstract: As artificial intelligence (AI) systems such as Large Language Models become increasingly integrated into critical decision-making processes, ensuring their security, privacy, and fairness is essential. This talk will provide an introduction to the fundamental concepts of trustworthy AI, with a focus on key challenges in machine learning (ML) security, the protection of sensitive data, and the promotion of fairness in model outcomes. We will explore the risks posed by adversarial attacks, data leakage, and biased models, while also discussing emerging approaches to mitigate these issues and build more robust, ethical AI systems. We will also show examples of cutting-edge research in this area.
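As a hedged, concrete illustration of the adversarial-attack risk mentioned in the abstract (not an example drawn from the talk itself), here is a minimal FGSM-style perturbation of a toy logistic model; the weights, input, and step size `eps` are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy logistic "model": weights assumed already trained (invented here).
d = 10
w = rng.normal(size=d)
x = rng.normal(size=d)
label = 1 if x @ w > 0 else 0          # the model's own (correct) label

def prob(v):
    """P(class = 1) under the logistic model."""
    return 1.0 / (1.0 + np.exp(-(v @ w)))

# FGSM: take a bounded step in the direction of the sign of the input
# gradient of the loss.  For logistic loss that gradient is (p - label) * w.
eps = 0.5
grad = (prob(x) - label) * w
x_adv = x + eps * np.sign(grad)

# Each coordinate of the input moves by at most eps, yet the model's
# score is pushed toward the wrong class.
score_shift = (x_adv - x) @ w
```

The point of the sketch is that a perturbation bounded coordinate-wise by a small `eps` still moves the decision score monotonically toward the wrong class, which is exactly the robustness failure adversarial-attack research studies.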
Dr. Myung (Michael) Cho is an Assistant Professor in the Department of Electrical and Computer Engineering at California State University, Northridge (CSUN). He received his B.S. in Electrical and Computer Engineering from Hanyang University, Seoul, Korea, in 2007, and his Ph.D. in Electrical and Computer Engineering from the University of Iowa in 2017.
His research interests center on optimization for distributed systems, including signal processing in distributed systems, optimal control of distributed systems, and distributed machine learning. He currently leads the MOSAIC (Machine learning/Optimization/Signal processing/AI/Combination) Lab at CSUN, where he works to integrate ML, AI, signal processing, and optimization, creating synergies across these diverse fields and perspectives.
Tree Network Design for a Faster Distributed Machine Learning Process with Distributed Dual Coordinate Ascent
Abstract: This project focuses on improving the efficiency of distributed machine learning (ML) by designing an optimal tree-based network structure. In distributed ML, coordination across multiple nodes can become a bottleneck due to communication delays and network latency. To address this challenge, the project proposes a hierarchical, tree-shaped architecture that minimizes communication costs between nodes, making the distributed learning process faster and more scalable. The core of the methodology is Distributed Dual Coordinate Ascent on a Tree network (DDCA-Tree), an algorithm well suited to large-scale ML tasks. DDCA-Tree breaks complex optimization problems into smaller, manageable subproblems that can be solved independently by the nodes of a distributed system on any tree network. The proposed tree network design complements this approach by ensuring efficient communication flow between nodes, further accelerating convergence. This approach is particularly useful for large-scale applications such as big data analytics, neural networks, and other ML algorithms where data is distributed across multiple machines. By leveraging the tree structure and DDCA, this project aims to enhance the scalability, speed, and overall performance of distributed ML.
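The abstract describes a general pattern: leaf nodes solve local subproblems by dual coordinate ascent, and updates are combined up a tree. As a hedged single-process sketch of that pattern (not the DDCA-Tree algorithm itself), the following uses the standard closed-form hinge-loss SVM dual update at each leaf and a conservative CoCoA-style averaging rule at the root; the data, tree shape, and combination rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable data, sharded across K = 4 leaf nodes.
n, d, K = 200, 5, 4
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = np.sign(X @ w_true)
shards = np.array_split(np.arange(n), K)

C = 1.0                    # SVM box constraint
alpha = np.zeros(n)        # global dual variables
w = np.zeros(d)            # global primal model

def local_round(idx, w_global, epochs=5):
    """Dual coordinate ascent on one shard (closed-form hinge-loss
    SVM dual update); returns the local dual and primal changes."""
    w_loc = w_global.copy()
    a_loc = alpha[idx].copy()
    for _ in range(epochs):
        for j, i in enumerate(idx):
            g = y[i] * (X[i] @ w_loc) - 1.0       # partial gradient
            q = X[i] @ X[i]                       # Gram diagonal entry
            a_new = np.clip(a_loc[j] - g / q, 0.0, C)
            w_loc += (a_new - a_loc[j]) * y[i] * X[i]
            a_loc[j] = a_new
    return a_loc - alpha[idx], w_loc - w_global

# A two-level tree: the root has two internal nodes, each with two leaves.
tree = [[0, 1], [2, 3]]

for _ in range(10):
    dw_total = np.zeros(d)
    for group in tree:                       # internal nodes aggregate...
        for k in group:                      # ...their leaves' local updates
            da, dw = local_round(shards[k], w)
            alpha[shards[k]] += da / K       # safe (averaged) dual update
            dw_total += dw
    w += dw_total / K                        # root combines, broadcasts down

accuracy = np.mean(np.sign(X @ w) == y)
```

Averaging the updates by 1/K is the conservative combination rule used by CoCoA-style methods; a real tree-network design could additionally pipeline these reductions along the physical topology so that communication overlaps with local computation.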