Workshop Program
The CEFSW Workshop will be held on December 3, 2024.
Location: TBD
This is a half-day workshop and the program outline is as follows:
09.00 - 09.15 Opening Remarks, by Shengyu Zhang
09.15 - 10.00 Paper Presentations:
An Adaptive Aggregation Method for Federated Learning via Meta Controller, by Tao Shen
DHelper: A Collaborative Toolkit for Manuscript Restoration, by Yue Han/Yuqing Zhang
Distributed Optimization over Block-Cyclic Data, by Yucheng Ding/Chaoyue Niu
10.00 - 10.30 Invited Talk 1: Towards Industrial Large Models and Digital Twins, by Jiehan Zhou
10.30 - 10.45 Coffee Break
10.45 - 11.15 Invited Talk 2: Beyond Language: Revisiting ASR for Future Challenges, by Sheng Li
11.15 - 11.45 Invited Talk 3 (Online): Fine-grained Action Analysis for Human Behavior Understanding, by Jinglin Xu
11.45 - 12.15 Invited Talk 4 (Online): Heterogeneity-aware Personalized Federated Learning via Adaptive Dual-Agent Reinforcement Learning, by Ting Wang
Invited Talk Information
Towards Industrial Large Models and Digital Twins
Abstract: This talk presents a comprehensive exploration of advancements in industrial large models and digital twin technologies led by the CogTwins Lab at Shandong University of Science and Technology (SDUST). The research addresses critical areas in intelligent manufacturing, emphasizing equipment fault prediction, prognostics and health monitoring, and process optimization through multimodal data integration.
Bio: Dr. Jiehan Zhou is a Professor and Director of International Education and Research Programs at the School of Computer Science, Shandong University of Science and Technology, and a Docent in the Department of Electronics and Information Engineering at the University of Oulu, Finland. He has a rich academic background and professional experience, having obtained a Ph.D. in Mechanical Automation from Huazhong University of Science and Technology in 2000 and a Ph.D. in Computer Engineering from the University of Oulu, Finland, in 2011. He has worked at renowned research institutes and universities such as Tsinghua University, the VTT Technical Research Centre of Finland, INRIA in France, the Luxembourg Institute of Science and Technology (LIST), the University of Oulu, and the University of Toronto, Canada, accumulating over 20 years of international research experience across three continents. He has held positions such as Senior Scientist, Team Leader, Laboratory Director, General Manager of an Industrial Internet company in Germany, and Principal Engineer at a cloud computing research institute in Canada. He also serves as a visiting professor at Huazhong University of Science and Technology. His research falls within the fields of Industrial Large Models and Industrial Digital Twins. He has published more than 150 technical papers on intelligent manufacturing, system modeling and simulation, and the industrial internet.
Beyond Language: Revisiting ASR for Future Challenges
Abstract: Automatic speech recognition (ASR) transforms spoken audio into word, subword, or character sequences, serving as one of the most intuitive human-machine interfaces. It plays a vital role in complex tasks such as speech-to-speech translation and robotic dialogue.
With advances in deep neural networks, particularly large self-attention models, ASR accuracy has seen substantial gains. Despite this progress, ASR is far from being solved. Developers continue to face significant challenges, particularly in supporting low-resourced languages. Additionally, widespread adversarial attacks and data security concerns pose serious obstacles for real-world applications.
This talk will explore our research efforts to address these challenges, focusing on low-resourced multilingual modeling, enhancing security, and extending ASR's capabilities to critical areas such as disordered speech, Alzheimer's disease detection, and other applications beyond traditional language processing.
Bio: Sheng Li received his B.S. and M.E. degrees from Nanjing University, Nanjing, China, in 2006 and 2009, respectively, and his Ph.D. from Kyoto University, Kyoto, Japan, in 2016. From 2009 to 2012, he worked at the joint lab of the Chinese University of Hong Kong and Shenzhen City, researching speech technology-assisted language learning. From 2016 to 2017, he was a researcher at Kyoto University, studying speech recognition systems for humanoid robots. In 2017, he joined the National Institute of Information and Communications Technology, Kyoto, Japan, as a researcher working on speech recognition.
Fine-grained Action Analysis for Human Behavior Understanding
Abstract: Fine-grained action analysis encompasses fine-grained action recognition, fine-grained action localization, 3D human pose estimation, and fine-grained action quality assessment, which are widely applied in the intelligent security, healthcare, sports, and media fields. This talk focuses on fine-grained spatial-temporal action localization, 3D human pose estimation, and fine-grained action quality assessment, addressing how to localize fine-grained actions with fuzzy boundaries in time and space, estimate human poses with uncertain depth in 3D space, and more accurately assess human action quality from a fine-grained perspective. These works support the development of sports, rehabilitation training, physical fitness testing, and digital media.
Bio: Jinglin Xu is an Associate Professor at the School of Intelligence Science and Technology, University of Science and Technology Beijing (USTB). Before joining USTB, she was a Postdoctoral Fellow in the Department of Automation at Tsinghua University. She received her Ph.D. degree from Northwestern Polytechnical University. Her research interests include computer vision, video understanding, and fine-grained action analysis, and she has authored 20 papers in top-tier journals and conference proceedings.
Heterogeneity-aware Personalized Federated Learning via Adaptive Dual-Agent Reinforcement Learning
Abstract: Federated Learning (FL) empowers multiple clients to collaboratively train machine learning models without sharing local data, making it highly applicable in heterogeneous Internet of Things (IoT) environments. However, intrinsic heterogeneity in clients' model architectures and computing capabilities often results in model accuracy loss and the intractable straggler problem, which significantly impairs training effectiveness. To tackle these challenges, this work proposes a novel Heterogeneity-aware Personalized Federated Learning method, named HAPFL, built on multi-level Reinforcement Learning (RL) mechanisms. HAPFL optimizes the training process through three strategic components:
1) An RL-based heterogeneous model allocation mechanism. The parameter server employs a Proximal Policy Optimization (PPO)-based RL agent to adaptively allocate appropriately sized, differentiated models to clients based on their performance, effectively mitigating performance disparities.
2) An RL-based training intensity adjustment scheme. The parameter server leverages another PPO-based RL agent to dynamically fine-tune the training intensity for each client, further enhancing training efficiency and reducing straggling latency.
3) A knowledge distillation-based mutual learning mechanism. Each client deploys both a heterogeneous local model and a homogeneous lightweight model named LiteModel, and these models undergo mutual learning through knowledge distillation. This uniform LiteModel plays a pivotal role in aggregating and sharing global knowledge, significantly enhancing the effectiveness of personalized local training.
Experimental results across multiple benchmark datasets demonstrate that HAPFL not only achieves high accuracy but also reduces overall training time by 20.9%-40.4% and straggling latency by 19.0%-48.0% compared to existing solutions.
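To make the third component more concrete, below is a minimal sketch of mutual knowledge distillation between a client's heterogeneous local model and the uniform LiteModel. It assumes PyTorch; the function name, temperature T, and loss weighting alpha are illustrative choices, not details taken from the paper or the authors' implementation.

```python
# Illustrative sketch (not the authors' code): one local step of mutual
# knowledge distillation between a heterogeneous local model and a LiteModel.
import torch
import torch.nn.functional as F

def mutual_distillation_step(local_model, lite_model, x, y,
                             opt_local, opt_lite, T=2.0, alpha=0.5):
    """Both models learn from the labels and from each other's softened
    predictions; T and alpha are assumed hyperparameters."""
    logits_local = local_model(x)
    logits_lite = lite_model(x)

    # Soft targets from each peer, detached so gradients stay within one model.
    soft_local = F.softmax(logits_local.detach() / T, dim=1)
    soft_lite = F.softmax(logits_lite.detach() / T, dim=1)

    # Heterogeneous local model: supervised loss + distillation from LiteModel.
    loss_local = (1 - alpha) * F.cross_entropy(logits_local, y) + \
                 alpha * (T ** 2) * F.kl_div(
                     F.log_softmax(logits_local / T, dim=1), soft_lite,
                     reduction="batchmean")

    # LiteModel: supervised loss + distillation from the local model.
    loss_lite = (1 - alpha) * F.cross_entropy(logits_lite, y) + \
                alpha * (T ** 2) * F.kl_div(
                    F.log_softmax(logits_lite / T, dim=1), soft_local,
                    reduction="batchmean")

    opt_local.zero_grad(); loss_local.backward(); opt_local.step()
    opt_lite.zero_grad(); loss_lite.backward(); opt_lite.step()
    return loss_local.item(), loss_lite.item()
```

Detaching each peer's soft targets keeps the two optimization paths separate, so each model learns from, rather than through, the other; only the uniform LiteModel would then be aggregated to share global knowledge.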
Bio: Ting Wang received his Ph.D. degree from the Hong Kong University of Science and Technology, Hong Kong, in 2015. He is currently an associate professor with the Software Engineering Institute, East China Normal University (ECNU), Shanghai, China. Prior to joining ECNU in 2020, he worked at Bell Labs as a research scientist from 2015 to 2016, and at Huawei as a senior engineer from 2016 to 2020. His research interests include cloud/edge computing, federated learning, and distributed machine learning.