In short, we are dedicated to developing innovative methodologies that orchestrate 6G-and-beyond intelligent network systems. Our work focuses on applying a diverse array of mathematical techniques to create fundamental, analytically grounded solutions that optimize how data is utilized, transferred, and processed within complex, heterogeneous network environments.
Our lab's major goal is to bridge the gap between advanced mathematical theories and their application to emerging areas in wireless networks and AI/ML. These theories include, but are not limited to, convex and non-convex optimization (such as geometric programming), convergence analysis of convex and non-convex loss functions, network optimization and operations research, linear algebra and matrix theory, random processes, and stochastic analysis.
Development of new algorithms and mathematical frameworks for distributed machine learning, particularly federated learning: This research area focuses on creating innovative algorithms and mathematical models that enable distributed machine learning, with a particular emphasis on federated learning. Federated learning allows multiple devices or entities to collaboratively train a shared model without exchanging their raw data, thus preserving privacy. The challenge lies in ensuring that these methods not only converge efficiently to accurate solutions but also remain robust when deployed in real-world wireless network environments, where conditions are unpredictable and can vary significantly.
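The collaborative training loop described above can be sketched with a minimal FedAvg-style example. All numbers here (client count, data sizes, learning rate, local steps) are illustrative assumptions, not our actual methods:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each of K clients holds private data for a linear
# regression task; only model weights, never raw data, are exchanged.
K, d, n = 5, 3, 40
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(K):
    X = rng.normal(size=(n, d))
    y = X @ true_w + 0.01 * rng.normal(size=n)
    clients.append((X, y))

def local_update(w, X, y, lr=0.05, steps=10):
    """A few local gradient-descent steps on one client's private data."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# FedAvg: the server broadcasts w, clients train locally, server averages.
w = np.zeros(d)
for _ in range(20):
    local_models = [local_update(w.copy(), X, y) for X, y in clients]
    w = np.mean(local_models, axis=0)

print(np.round(w, 2))  # approaches true_w without any data leaving a client
```

The averaging step is what distinguishes this from centralized training: the server only ever sees model parameters, while each client's dataset stays local.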
Development of new convergence analysis techniques and mathematical tools to study the characteristics of distributed/federated machine learning: This research area is dedicated to developing new convergence analysis techniques and mathematical tools that provide deeper insights into the characteristics of federated learning algorithms. By analyzing the convergence properties, researchers can predict the speed, accuracy, and stability of learning processes, allowing for the optimization and fine-tuning of these methods in practical applications, such as in mobile or edge networks where computational resources are limited.
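As one illustration of the kind of guarantee such analysis provides, a textbook result (not specific to our work) for SGD on an $L$-smooth, possibly non-convex loss $f$ with unbiased stochastic gradients of variance at most $\sigma^2$ and step size $\eta \le 1/L$ bounds the average squared gradient norm over $T$ iterations:

$$
\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\,\bigl\|\nabla f(w_t)\bigr\|^2 \;\le\; \frac{2\bigl(f(w_0)-f^\star\bigr)}{\eta T} \;+\; \eta L \sigma^2,
$$

so a horizon-tuned step size $\eta = \Theta(1/\sqrt{T})$ yields the familiar $O(1/\sqrt{T})$ rate. Federated variants must additionally account for multiple local steps, partial client participation, and data heterogeneity, which is where new analysis techniques are needed.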
Development of new resource allocation and network orchestration techniques for distributed/federated learning methods: This research area focuses on designing novel techniques that optimize how resources are allocated across the network to support these learning methods. The goal is to ensure that the learning processes can run smoothly and efficiently, even in challenging conditions, by dynamically managing network resources and coordinating the learning activities across distributed nodes. This is particularly important in applications such as edge computing and Internet of Things (IoT) networks, where multiple devices must work together to achieve a common learning objective.
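A toy instance of such an allocation problem, with assumed numbers: in a synchronous federated round the slowest uploader dictates the round time, and minimizing $\max_k s_k/(b_k e_k)$ subject to a total bandwidth budget $\sum_k b_k = B$ gives the closed-form share $b_k \propto s_k/e_k$, under which all clients finish simultaneously:

```python
import numpy as np

s = np.array([4e6, 8e6, 2e6])   # client upload sizes in bits (assumed)
e = np.array([2.0, 2.0, 1.0])   # spectral efficiencies, bit/s/Hz (assumed)
B = 10e6                        # total bandwidth budget in Hz (assumed)

ratio = s / e                   # each client's "demand" per unit bandwidth
b = B * ratio / ratio.sum()     # min-max-optimal split: b_k proportional to s_k/e_k
finish = s / (b * e)            # per-client upload times (all equal at optimum)
print(np.round(b / 1e6, 2), round(float(finish.max()), 3))
```

Equalizing finish times is optimal here because the objective is a max of terms each decreasing in its own $b_k$: any unequal allocation could shift bandwidth from an early finisher to the straggler and reduce the round time.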
Cloud/Edge Computing: This involves optimizing how tasks are allocated and managed in cloud and edge computing environments, with a focus on resource allocation, load balancing, and economic considerations. These optimizations are essential for handling the massive data and computational demands of modern wireless networks.
Spreading Processes over Networks: This area explores the detection and mitigation of spreading phenomena, such as viruses or anomalies, over networks. It includes designing robust network structures that can withstand cascading failures, which is vital for maintaining the integrity of interconnected systems.
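A concrete handle on such processes is the well-known mean-field threshold for SIS epidemics on a graph: the infection dies out when the effective spreading rate $\beta/\delta$ is below $1/\lambda_{\max}(A)$, the inverse spectral radius of the adjacency matrix. The 6-node ring below is an illustrative topology, not real data:

```python
import numpy as np

# Build the adjacency matrix of a 6-node ring.
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

# Mean-field SIS epidemic threshold: beta/delta < 1/lambda_max(A).
lam_max = np.max(np.linalg.eigvalsh(A))
print(round(lam_max, 3), round(1 / lam_max, 3))  # ring: lam_max = 2
```

Designing robust topologies can then be phrased as shaping $\lambda_{\max}$, e.g., removing edges or nodes to push the threshold above the spreading rate.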
Vehicular Ad-hoc Networks (VANETs): Research here focuses on optimizing the performance of vehicular networks, particularly in how resources are allocated and tasks are processed within vehicular clouds. Economic analysis plays a significant role in understanding and improving the efficiency and viability of these networks.
Task Scheduling and Resource Management over Dynamic Networks: This involves creating algorithms for effectively scheduling tasks and managing resources in highly dynamic environments, such as vehicular networks and mobile crowdsensing networks. The goal is to ensure that these networks operate efficiently, even under varying conditions and with limited resources.
Efficient Information Spreading and Interference Management: This area focuses on strategies for disseminating information across networks in a way that maximizes reach and impact while minimizing interference. This is particularly important in densely connected or contested environments, such as urban wireless networks.
Distributed Computation over Wireless Fog and Edge Networks: Research here explores how to distribute computational tasks across a network in a way that optimizes performance while reducing latency and energy consumption. This is critical for applications that require real-time processing and decision-making, such as autonomous vehicles or smart cities.
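The core trade-off can be shown with a back-of-envelope offload-or-not rule under assumed parameters: run a task locally unless uploading it and computing at the edge is faster end to end:

```python
# Hypothetical single-task placement rule: compare local execution time
# against upload time plus edge execution time.
def place_task(cycles, bits, f_local, f_edge, rate):
    t_local = cycles / f_local                 # local execution time (s)
    t_edge = bits / rate + cycles / f_edge     # upload + edge execution (s)
    return "edge" if t_edge < t_local else "local"

# Compute-heavy task with a small 1 Mb payload: offloading wins.
print(place_task(2e9, 1e6, f_local=1e9, f_edge=10e9, rate=5e6))
# Data-heavy task over a slow 1 Mb/s link: the upload dominates, keep it local.
print(place_task(1e8, 1e7, f_local=1e9, f_edge=10e9, rate=1e6))
```

Real formulations add energy costs, queueing at the edge, and coupling across tasks, which is what turns this one-line rule into a genuine optimization problem.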
Reinforcement Learning-Based Control and Optimization: This area leverages reinforcement learning to develop adaptive control systems for wireless networks. These systems can learn and evolve over time, optimizing network performance dynamically in response to changing conditions.
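A minimal sketch of such adaptive control, with an assumed toy setup: an agent picks one of two channels each slot, the channel success probabilities are unknown to it, and it learns from reward feedback alone via a stateless (bandit-style) Q-learning update:

```python
import numpy as np

rng = np.random.default_rng(1)

p_success = [0.3, 0.8]           # unknown to the agent (assumed values)
Q = np.zeros(2)                  # value estimate per channel
eps, alpha = 0.1, 0.1            # exploration rate, learning rate

for _ in range(5000):
    # Epsilon-greedy: mostly exploit the current best, sometimes explore.
    a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q))
    r = 1.0 if rng.random() < p_success[a] else 0.0
    Q[a] += alpha * (r - Q[a])   # stateless Q-learning update

print(int(np.argmax(Q)))         # the agent settles on the better channel
```

Full network-control problems replace the two arms with states, actions, and delayed rewards, but the learn-by-interaction loop is the same.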
Privacy-Preserving and Decentralized Machine Learning: This research focuses on developing machine learning methods that protect user privacy and operate effectively in decentralized environments, such as edge computing. This is increasingly important as more data is generated and processed at the network’s edge.
Application of Advanced AI Techniques: Here, the focus is on using advanced AI techniques, like graph neural networks, to tackle complex network optimization problems, such as task scheduling and resource management. These methods offer powerful tools for managing the intricate dependencies and dynamics of modern wireless networks.
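To make the graph-neural-network idea concrete, here is a single GCN-style message-passing layer in plain NumPy (a structural sketch with random, untrained weights, not one of our models): each node, which could represent a task or a network element, aggregates neighbor features through the symmetrically normalized adjacency and applies a shared linear map with a ReLU:

```python
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # toy 4-node topology (assumed)
X = rng.normal(size=(4, 3))                 # per-node input features (assumed)
W = rng.normal(size=(3, 2))                 # layer weights (untrained)

A_hat = A + np.eye(4)                       # add self-loops
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]

H = np.maximum(A_norm @ X @ W, 0.0)         # one propagation step + ReLU
print(H.shape)                              # (4, 2) node embeddings
```

Because the same weights are shared across all nodes, the layer generalizes across graphs of different sizes, which is what makes GNNs attractive for scheduling and resource-management problems on changing topologies.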