Ongoing Projects
Development of an Agentic AI-Driven Context-Aware Autonomous Network Management System
Principal Investigator, Sejong Science Fellowship (Excellent Research, Domestic Track), National Research Foundation of Korea, KRW 100M/year, 2026.03. - 2031.02.
Completed Projects
The major research topics I am currently focusing on can be found below.
Monitoring and Management for Network Systems
Scalable and Autonomous In-Band Network Telemetry
In-band network telemetry (INT) provides fine-grained, real-time visibility into network states, but its inherent design of embedding telemetry data into packet headers introduces substantial overhead, making it challenging to sustain accurate and efficient monitoring in dynamic environments. Rather than addressing this problem in isolation, our line of work aims to fundamentally rethink INT as a self-adaptive system through a series of efforts on frequency-aware measurement, freshness-aware data collection, and intent-driven control. Through these studies, we progressively move beyond static, heuristic-based approaches toward a unified vision of closed-loop network monitoring optimization, in which monitoring is continuously refined based on observed network conditions and high-level objectives. The ultimate goal is to build an autonomous INT system that can intelligently decide what information to collect, how much to collect, and when to collect it, while continuously correcting its behavior through feedback. Such a system enables a principled balance between monitoring accuracy and overhead, maintains the relevance of collected data over time, and adapts seamlessly to evolving network dynamics and operator intents, thereby achieving robust and efficient network monitoring without manual intervention.
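To illustrate the closed-loop idea, the minimal sketch below shows one way a controller could adapt an INT sampling rate from feedback: sample more aggressively when the monitored metric is volatile, and back off when it is stable. This is a hypothetical toy, not the FAT-INT or Auto-INT algorithm; the class name, the multiplicative update, and the threshold are all illustrative assumptions.

```python
class AdaptiveSampler:
    """Toy feedback loop: adjust a per-flow INT sampling rate based on how
    quickly the observed metric (e.g., queue depth) is changing."""

    def __init__(self, rate=0.5, min_rate=0.01, max_rate=1.0, threshold=0.05):
        self.rate = rate            # current fraction of packets carrying telemetry
        self.min_rate = min_rate    # floor so monitoring never fully stops
        self.max_rate = max_rate    # ceiling (sample every packet)
        self.threshold = threshold  # tolerated relative change between samples

    def update(self, prev_value, new_value):
        """One feedback step: raise the rate when the metric changes quickly
        (accuracy matters), halve it when the metric is stable (save overhead)."""
        change = abs(new_value - prev_value) / max(abs(prev_value), 1e-9)
        if change > self.threshold:
            self.rate = min(self.rate * 2.0, self.max_rate)  # volatile: sample more
        else:
            self.rate = max(self.rate * 0.5, self.min_rate)  # stable: back off
        return self.rate
```

In a real system the update rule would be driven by the operator's intent (accuracy target vs. overhead budget) rather than a fixed threshold, but the structure of the loop, observe, compare, correct, is the same.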
Related Publications
FreshINT: Freshness-aware Early Reporting in In-band Network Telemetry Systems (IEEE INFOCOM'26 Workshop)
FAT-INT: Frequency-Aware and Item-Wise In-band Network Telemetry for Low-Overhead and Accurate Measurement (ACM CoNEXT'25)
Auto-INT: Intent-Aware and LLM-Empowered Automatic Sampling Optimization for In-band Network Telemetry (ACM CoNEXT'25 Poster)
Intelligence Processing in Network Systems
Low-overhead and Resilient In-Network Intelligence
In-network inference over programmable data planes enables fast, low-overhead execution of deep neural networks directly within the network, but its practical deployment raises a fundamental challenge: sustaining efficiency and reliability under dynamic environments. Existing approaches often rely on distributing sub-models across multiple network devices to mitigate computation and memory constraints, which can increase network traffic due to long forwarding paths and the transmission of intermediate data, while also remaining vulnerable to performance degradation when data distributions change over time. Rather than addressing these challenges in isolation, our line of work aims to establish a unified framework for traffic-efficient and adaptive in-network intelligence through a series of efforts on adaptive early-exit inference, feature-importance-aware model optimization, and in-network drift detection. Through these studies, we move toward a holistic vision in which inference is continuously optimized and validated within the network. The ultimate goal is to build an autonomous in-network intelligence system that can intelligently determine when to terminate computation early, how to support multi-tenancy, and how to detect and respond to distribution shifts in real time, thereby achieving a principled balance between inference accuracy, network overhead, and system robustness in highly dynamic network environments.
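The early-exit idea can be sketched in a few lines: run the model stage by stage, attach a lightweight exit head to each stage, and stop as soon as an exit's top-class confidence clears a threshold, so easy inputs skip the later (and, across devices, more traffic-expensive) stages. This is a generic illustration under assumed toy stages and exit heads, not the TINIEE mechanism or its data-plane implementation.

```python
def early_exit_infer(x, stages, exits, threshold=0.9):
    """Run backbone stages in order; return at the first exit head whose
    top-class confidence clears the threshold, saving the remaining stages.
    Returns (predicted_class, stages_used)."""
    h = x
    used = 0
    probs = None
    for stage, exit_head in zip(stages, exits):
        h = stage(h)                # forward through one backbone stage
        used += 1
        probs = exit_head(h)        # cheap classifier on the intermediate feature
        if max(probs) >= threshold:
            break                   # confident enough: terminate early
    return probs.index(max(probs)), used

# Toy usage: each "stage" doubles a scalar feature, each "exit head" maps it
# to a 2-class distribution. Real stages would be neural-network layers.
stages = [lambda v: v * 2, lambda v: v * 2]
exits = [lambda v: [v / (v + 1), 1 / (v + 1)]] * 2
```

Lowering the threshold trades accuracy for earlier exits; in a distributed deployment, exiting one stage earlier also means one fewer hop carrying intermediate data.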
Related Publications
LUCID: Lightweight Unsupervised In-Network Drift Detection and Selective Update Framework (IEEE SECON'26)
MALOI: Multi-Task-Aware Low-Overhead In-Network Inference using Programmable Switch (IEEE ComMag'25)
TINIEE: Traffic-Aware Adaptive In-Network Intelligence via Early-Exit Strategy (IEEE SECON'24, under review at IEEE ToN)
Data Aggregation in Distributed Systems
Orchestration for Distributed In-Network Aggregators
Distributed machine learning offers scalability by distributing computation across multiple nodes, but often faces communication bottlenecks due to the need for frequent gradient exchanges. In-network aggregation, enabled by programmable data planes, has emerged as a promising solution by allowing gradient aggregation to occur directly within the network—reducing traffic and accelerating training. However, deploying these aggregation functions across the entire network is non-trivial, as programmable switches have limited resources. This project investigates how to strategically deploy in-network aggregation in multi-tenant environments, aiming to minimize overall network traffic while respecting hardware constraints. Our goal is to make distributed machine learning not only faster, but also more efficient and scalable within real-world network infrastructures.
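As a minimal sketch of the placement problem, the greedy heuristic below assigns each training job's aggregator to the candidate switch that saves the most traffic, skipping switches whose register memory is already exhausted. The data layout and the greedy rule are illustrative assumptions, not the optimization formulated in our ICCCN paper.

```python
def place_aggregators(jobs, capacity):
    """Greedy sketch of multi-tenant in-network aggregation placement.

    jobs:     list of (job_id, candidates), where each candidate is a tuple
              (switch, mem_needed, traffic_saved) estimated from the job's
              worker locations and gradient volume.
    capacity: dict mapping switch -> available memory units (e.g., register
              slots on a programmable switch).
    Returns a dict mapping job_id -> chosen switch (jobs that fit nowhere
    are left out and would fall back to server-side aggregation).
    """
    remaining = dict(capacity)
    placement = {}
    for job_id, candidates in jobs:
        # Try candidates in order of traffic saved, best first.
        for switch, mem, saved in sorted(candidates, key=lambda c: -c[2]):
            if remaining.get(switch, 0) >= mem:
                remaining[switch] -= mem   # reserve switch memory for this tenant
                placement[job_id] = switch
                break
    return placement
```

Because switch memory is scarce, the interesting cases are exactly those where tenants contend for the same switch and a later job must settle for a less traffic-efficient placement, which is what the full optimization trades off globally.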
Related Publications
Traffic- and Multi-Tenancy-Aware In-Network Aggregation Placement for Distributed Machine Learning (IEEE ICCCN'23, under review at IEEE TNSM)