The goal of a quantum network is to generate and distribute quantum information and resources to end nodes in a robust manner. Throughout our investigation into quantum networks, we have encountered a diverse set of intriguing challenges, ranging from fundamental concerns, such as loss and noise during transmission and storage, to broader architectural questions. We have tackled these challenges by designing performance benchmarks, architectures, and resource allocation policies in several recent research projects, listed below.
The capacity of a quantum switch is the set of request-rate vectors that the switch can stably support with finite request-serving latencies. In our work, we have characterized the capacity regions of the switch for different quantum resource distribution architectures. We have adapted the rich set of classical switch scheduling policies to the quantum setting, and introduced a max-weight scheduling policy for quantum systems that achieves this capacity under noise. However, unlike in the classical setting, quantum resources decohere, which limits how long a distributed resource remains usable. We have comprehensively modeled this unique characteristic into the capacity computation. Additionally, we have introduced a yield function to capture the effectiveness of the noise-correction process. The conclusions of this work yield useful guidelines for subsequent quantum switch designs. For more details, see INFOCOM (2023).
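The max-weight idea can be illustrated with a toy simulation: each slot, the switch serves the backlogged user pair with the largest queue among the pairs whose link-level entanglements happen to be available. The two-pair topology, Bernoulli arrivals, and per-link success probability below are illustrative assumptions, not the model from the paper.

```python
import random

def max_weight_schedule(backlogs, feasible):
    """Serve the feasible user pair with the largest request backlog.
    backlogs: pair -> outstanding requests; feasible: pairs whose
    link-level entanglements are all available this slot."""
    candidates = [p for p in feasible if backlogs.get(p, 0) > 0]
    return max(candidates, key=lambda p: backlogs[p]) if candidates else None

def simulate(arrival_rates, link_success, slots=10_000, seed=0):
    """Toy switch: Bernoulli request arrivals per pair; each pair needs two
    independent link-level entanglements, each succeeding w.p. link_success."""
    rng = random.Random(seed)
    backlogs = {p: 0 for p in arrival_rates}
    for _ in range(slots):
        for p, lam in arrival_rates.items():
            if rng.random() < lam:
                backlogs[p] += 1
        feasible = {p for p in arrival_rates
                    if rng.random() < link_success and rng.random() < link_success}
        served = max_weight_schedule(backlogs, feasible)
        if served is not None:
            backlogs[served] -= 1
    return backlogs

# Inside the capacity region, max-weight keeps the backlogs bounded.
final = simulate({("A", "B"): 0.2, ("C", "D"): 0.2}, link_success=0.8)
```

With a total arrival rate well inside the service capacity, the backlogs stay small over the whole run, which is exactly the stability notion behind the capacity region.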
Consider a scenario in which the demand for quantum resources varies with time. In periods of low demand, the network serves requests and uses any excess capacity to create and store new resources in quantum storage servers. Later, in periods of high demand, requests can be served either through the network or from the storage servers. In our work, we have introduced, for the first time, a quantum storage network to serve user demands for quantum resources. Please visit INFOCOM (2023) for more details.
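The bank-surplus/draw-on-peak dynamic above can be sketched as a small fluid model. The per-period generation capacity, storage size, and decoherence survival factor below are made-up parameters for illustration, not values from the paper.

```python
def serve_with_storage(demands, capacity, storage_size, survival=1.0):
    """Toy fluid model of a quantum storage network: each period the network
    generates up to `capacity` resources; surplus is banked in storage
    (bounded by storage_size; a (1 - survival) fraction decoheres each
    period) and deficits are drawn from storage. Returns unmet demand."""
    stored, unmet = 0.0, 0.0
    for d in demands:
        stored *= survival                       # decoherence loss per period
        if d <= capacity:                        # low demand: bank the surplus
            stored = min(storage_size, stored + (capacity - d))
        else:                                    # peak demand: draw from storage
            draw = min(stored, d - capacity)
            stored -= draw
            unmet += (d - capacity) - draw
    return unmet

# Five off-peak periods bank enough surplus to cover five peak periods.
unmet = serve_with_storage([0.5] * 5 + [1.5] * 5, capacity=1.0,
                           storage_size=10.0, survival=1.0)
```

Setting `survival` below 1 shows the cost of decoherence: some banked surplus is lost before the peak arrives, so demand that a lossless store would cover goes unmet.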
Satellite-based quantum communication (SQC) offers a favorable alternative to direct terrestrial communication via optical fibers due to significantly lower quantum information loss in free space. While much of the state of the art in SQC focuses on single-satellite setups, network setups present their own challenges, including scheduling and resource management. In our work, we have addressed some of these concerns by characterizing the optimal satellite-to-ground-station transmission scheduling policy while accounting for resource constraints at both the satellites and the ground stations. Furthermore, leveraging the dynamics of the free-space channel to our advantage, we have demonstrated a performance improvement for quantum cryptographic applications through careful selection of classical post-processing strategies. See NetSCIQCOM (2022) and arXiv (2023) for more details.
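To make the scheduling problem concrete, here is a simplified greedy heuristic: in each pass window, the satellite transmits to the visible ground station with the best expected delivery that still has free quantum memory. This is only a sketch of the problem setup; it is not the optimal policy characterized in the paper, and the station names, rates, and memory counts are invented.

```python
def greedy_transmission_schedule(windows, gs_memory):
    """For each pass window (time, {ground_station: expected delivery rate}),
    transmit to the visible ground station with the highest expected delivery
    that still has free quantum memory. A greedy heuristic, not the optimal
    policy derived in the paper."""
    free = dict(gs_memory)
    schedule = []
    for t, rates in windows:
        choices = [(r, gs) for gs, r in rates.items() if free.get(gs, 0) > 0]
        if choices:
            _, gs = max(choices)                 # best rate among stations w/ memory
            free[gs] -= 1
            schedule.append((t, gs))
        else:
            schedule.append((t, None))           # no station can accept this window
    return schedule

plan = greedy_transmission_schedule(
    [(0, {"GS1": 0.9, "GS2": 0.5}), (1, {"GS1": 0.8, "GS2": 0.6})],
    gs_memory={"GS1": 1, "GS2": 1})
```

Note how the memory constraint couples the windows: GS1 has the better channel in both windows, but once its single memory slot is filled, the second transmission must fall back to GS2.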
My research on classical networks has involved designing diverse algorithms, both distributed and centralized, catering to the distinct performance requirements of cloud, IoT, and caching networks. Some contributions from this line of work include the following.
How do we allocate resources to users in a geographically dispersed network? Conventional resource allocation policies in data centers, IoT, and cloud networks prioritize distributing user requests evenly among computing resources, often neglecting the spatial characteristics of users and resources due to their complexity. We have developed theoretical frameworks for spatial resource allocation that significantly reduce the implementation cost of moving a request to or from its allocated resource. We have further derived closed-form expressions for the expected implementation cost under specific user-resource spatial distributions. For more details, see TOMPECS (2022).
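A small sketch makes the trade-off concrete: a geometry-aware policy that assigns each user to the closest free resource versus a purely load-balancing policy that ignores locations. The point layout and the greedy nearest-free rule are illustrative assumptions, not the frameworks or distributions analyzed in the paper.

```python
import math
import random

def assignment_cost(users, resources, policy, seed=1):
    """Total Euclidean distance between users and their assigned resources.
    policy='nearest' assigns each user to the closest free resource;
    policy='random' ignores geometry (pure load balancing)."""
    free = list(resources)
    rng = random.Random(seed)
    cost = 0.0
    for u in users:
        r = (min(free, key=lambda x: math.dist(u, x)) if policy == "nearest"
             else rng.choice(free))
        free.remove(r)                           # each resource serves one user
        cost += math.dist(u, r)
    return cost

users = [(0, 0), (10, 0)]
resources = [(1, 0), (9, 0)]
geo_cost = assignment_cost(users, resources, "nearest")   # 1 + 1 = 2
```

Both policies balance load perfectly (one user per resource), but only the geometry-aware one keeps the implementation cost, here modeled as distance, low.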
Caching systems have long been crucial for improving the performance of numerous network and web-based online applications. Despite the development of many caching policies, the question of what the fundamental limit of caching is remains unanswered. To address this challenge, we have proposed a simple Hazard Rate (HR) based proactive cache resource allocation rule to compute an upper bound on the hit rate of non-anticipative caching policies, under weak statistical assumptions on the content request process (CRP). The proposed HR bound is tighter than state-of-the-art upper bounds, such as those produced by Bélády's algorithm, for a number of CRPs. Please see TOMPECS (2022) for more details.
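Since the HR bound is compared against Bélády's algorithm, a compact offline implementation of that classical baseline is useful for benchmarking; the farthest-in-future eviction rule below is the standard algorithm, while the short trace is purely illustrative.

```python
def belady_hits(requests, cache_size):
    """Hit count of offline Bélády (farthest-in-future) eviction on a trace,
    the classical upper-bound baseline for demand-driven caching policies."""
    n = len(requests)
    next_use = [float("inf")] * n
    last_seen = {}
    for i in range(n - 1, -1, -1):               # next occurrence of each item
        next_use[i] = last_seen.get(requests[i], float("inf"))
        last_seen[requests[i]] = i
    cache, hits = {}, 0                          # item -> its next-use position
    for i, x in enumerate(requests):
        if x in cache:
            hits += 1
        elif len(cache) >= cache_size:
            evict = max(cache, key=cache.get)    # evict farthest-in-future item
            del cache[evict]
        cache[x] = next_use[i]
    return hits
```

Because Bélády needs the entire future trace, it is not a deployable policy; its hit count serves only as a reference that online, non-anticipative policies, and bounds such as the HR bound, are measured against.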