DESIGN OF HIGH-PERFORMANCE RECEIVERS FOR NEXT GENERATION WIRELESS COMMUNICATION SYSTEMS
The objective of our research is to explore the relationship between channel quality and detection performance in multi-input multi-output (MIMO) systems and to develop high-performance receivers that improve channel quality within reasonable complexity. Wireless communications have become a crucial part of our daily life. MIMO technology greatly improves the spectral efficiency and reliability of wireless communication systems. As the demand for spectral efficiency keeps increasing, large MIMO has been proposed for next generation wireless systems, where tens to hundreds of antennas are deployed at one or both ends of the communication link. In such cases, it is critical to design high-performance detectors with affordable complexity.
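For reference, the standard narrowband MIMO model underlying this discussion, stated in our own notation rather than quoted from the text: Nt symbols are sent over a flat-fading channel and observed at Nr antennas, and the maximum likelihood (ML) detector searches the entire constellation, which is what makes lower-complexity alternatives attractive.

```latex
\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{n}, \qquad
\mathbf{H} \in \mathbb{C}^{N_r \times N_t}, \quad
\mathbf{n} \sim \mathcal{CN}\!\left(\mathbf{0}, \sigma^2 \mathbf{I}_{N_r}\right)

% ML detection: exhaustive search over the constellation \mathcal{A},
% with complexity exponential in N_t
\hat{\mathbf{x}}_{\mathrm{ML}} =
  \operatorname*{arg\,min}_{\mathbf{x} \in \mathcal{A}^{N_t}}
  \left\lVert \mathbf{y} - \mathbf{H}\mathbf{x} \right\rVert^{2}
```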
In our research, we first show, for a fixed number of transmit antennas (Nt), that if the number of receive antennas (Nr) exceeds a certain bound, the channel is of sufficiently "good" quality that linear detectors collect the same diversity as the maximum likelihood detector in practice. When Nr is close to Nt, as in heavily loaded multiuser (MU) MIMO systems, lattice reduction (LR) algorithms can be used to enhance channel quality and system performance. To understand the theoretical limits of LR-aided detectors, we derive their capacity in MIMO systems. We also investigate, through simulations, the effect of spatial correlation on their complexity. For MU MIMO uplinks where users employ the Alamouti code, we develop LR-aided detectors that exploit the symmetric structure of the equivalent channel. For MU MIMO downlinks, we design LR-aided transceivers that minimize the sum of the mean-squared errors. The effectiveness of our proposed receivers is verified by extensive simulations.
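To make the LR-aided detection idea concrete, below is a minimal textbook sketch (not the code used in our work): the LLL algorithm finds a unimodular matrix T such that H_red = H T is better conditioned, a zero-forcing equalizer operates in the reduced domain, and the estimate is mapped back. A real-valued channel model and symbols already shifted and scaled to integers are assumed; all function names are ours. A detector for the complex model would first apply the usual real-valued decomposition of y, H, and x.

```python
import numpy as np

def gram_schmidt(B):
    """Classical Gram-Schmidt on the columns of B; returns the orthogonal
    basis Q and coefficients mu, where mu[i, j] = <b_i, q_j> / <q_j, q_j>."""
    n = B.shape[1]
    Q = B.astype(float).copy()
    mu = np.zeros((n, n))
    for i in range(n):
        for j in range(i):
            mu[i, j] = B[:, i] @ Q[:, j] / (Q[:, j] @ Q[:, j])
            Q[:, i] -= mu[i, j] * Q[:, j]
    return Q, mu

def lll_reduce(H, delta=0.75):
    """LLL-reduce the columns of H; returns (H_red, T) with H_red = H @ T and
    T unimodular. Textbook version: Gram-Schmidt is recomputed each step for
    clarity rather than speed."""
    B = H.astype(float).copy()
    n = B.shape[1]
    T = np.eye(n, dtype=int)
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):          # size-reduce column k
            _, mu = gram_schmidt(B)
            q = int(np.rint(mu[k, j]))
            if q != 0:
                B[:, k] -= q * B[:, j]
                T[:, k] -= q * T[:, j]
        Q, mu = gram_schmidt(B)
        lovasz_rhs = (delta - mu[k, k - 1] ** 2) * (Q[:, k - 1] @ Q[:, k - 1])
        if Q[:, k] @ Q[:, k] >= lovasz_rhs:     # Lovász condition holds
            k += 1
        else:                                   # swap columns and step back
            B[:, [k - 1, k]] = B[:, [k, k - 1]]
            T[:, [k - 1, k]] = T[:, [k, k - 1]]
            k = max(k - 1, 1)
    return B, T

def lr_zf_detect(y, H):
    """LR-aided zero-forcing: equalize against the reduced basis, round to
    the integer lattice, then map back via x = T z (since H x = (H T) z)."""
    H_red, T = lll_reduce(H)
    z_hat = np.rint(np.linalg.pinv(H_red) @ y)
    return T @ z_hat
```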
MACHINE LEARNING TECHNIQUES AND DATA MINING FOR COMMUNICATION NETWORKS
This line of research consists of two parts. The objective of the first part is to improve TCP congestion control (CC) with machine intelligence. In a TCP/IP network, the TCP CC scheme is key to ensuring efficient and fair sharing of network resources among users. Traditionally, TCP CC schemes have been designed by hard-wiring predefined actions to specific feedback signals from the network. However, as networks become more complex and dynamic, it becomes harder to design the optimal feedback-to-action mapping. In this work, we design two learning-based TCP CC schemes for wired networks with under-buffered bottleneck links: a loss predictor (LP) based scheme (LP-TCP) and a reinforcement learning (RL) based scheme (RL-TCP). Compared to the existing NewReno and Q-learning based TCP, LP-TCP and RL-TCP both achieve a better tradeoff between throughput and delay under various simulated network scenarios.
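As a hedged illustration of the RL-TCP idea only (the state space, actions, and reward in our actual design differ in detail, and all names here are hypothetical), a tabular Q-learning agent can map a discretized congestion state to a window adjustment and learn from a reward that trades throughput against delay:

```python
import math
import random
from collections import defaultdict

ACTIONS = [-1, 0, 1, 3]  # hypothetical cwnd adjustments, in packets

class QAgent:
    """Sketch of a tabular Q-learning controller in the spirit of RL-TCP."""
    def __init__(self, alpha=0.3, gamma=0.9, eps=0.1):
        self.q = defaultdict(float)          # Q[(state, action)] -> value
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state):
        if random.random() < self.eps:       # epsilon-greedy exploration
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def learn(self, s, a, r, s_next):
        best_next = max(self.q[(s_next, a2)] for a2 in ACTIONS)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best_next - self.q[(s, a)])

def reward(throughput, rtt, base_rtt):
    """One common utility choice: reward throughput, penalize queueing delay."""
    return math.log(max(throughput, 1e-9)) - math.log(rtt / base_rtt)
```

On each ACK (or once per RTT), the sender would discretize measurements such as the ACK inter-arrival time and the ratio rtt/base_rtt into `state`, call `act`, apply the cwnd change, and call `learn` with the observed reward.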
The objective of the second part is to study human usage patterns of voice and data networks through statistical modeling. Call detail records (CDRs) collected in telecom networks have been well studied to reveal human behaviors such as voice service usage, contact regularities, and mobility patterns. With the advances in big data technology, carriers can now collect, store, and analyze more data, such as records of user mobile web activities, at scales much larger than CDRs. In this work, we study 14 features extracted from both call and web records through statistical modeling. We decompose the model-fitting task into two independent steps: selecting the distribution and estimating its parameters. A classifying engine is proposed to efficiently determine which of a set of candidate distributions best describes a dataset, after which parameter estimation is performed for that distribution only. Our approach eliminates the need to estimate parameters for every candidate distribution and reduces complexity significantly.
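A minimal sketch of the classify-then-fit flow, assuming SciPy for the candidate families and a pre-trained scikit-learn classifier `clf` (the candidate set and feature set below are illustrative choices of ours, not necessarily those of the study):

```python
import numpy as np
from scipy import stats

# Illustrative candidate families for positive-valued features
# such as call durations or byte counts.
CANDIDATES = {"lognorm": stats.lognorm, "gamma": stats.gamma,
              "weibull": stats.weibull_min, "pareto": stats.pareto}

def summary_features(x):
    """Cheap moment-based features fed to the classifying engine
    (x assumed positive so the log transform is valid)."""
    logx = np.log(x)
    return np.array([stats.skew(x), stats.kurtosis(x),
                     np.std(x) / np.mean(x),          # coefficient of variation
                     stats.skew(logx), stats.kurtosis(logx)])

def classify_then_fit(x, clf):
    """Step 1: the classifier picks one family from CANDIDATES.
    Step 2: maximum-likelihood parameter estimation for that family only."""
    family = clf.predict(summary_features(x).reshape(1, -1))[0]
    params = CANDIDATES[family].fit(x)
    return family, params
```

Because each SciPy `.fit` call runs a maximum-likelihood optimization for a single family, skipping the fits for the rejected candidates is where the complexity saving comes from.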