PhD Research Assistant, Department of ECE, The Ohio State University
Email: wang.7691 "at" osu "dot" edu
2015 Neil Avenue, Columbus, OH 43210, USA

**About Me**

I will join Facebook as a research scientist in 2019.

I am currently a 4th-year Ph.D. student in the Department of Electrical and Computer Engineering at The Ohio State University, advised by Prof. Ness Shroff. Before that, I spent two wonderful years at Shanghai Jiaotong University under the supervision of Prof. Hui Liu and Prof. Xinbing Wang. I am interested in solving challenging problems via non-trivial theoretical developments. I received the Kenneth C. Sevcik Outstanding Paper Award at ACM SIGMETRICS 2017 and the Best Paper Award at IEEE ICNC 2015.

Check out my Google Scholar if you are interested.

My research lies in the broad areas of applied probability, theoretical optimization, and information theory, with applications to algorithm design in machine learning, distributed network control, and game theory.

**Optimization and Applications**

I am interested in the theoretical analysis and design of optimization algorithms, especially the Alternating Direction Method of Multipliers (ADMM), and their applications in machine learning and distributed network optimization.
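As a small illustration of how ADMM works (a textbook-style sketch, not an algorithm from my papers), here is ADMM applied to the lasso problem, alternating a ridge-like x-update, a soft-thresholding z-update, and a dual update:

```python
import numpy as np

def soft_threshold(v, k):
    # Elementwise soft-thresholding: the proximal operator of k * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 via ADMM on the
    split min_{x,z} 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)  # u: scaled dual variable
    M = A.T @ A + rho * np.eye(n)                      # x-update linear system
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + rho * (z - u))    # x-minimization step
        z = soft_threshold(x + u, lam / rho)           # z-minimization (prox) step
        u = u + x - z                                  # dual-variable update
    return z
```

For example, with `A` the identity the lasso solution is just the soft-thresholded data, which the iteration recovers quickly.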

- Faster algorithm for linear programming:

- Distributed network optimization: Network utility maximization (NUM) is at the core of distributed network control and underlies many classic mechanisms such as TCP/AQM. A long-standing open question in this area is whether one can design an algorithm that simultaneously achieves fast convergence, optimal utility, low latency, and low complexity. Our work answers this question by designing a completely new algorithm. [paper]
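For background on how price-based NUM algorithms operate (a classical dual-gradient sketch on a made-up two-link topology, not the new algorithm from the paper), links charge congestion prices, each flow picks the rate maximizing its utility minus its path price, and prices adjust toward capacity:

```python
import numpy as np

# Toy topology: 2 links, 3 flows. R[l, s] = 1 iff flow s uses link l.
R = np.array([[1, 1, 0],
              [1, 0, 1]], dtype=float)
c = np.array([1.0, 2.0])           # link capacities

lam = np.ones(2)                   # link prices (dual variables)
step = 0.05
for _ in range(5000):
    q = R.T @ lam                          # total price seen on each flow's path
    x = 1.0 / np.maximum(q, 1e-9)          # argmax_x log(x_s) - q_s * x_s
    lam = np.maximum(lam + step * (R @ x - c), 0.0)  # gradient step on prices
```

With log utilities the dual is differentiable, so this converges (for a small enough step) to the rates that maximize the sum of log utilities subject to the capacity constraints.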

**Coding Distributed Machine Learning**

In large-scale distributed machine learning systems, the performance gain from training across multiple machines is often limited by "stragglers": many machines wait for the single slowest machine to finish a given phase of computation. Recently, coding techniques have been shown to provide an effective way to deal with stragglers in distributed computation tasks. Our work focuses on the following key aspects:

- Fundamental limits of coded linear transform:

- Data sparsity in coded computation: In modern machine learning, most data sets are extremely large-scale and sparse. On such data sets, existing coded computation schemes create a significant overhead, the "coding straggler." To address this problem, we design a new coded computation strategy based on a low-density generator matrix. Moreover, this is the first code to simultaneously achieve the optimum recovery threshold, low density, and decoding time linear in the number of nonzero elements of the final output. [paper]
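To illustrate the basic idea of coded computation (a toy one-parity code over two row blocks, not the low-density construction from our paper), three workers can tolerate any one straggler when computing A·x, since any two results determine the third:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
x = rng.standard_normal(3)

# Split A into two row blocks and add one parity block: any 2 of the
# 3 workers' results suffice to recover A @ x (recovery threshold 2).
A1, A2 = A[:2], A[2:]
tasks = {"w1": A1, "w2": A2, "w3": A1 + A2}

# Suppose worker "w2" straggles; decode from the other two results.
y1 = tasks["w1"] @ x
y3 = tasks["w3"] @ x
y2 = y3 - y1                       # recover the missing block by subtraction
Ax = np.concatenate([y1, y2])
```

Note that if the blocks are sparse, the parity block `A1 + A2` is denser than either input, which hints at the "coding straggler" overhead that motivates low-density designs.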

**Security Game Theory**

Recently, computational game theory has been widely applied to cyber-physical security, including by the US Coast Guard, the Federal Air Marshals Service (FAMS), and the Transportation Security Administration, and even in wildlife protection. For example, the IRIS system used by the Federal Air Marshals Service employs a Stackelberg game model to predict adversarial behavior and allocate air marshals to protect flights. Our work provides a unified theoretical framework for designing and analyzing security game models, and gives a more efficient (polynomial-time) algorithm for computing the equilibrium point. [paper1][paper2]
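As a toy illustration of the Stackelberg security model (a two-target, zero-sum sketch with hypothetical payoffs, not the framework from the papers), the defender commits to coverage probabilities first; at an interior optimum she equalizes the attacker's expected payoffs across targets, so the attacker gains nothing by switching:

```python
# Attacker payoffs for a hypothetical two-target game (zero-sum here:
# the attacker's gain is the defender's loss).
R = [5.0, 3.0]    # attacker reward if target i is attacked while unprotected
P = [-1.0, 0.0]   # attacker payoff if target i is attacked while protected

def attacker_payoff(i, cov):
    # Expected payoff of attacking target i given coverage probabilities cov.
    return cov[i] * P[i] + (1 - cov[i]) * R[i]

# One defender resource: cover target 0 w.p. c and target 1 w.p. 1 - c.
# Equalizing c*P0 + (1-c)*R0 = (1-c)*P1 + c*R1 is linear in c:
c = (R[0] - P[1]) / ((R[0] - P[0]) + (R[1] - P[1]))
cov = [c, 1 - c]
value = attacker_payoff(0, cov)    # equals attacker_payoff(1, cov)
```

With these numbers the defender covers target 0 with probability 5/9, and the attacker is indifferent between the two targets, which is what makes the commitment optimal in the zero-sum case.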