Burchard Building 213
Stevens Institute of Technology
Hoboken, NJ, USA
Email: jli148[AT]stevens[DOT]edu
I am open to new academic and research opportunities—please don’t hesitate to get in touch.
I am currently a Ph.D. candidate in the Department of Electrical & Computer Engineering at Stevens Institute of Technology, where my research focuses on Applied Cryptography. I received my master's degree in Business Intelligence and Technology from Stevens in May 2019. Before coming to Stevens, I received my bachelor's degree in Applied Mathematics from South China University of Technology in 2017.
My research lies in applied cryptography, especially the optimization and implementation of Lattice-based Cryptography. Recently, I have been working on lattice-based ABE and its applications. More broadly, I am interested in privacy-related computing techniques, including Homomorphic Encryption, Multiparty Computation, and related areas.
CS513 Data Mining (Teaching Assistant): 2018 Fall
CPE691 Information Security (Guest Speaker): 2020 Spring, 2024 Fall
CPE695 Applied Machine Learning (Instructor): Since 2023 Fall
Retrieval-Augmented Generation (Co-Lecturer): 2024 Summer Mini Course
Privacy-Preserving Machine Learning (Co-Lecturer, in collaboration with York College): 2025 Summer Mini Course
Research Projects
As deep learning models grow in complexity, executing inference locally on resource-constrained devices such as IoT sensors, drones, and embedded systems has become increasingly impractical. Offloading computation to nearby edge or cloud servers offers an attractive solution, but it also raises critical privacy concerns: the data used for inference often contain sensitive personal or contextual information, and transmitting them to an untrusted server may expose users to data leakage.
To address this problem, I introduced a practical framework for secure deep neural network inference outsourcing [1] that protects input confidentiality while maintaining real-time performance, eliminating the reliance on heavy cryptographic tools such as homomorphic encryption or multi-party computation. The key insight is to separate neural computation into linear and non-linear parts: linear layers (e.g., convolutional and fully connected layers) dominate the computational cost and can be securely outsourced, while non-linear activations remain local. This design leverages the algebraic linearity of neural networks to achieve both efficiency and privacy.
To realize this idea, I developed an interactive Privacy-Preserving Scalar Product (iPPSP) evaluation primitive that enables secure outsourcing of linear computation using lightweight one-time-pad encryption. The protocol ensures confidentiality under standard cryptographic assumptions, requires minimal computation on the client side, and remains compatible with existing deep-learning frameworks and GPU acceleration. The work is validated through extensive experiments on IoT devices such as Raspberry Pi boards and drones. More importantly, because the protocol relies only on linearity, it generalizes naturally to diverse neural architectures such as CNNs, RNNs, and transformers, giving it strong potential to be broadly applicable across edge, IoT, and distributed learning settings.
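To make the linear/non-linear split concrete, below is a minimal sketch of additive one-time-pad masking for a single fully connected layer. It assumes an offline phase in which the client obtains the correction term W·r for a fresh random mask r; it illustrates only the division of labor, not the actual iPPSP protocol, and all names in it are illustrative.

```python
import numpy as np

# Toy illustration of outsourcing a linear layer under additive masking.
# Assumption (not from the paper): the client learns W @ r in an offline phase.
rng = np.random.default_rng(0)

W = rng.standard_normal((256, 784))   # linear-layer weights held by the server
x = rng.standard_normal(784)          # sensitive client input

# --- Offline phase (assumed): client obtains the correction term W @ r ---
r = rng.standard_normal(784)          # one-time mask, used for a single query
Wr = W @ r

# --- Online phase ---
masked_x = x + r                      # client sends only the masked input
server_out = W @ masked_x             # server performs the heavy linear work
y_linear = server_out - Wr            # client removes the mask locally
y = np.maximum(y_linear, 0.0)         # non-linear activation (ReLU) stays local

assert np.allclose(y_linear, W @ x)   # unmasking recovers the true pre-activation
```

The point of the sketch is the division of labor: the server only ever sees x + r, while the client's online cost is a vector addition, a subtraction, and the activation.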
Federated learning (FL) enables multiple participants to collaboratively train machine-learning models without centralizing their raw data, offering a promising foundation for privacy-preserving analytics across organizations and devices. However, while keeping training local protects data confidentiality, it introduces a new vulnerability: integrity violations. Malicious or lazy participants can submit falsified model updates without performing proper training, degrading model quality and undermining trust. This risk is amplified in cross-silo and IoT deployments, where devices operate under heterogeneous conditions and may not be fully trusted. My research addresses this issue by leveraging Trusted Execution Environments (TEEs) to ensure both verifiable training integrity and computational efficiency under a zero-trust assumption. My main contributions are as follows:
A. Integrity Verification for Federated Learning [2]: To ensure the integrity of training participants in federated learning, I take advantage of TEEs to design a sampling-based retraining verification protocol. The protocol randomly selects a subset of the training rounds to be reproduced inside a TEE, allowing the server to verify whether participants have executed legitimate training on their committed data (a toy sketch of this check appears after the two contributions below). Furthermore, I incorporate the core idea of the secure offloading technique from my previous work: a partial training offloading scheme that allows the secure enclave to offload linear operations to co-located GPUs, protected by lightweight OTP encryption and pseudorandom permutation. This design significantly improves scalability by eliminating the need to perform the entire training process within TEEs. In addition, the proposed framework removes the requirement for all participants to possess TEEs, as only the verifier operates within a trusted environment. This flexibility broadens the applicability of the framework to diverse federated learning settings, making it practical for large-scale and heterogeneous deployments.
B. Accumulator-based Integrity Verification for Federated Learning [3]: While the previous work addresses the integrity concern by combining sampling-based retraining with secure offloading to unleash TEEs, I further advance integrity assurance by eliminating the need for retraining (see the second sketch below). I design a lightweight accumulator that records cryptographic commitments to the intermediate gradients throughout the local training process. Instead of retraining, the verifier checks whether these accumulators are consistent with the submitted local model updates. This commitment-based mechanism significantly reduces computational overhead while maintaining verifiable correctness. By combining TEE-assisted attestation with cryptographic accumulation, the framework achieves efficient, privacy-preserving verification suitable for resource-constrained and sensitive domains such as healthcare.
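As a concrete illustration of contribution A, the toy sketch below shows the sampling-based retraining check: the verifier re-executes a random subset of committed rounds (represented here by their per-round gradients) and compares the results against the participant's commitments. Everything in it is a hypothetical stand-in (the hash-based commit, the trivial train_one_round); in the actual design the re-execution runs inside a TEE, with linear operations offloaded to a co-located GPU under OTP masking.

```python
import hashlib
import random

def commit(update):
    """Commitment to a model update; here simply a SHA-256 hash of its repr."""
    return hashlib.sha256(repr(update).encode()).hexdigest()

def train_one_round(model, gradient, lr=0.01):
    """Deterministic stand-in for one local training round (plain SGD step)."""
    return [w - lr * g for w, g in zip(model, gradient)]

def verify(training_log, sample_size=2, seed=7):
    """Re-execute a random subset of committed rounds and check the results."""
    sampled = random.Random(seed).sample(range(len(training_log)), sample_size)
    for i in sampled:
        entry = training_log[i]
        reproduced = train_one_round(entry["model_in"], entry["gradient"])
        if commit(reproduced) != entry["commit"]:
            return False          # falsified or lazy training round detected
    return True

# Honest participant: every committed round reproduces correctly.
model, log = [0.0, 0.0], []
for step in range(5):
    gradient = [float(step), float(step + 1)]
    new_model = train_one_round(model, gradient)
    log.append({"model_in": model, "gradient": gradient, "commit": commit(new_model)})
    model = new_model

print(verify(log))   # True; any tampered entry would flip this to False
```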
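For contribution B, the following sketch replaces retraining with a gradient accumulator: the participant folds each per-step gradient into a hash-chain commitment and keeps a running gradient sum, and the verifier only checks that the claimed model update is consistent with the accumulated gradients and that the transcript reproduces the committed digest. The SHA-256 hash chain and the plain SGD update rule are illustrative assumptions, not the actual construction in [3].

```python
import hashlib

def fold(digest, gradient):
    """Fold one per-step gradient into a hash-chain accumulator."""
    return hashlib.sha256((digest + repr(gradient)).encode()).hexdigest()

lr = 0.1
model_init = [1.0, -2.0]

# Participant side: local SGD while accumulating gradient commitments.
model, digest, grad_sum = list(model_init), "", [0.0, 0.0]
transcript = [[0.5, 0.2], [0.1, -0.3], [0.4, 0.0]]     # per-step gradients
for g in transcript:
    model = [w - lr * gi for w, gi in zip(model, g)]
    grad_sum = [s + gi for s, gi in zip(grad_sum, g)]
    digest = fold(digest, g)

# Verifier side (inside the TEE): no retraining, only consistency checks.
# 1) The claimed update must equal -lr times the accumulated gradients.
claimed_update = [w - w0 for w, w0 in zip(model, model_init)]
assert all(abs(u + lr * s) < 1e-9 for u, s in zip(claimed_update, grad_sum))

# 2) The committed gradient transcript must reproduce the accumulator digest.
check = ""
for g in transcript:
    check = fold(check, g)
assert check == digest
print("model update is consistent with the accumulated gradients")
```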
In this project, we aim to leverage the more efficient ring-LWE setting to improve the efficiency of multi-authority ABE (MA-ABE), which achieves decentralization. We design an RLWE-based MA-ABE protocol and prove its security under the selective security model. Our analysis shows that the protocol improves efficiency by a factor of $N^2$ compared to other lattice-based MA-ABE schemes with the same functionalities, and preliminary results demonstrate encryption and decryption times of 28.60 and 15.71 seconds, respectively, when $N = 1024$, showing the feasibility of our construction in real-world applications.
We are currently working on the implementation of our protocol and plan to release the code as an open-source library on GitHub.
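For readers unfamiliar with the underlying arithmetic, here is a minimal, self-contained sketch of textbook symmetric-key ring-LWE encryption over $R_q = \mathbb{Z}_q[x]/(x^N + 1)$ using negacyclic polynomial multiplication. The parameters and the scheme itself are illustrative only; this is not the MA-ABE construction of [C.1].

```python
import numpy as np

N, q = 1024, 12289            # toy ring dimension and modulus, not our parameters
rng = np.random.default_rng(1)

def polymul(a, b):
    """Multiply two polynomials modulo (x^N + 1, q) via negacyclic convolution."""
    full = np.convolve(a, b)                   # plain product, degree up to 2N - 2
    res = full[:N].copy()
    res[: len(full) - N] -= full[N:]           # wrap around using x^N = -1
    return res % q

def small_poly():
    """Sample a small (ternary) secret or error polynomial."""
    return rng.integers(-1, 2, N)

s = small_poly()                               # secret key

def encrypt(m_bits):
    a = rng.integers(0, q, N)                  # uniformly random public polynomial
    e = small_poly()                           # small error term
    b = (polymul(a, s) + e + (q // 2) * m_bits) % q   # message encoded in high bits
    return a, b

def decrypt(ct):
    a, b = ct
    noisy = (b - polymul(a, s)) % q            # equals e + (q // 2) * m  (mod q)
    centered = np.where(noisy > q // 2, noisy - q, noisy)
    return (np.abs(centered) > q // 4).astype(int)    # round back to a bit

msg = rng.integers(0, 2, N)                    # N-bit message
assert np.array_equal(decrypt(encrypt(msg)), msg)
```

Even in this toy, the appeal of the ring setting is visible: a single polynomial multiplication replaces the large matrix products of standard LWE-based constructions, which is where the savings over matrix-based lattice schemes come from.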
International Conference Publications:
C.1 Li, J., & Yu, S. (2024). Efficient multi-authority ABE from learning with errors over rings. In Proceedings of the IEEE Military Communications Conference (MILCOM 2024) (pp. 963–968). IEEE. https://doi.org/10.1109/MILCOM61039.2024.10773690
C.2 Li, J., Chen, N., Yu, S., & Srivatanakul, T. (2024). Efficient and privacy-preserving integrity verification for federated learning with TEEs. In Proceedings of the IEEE Military Communications Conference (MILCOM 2024) (pp. 999–1004). IEEE. https://doi.org/10.1109/MILCOM61039.2024.10773815
C.3 Li, J., & Yu, S. (2024). Integrity verifiable privacy-preserving federated learning for Healthcare-IoT. In Proceedings of the IEEE International Conference on E-health Networking, Application & Services (HealthCom 2024) (pp. 1–6). IEEE.
C.4 Guo, R., Li, J., & Yu, S. (2024). GridSE: Towards practical secure geographic search via prefix symmetric searchable encryption. In Proceedings of the 33rd USENIX Security Symposium (USENIX Security 2024).
Refereed Journal Publications:
J.1 Li, J., Zhang, Z., Yu, S., & Yuan, J. (2022). Improved secure deep neural network inference offloading with privacy-preserving scalar product evaluation for edge computing. Applied Sciences, 12(18), 9010. https://doi.org/10.3390/app12189010
J.2 Zhang, Z., Li, J., Yu, S., & Makaya, C. (2023). SAFELearning: Enable backdoor detectability in federated learning with secure aggregation. IEEE Transactions on Information Forensics and Security, 18, 3289–3304. https://doi.org/10.1109/TIFS.2023.3280032