(V1)
Ensuring provable privacy can only be done by cryptographic protocols, but it is hindered by the required computational overhead. While recent years have witnessed drastic improvements in runtime for provable (i.e., cryptographically secure) privacy-preserving computing (P^3C), maintaining acceptable runtime and low overhead in practical applications remains a challenge. To ensure practicability and wide-scale usage, there is a dire need to bridge the gap between P^3C and plaintext computation in terms of runtime, while defining the applicability of P^3C protocols. Although statistical methods can be utilized to anonymize data with more manageable overhead, they provide only limited privacy guarantees and have been shown to be vulnerable to attack. ACES Lab is focused on building state-of-the-art privacy-preserving systems at the intersection of multiple cutting-edge domains, approaching the overhead achieved by statistical methods while still ensuring provable privacy. The most prominent works in this research thrust have utilized privacy-preserving techniques such as multi-party computation and fully homomorphic encryption to ensure practical, provable privacy in end-to-end systems.
Our recent research has primarily focused on Zero-Knowledge Proofs (ZKPs). ZKPs are cryptographic primitives that allow a prover P to convince a verifier that an evaluation of a computation f on P’s private input w, also called the witness, is correct without revealing anything about w. ZKPs have limitless potential; however, their presence in mainstream applications has been largely limited to the blockchain. While recent works have applied ZKPs to other domains, the general perception remains that their home is on the blockchain. Our work aims to change this perception by building novel end-to-end systems that secure learning paradigms and other real-world applications with ZKPs. To ensure practicality in these end-to-end systems, we also conduct extensive research on hardware/software co-design of ZKP operations on reconfigurable hardware.
(V2):
Provable privacy can be guaranteed only by cryptographic protocols, but these are often hindered by significant computational overhead. While recent years have witnessed drastic improvements in the runtime of provable, cryptographically secure privacy-preserving computing (P^3C), maintaining acceptable performance in practical applications remains a challenge. To enable real-world applications, there is a dire need to bridge the gap between P^3C and plaintext computation while defining the boundaries of protocol applicability. Although statistical methods can anonymize data with more manageable overhead, they offer limited privacy guarantees and remain vulnerable to sophisticated attacks.
ACES Lab is focused on building state-of-the-art privacy-preserving systems at the intersection of multiple cutting-edge domains, reaching the efficiency of statistical methods while maintaining the rigor of provable privacy. Our most prominent works utilize techniques such as Multi-Party Computation (MPC) and Fully Homomorphic Encryption (FHE) to ensure practical, end-to-end security.
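To give a flavor of the MPC machinery involved, the following is a minimal two-party additive secret-sharing sketch; all names and parameters here are purely illustrative, not taken from any specific system.

```python
import secrets

MOD = 2**64  # shares live in the ring Z_{2^64}, a common choice in MPC protocols

def share(x):
    """Split x into two additive shares; either share alone reveals nothing about x."""
    s0 = secrets.randbelow(MOD)
    s1 = (x - s0) % MOD
    return s0, s1

def add_shares(a, b):
    """Each party adds its own shares locally -- addition needs no communication."""
    return tuple((ai + bi) % MOD for ai, bi in zip(a, b))

def reconstruct(shares):
    """Both parties combine their shares to reveal the result."""
    return sum(shares) % MOD

a, b = share(20), share(22)
c = add_shares(a, b)
assert reconstruct(c) == 42  # the sum was computed without either party seeing both inputs
```

Even in this toy form, the overhead source is visible: every private value doubles into shares, and (unlike addition) multiplications would require interaction between the parties.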
Our recent research has pivoted toward Zero-Knowledge Proofs (ZKPs). ZKPs allow a prover (P) to convince a verifier (V) that a computation f on a private witness w is correct without revealing any information about w. While the potential of ZKPs is limitless, their adoption has been largely confined to the blockchain. Our work aims to move ZKPs beyond the ledger by securing complex learning paradigms and real-world AI applications. To achieve this, we focus on three research pillars:
HW/SW co-design: We conduct research on custom hardware acceleration, including specialized accelerators for ZK-friendly hash functions (e.g., Reinforced Concrete, Griffin, Rescue-Prime) that significantly reduce the proof-generation bottleneck.
Verifiable Learning: We design cryptographically secure and verifiably robust training algorithms for several emerging learning paradigms, such as federated learning and split learning.
Model integrity & ownership: We utilize ZKPs to establish ownership of neural networks and the integrity of their outputs, providing cryptographically secure and robust watermarks that allow creators to protect their IP and prove model provenance in the age of open-source AI.
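To make the prover/verifier roles above concrete, here is a toy sketch of a classic interactive ZKP, the Schnorr proof of knowledge of a discrete logarithm. The parameters are deliberately tiny and illustrative, not production-grade.

```python
import secrets

# Toy parameters: p = 2q + 1 is a safe prime; g = 4 generates the order-q subgroup of Z_p*.
p, q, g = 2039, 1019, 4

def keygen():
    """The prover's secret witness w and the public statement y = g^w mod p."""
    w = secrets.randbelow(q)
    return w, pow(g, w, p)

def prove(w, y):
    """One honest run of the interactive protocol (prover and verifier interleaved)."""
    r = secrets.randbelow(q)      # prover: random nonce
    t = pow(g, r, p)              # prover -> verifier: commitment
    c = secrets.randbelow(q)      # verifier -> prover: random challenge
    s = (r + c * w) % q           # prover -> verifier: response (r masks w)
    # verifier: accept iff g^s == t * y^c (mod p)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

w, y = keygen()
assert prove(w, y)  # an honest prover always convinces the verifier
```

The verifier's check passes because g^s = g^(r + c*w) = g^r * (g^w)^c = t * y^c mod p, yet the transcript (t, c, s) reveals nothing about w since r is uniformly random. Real deployments replace the interactive challenge with a hash (Fiat-Shamir) and use cryptographically sized groups.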
Our current thrust lies in pushing the boundaries of scalable ZKPs for machine learning. More specifically, we are exploring model-partitioning techniques that shard massive LLMs across distributed environments, such as compute clusters. By pursuing parallelized sharding strategies, we aim to drastically decrease proof-generation times for the next generation of massive-scale verifiable AI.