CMU CyLab Crypto Seminar

Sponsored by SCRF

The CMU CyLab Crypto Seminar, sponsored by the Smart Contract Research Forum, is an informal seminar series at CMU. The seminar meets every Thursday at 4:30 pm ET. Currently, the seminar runs in hybrid mode: we meet in person, and the talks are streamed live via Zoom. Recordings are posted on the CMU Crypto YouTube channel.

If you are interested in giving a talk or would like more information, please contact Ke, Lisa, or João.

If you would like to receive notifications about upcoming talks, please subscribe to the crypto-announce mailing list. You can also follow @cmucrypto on Twitter for updates!

We thank SCRF and CyLab for generously sponsoring this seminar series.

Upcoming Talks

  • Spiral: Fast, High-Rate Single-Server PIR via FHE Composition

Samir Menon

Jul 7, 2022

In private information retrieval (PIR), a client wants to retrieve a record from a database server without revealing to the server which record they retrieved. In the single-server setting we examine, the server should not learn this information even if it acts maliciously. In this work, we show how to compose lattice-based homomorphic encryption schemes to achieve significant improvements in the communication and computational costs of PIR. We introduce new ciphertext translation techniques to convert between Regev and Gentry-Sahai-Waters encodings, an improved modulus switching routine, and an automatic parameter selection system. A Wikipedia demo of our work is available at https://spiralwiki.com.


Across a broad range of database configurations, the basic version of Spiral simultaneously achieves at least a 4.5x reduction in query size, 1.5x reduction in response size, and 2x increase in server throughput compared to previous systems. A variant of our scheme, SpiralStreamPack, is optimized for the streaming setting and achieves a server throughput of 1.9 GB/s for databases with over a million records (compared to 200 MB/s for previous systems) and a rate of 0.81 (compared to 0.24 for previous systems). For streaming large records (e.g., a private video stream), we estimate the monetary cost of SpiralStreamPack to be only 1.9x greater than that of the no-privacy baseline where the client directly downloads the desired record.


This is joint work with David Wu.
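To make the single-server PIR blueprint concrete, here is a minimal sketch of the classic "encrypted selection vector" approach via additively homomorphic encryption. This is not Spiral's lattice-based construction: the exponential-ElGamal scheme, tiny group parameters, and small-message bound below are illustrative toys with no security.

```python
import random

# Toy single-server PIR from additively homomorphic encryption
# (exponential ElGamal over a small prime-order subgroup).  Parameters
# are far too small to be secure; decryption brute-forces a small dlog.

P = 1019          # safe prime: P = 2*Q + 1
Q = 509           # order of the subgroup of squares
G = 4             # generator of the order-Q subgroup

def keygen():
    x = random.randrange(1, Q)          # secret key
    return x, pow(G, x, P)              # (sk, pk = G^x)

def enc(pk, m):
    r = random.randrange(1, Q)
    return pow(G, r, P), (pow(G, m, P) * pow(pk, r, P)) % P

def add(c1, c2):                         # Enc(m1)*Enc(m2) = Enc(m1+m2)
    return (c1[0] * c2[0]) % P, (c1[1] * c2[1]) % P

def scale(c, k):                         # Enc(m)^k = Enc(k*m)
    return pow(c[0], k, P), pow(c[1], k, P)

def dec(sk, c, max_m=100):
    gm = (c[1] * pow(c[0], P - 1 - sk, P)) % P   # G^m
    for m in range(max_m + 1):                   # brute-force small dlog
        if pow(G, m, P) == gm:
            return m
    raise ValueError("message out of range")

def pir_query(pk, n, i):
    # Encrypted selection vector: Enc(1) at position i, Enc(0) elsewhere.
    return [enc(pk, 1 if j == i else 0) for j in range(n)]

def pir_answer(query, db):
    # Homomorphic inner product <query, db> = db[i].
    acc = (1, 1)                                 # Enc(0) with r = 0
    for c, d in zip(query, db):
        acc = add(acc, scale(c, d))
    return acc

db = [7, 13, 42, 5]
sk, pk = keygen()
ans = pir_answer(pir_query(pk, len(db), 2), db)
print(dec(sk, ans))   # recovers db[2] = 42
```

The server never sees which ciphertext encrypts 1, so the retrieved index stays hidden; Spiral's contribution is making this blueprint fast by composing lattice-based schemes.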

  • Unclonable Polymers and Their Cryptographic Applications

Ghada Almashaqbeh

Jul 14, 2022

We propose a mechanism for generating and manipulating protein polymers to obtain a new type of consumable storage that exhibits intriguing cryptographic “self-destruct” properties, assuming the hardness of certain polymer-sequencing problems.

To demonstrate the cryptographic potential of this technology, we first develop a formalism that captures (in a minimalistic way) the functionality and security properties provided by the technology. Next, using this technology, we construct and prove security of two cryptographic applications that are currently obtainable only via trusted hardware that implements logical circuitry (either classical or quantum). The first application is a password-controlled secure vault where the stored data is irrecoverably erased once a threshold of unsuccessful access attempts is reached. The second is (a somewhat relaxed version of) one time programs, namely a device that allows evaluating a secret function only a limited number of times before self-destructing, where each evaluation is made on a fresh user-chosen input.

Finally, while our constructions, modeling, and analysis are designed to capture the proposed polymer-based technology, they are sufficiently general to be of potential independent interest.

This is a joint work with Ran Canetti, Yaniv Erlich, Jonathan Gershoni, Tal Malkin, Itsik Pe’er, Anna Roitburd-Berman & Eran Tromer

  • TBA

Vitaly Feldman

Jul 28, 2022

TBA

  • TBA

Lalita Devadas

Aug 4, 2022

TBA

  • TBA

Mark Simkin

Aug 11, 2022

TBA

  • TBA

Jonathan Bootle

Aug 25, 2022

TBA

  • Single-Server Private Information Retrieval with Sublinear Amortized Time

Alexandra M. Henzinger

Sep 15, 2022

This talk will present new private-information-retrieval protocols in the single-server setting. These schemes allow a client to privately fetch a sequence of database records from a server, while the server answers each query in average time sublinear in the database size.


Specifically, we introduce the first single-server private-information-retrieval schemes that have sublinear amortized server time, require sublinear additional storage, and allow the client to make her queries adaptively. Our protocols rely only on standard cryptographic assumptions (decision Diffie-Hellman, quadratic residuosity, learning with errors, etc.). They work by having the client first fetch a small “hint” about the database contents from the server. Generating this hint requires server time linear in the database size. Thereafter, the client can use the hint to make a bounded number of adaptive queries to the server, which the server answers in sublinear time — yielding sublinear amortized cost.


Finally, we give lower bounds proving that our most efficient scheme is optimal with respect to the trade-off it achieves between server online time and client storage.


This talk is based on joint work with Henry Corrigan-Gibbs and Dmitry Kogan.
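The offline/online "hint" structure can be sketched with a toy in the spirit of the paradigm described above: offline, the client streams the database once and stores parities of random sets of size about sqrt(n); online, a query touches only about sqrt(n) records. The set sizes and counts below are illustrative, and the sketch deliberately ignores the cryptography (and the set re-randomization) that the actual single-server schemes use to keep queries private.

```python
import math, random

# Toy sketch of the offline/online cost structure behind amortized PIR.
# Offline: one linear scan to build the hint.  Online: ~sqrt(n) work
# per query.  Privacy of the punctured set is NOT handled here.

def offline_hint(db, num_sets):
    n = len(db)
    size = max(1, math.isqrt(n))
    hint = []
    for _ in range(num_sets):
        s = set(random.sample(range(n), size))
        parity = 0
        for j in s:
            parity ^= db[j]
        hint.append((s, parity))
    return hint

def online_query(hint, i):
    # Find a stored set containing i and puncture it.
    for s, parity in hint:
        if i in s:
            return sorted(s - {i}), parity
    return None                              # miss: refresh the hint

def server_answer(db, punctured_set):
    parity = 0
    for j in punctured_set:                  # sublinear work: ~sqrt(n)
        parity ^= db[j]
    return parity

random.seed(1)
db = [random.randrange(2) for _ in range(256)]
hint = offline_hint(db, num_sets=200)
q = online_query(hint, 17)
if q is not None:
    punctured, stored = q
    print(server_answer(db, punctured) ^ stored == db[17])  # True
```

XORing the server's parity over the punctured set with the stored parity cancels every record except the queried one, which is the amortization idea the linear-time offline phase pays for.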

  • Zero-Knowledge IOPs with Linear-Time Prover and Polylogarithmic-Time Verifier

Siqi Liu

TBA

Improving the running time of the prover is a central goal in the area of succinct arguments. In this talk we will trace through a line of works [BCGGHJ17, BCG20, BCL20] that successfully construct succinct arguments that have linear-time provers and are also zero-knowledge. The result is a direct consequence of a new interactive oracle proof (IOP) that achieves linear-time proving, polylogarithmic verification, and zero knowledge. We will focus on the construction of this IOP in this talk. This is based on joint work with Alessandro Chiesa and Jonathan Bootle.

Previous Talks

  • Gemini: elastic SNARKs for diverse environments

Yuncong Hu

Recording

We introduce a new class of succinct arguments that we call elastic. Elastic SNARKs allow the prover to allocate different resources (such as memory and time) depending on the execution environment and the statement to prove. The resulting output is independent of the prover’s configuration. To study elastic SNARKs, we extend the streaming paradigm of [Block et al., TCC’20]. We provide a definitional framework for elastic polynomial interactive oracle proofs for R1CS instances and design a compiler which transforms an elastic PIOP into a preprocessing argument system that supports streaming or random access to its inputs. Depending on the configuration, the prover will choose different trade-offs for time (either linear, or quasilinear) and memory (either linear, or logarithmic).

We prove the existence of elastic SNARKs by presenting Gemini, a novel FFT-free preprocessing argument. We prove its security and develop a proof-of-concept implementation in Rust based on the arkworks framework. We provide benchmarks for large R1CS instances of tens of billions of gates on a single machine.

  • Aggregation with Shuffle Differential Privacy

Pasin Manurangsi

Recording

Differential privacy (DP) is a formal notion of privacy for algorithms. Traditionally, the study of DP focused on two main models: the central model and the local model. The former requires a trusted curator but provides good utility guarantees, whereas the latter does not require any trusted curator but incurs larger errors. In recent years, the shuffle model of DP has emerged as an intermediate option--requiring less trust than the central model but yielding better accuracy than the local model. In this talk, I will describe a few protocols for aggregation in the shuffle model which achieve accuracy guarantees similar to those of the central model while also having small communication overhead.


Based on joint work with Badih Ghazi, Noah Golowich, Ravi Kumar, Rasmus Pagh, Amer Sinha and Ameya Velingker
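A minimal sketch of one classic shuffle-model aggregation idea, "split and mix": each user submits additive shares of its value, and shuffling destroys the link between shares and users, so the analyzer learns only the sum. The modulus and share count are hypothetical parameters; real protocols also add noise and analyze how many shares are needed for privacy. This sketch shows only the correctness of the aggregate.

```python
import random

# Split-and-mix aggregation in the shuffle model (toy).  Each user
# splits its value into additive shares mod P; the trusted shuffler
# mixes all shares; the analyzer sums what it receives.

P = 2**31 - 1   # modulus (hypothetical parameter)

def share(x, num_shares):
    parts = [random.randrange(P) for _ in range(num_shares - 1)]
    parts.append((x - sum(parts)) % P)       # shares sum to x mod P
    return parts

def shuffle_model_sum(values, num_shares=3):
    pool = []
    for v in values:                          # each user submits shares
        pool.extend(share(v, num_shares))
    random.shuffle(pool)                      # shuffler mixes everything
    return sum(pool) % P                      # analyzer sees only shares

values = [5, 11, 2, 40]
print(shuffle_model_sum(values))   # 58
```

Because each user's shares are individually uniform, only their mixed-together sum carries information, which is why the shuffle model can approach central-model accuracy without a trusted curator seeing raw values.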

  • Quartz: Superoptimization of Quantum Circuits

Mingkuan Xu

Recording

Existing quantum compilers optimize quantum circuits by applying circuit transformations designed by experts. This approach requires significant manual effort to design and implement circuit transformations for different quantum devices, which use different gate sets, and can miss optimizations that are hard to find manually. We propose Quartz, a quantum circuit superoptimizer that automatically generates and verifies circuit transformations for arbitrary quantum gate sets. For a given gate set, Quartz generates candidate circuit transformations by systematically exploring small circuits and verifies the discovered transformations using an automated theorem prover. To optimize a quantum circuit, Quartz uses a cost-based backtracking search that applies the verified transformations to the circuit. Our evaluation on three popular gate sets shows that Quartz can effectively generate and verify transformations for different gate sets. The generated transformations cover manually designed transformations used by existing optimizers and also include new transformations. Quartz is therefore able to optimize a broad range of circuits for diverse gate sets, outperforming or matching the performance of hand-tuned circuit optimizers.
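The generate-and-verify loop can be illustrated with a toy superoptimizer over a hypothetical single-qubit gate set {H, S}: enumerate small circuits, compute their unitaries, and group circuits whose matrices agree up to global phase into candidate rewrite rules. Quartz discharges verification with an automated theorem prover and handles multi-qubit gate sets; here equality is checked numerically instead.

```python
import itertools

# Toy Quartz-style transformation discovery for 2x2 unitaries.

H = [[2**-0.5, 2**-0.5], [2**-0.5, -(2**-0.5)]]   # Hadamard
S = [[1, 0], [0, 1j]]                             # phase gate
GATES = {"H": H, "S": S}

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def unitary(circuit):
    m = [[1, 0], [0, 1]]
    for g in circuit:
        m = matmul(GATES[g], m)
    return m

def canonical(m):
    # Normalize global phase by the first entry with non-negligible
    # magnitude, then round so equal matrices hash identically.
    flat = [m[i][j] for i in range(2) for j in range(2)]
    pivot = next(z for z in flat if abs(z) > 1e-9)
    phase = pivot / abs(pivot)
    return tuple(complex(round((z / phase).real, 6),
                         round((z / phase).imag, 6)) for z in flat)

def discover(max_len=4):
    # Group all circuits up to max_len by their (phase-normalized)
    # unitary; any group with 2+ members yields rewrite rules.
    groups = {}
    for n in range(max_len + 1):
        for circ in itertools.product("HS", repeat=n):
            groups.setdefault(canonical(unitary(circ)), []).append(circ)
    return {k: v for k, v in groups.items() if len(v) > 1}

rules = discover()
ident = canonical(unitary(()))
print(("H", "H") in rules[ident])   # True: H followed by H cancels
```

An optimizer would then rewrite circuits toward the shortest member of each group, which is the cost-based search Quartz performs at scale.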

  • Spreading the Privacy Blanket: Differentially Oblivious Shuffling for Differential Privacy

Mingyu Liang

Recording

In the shuffle model for differential privacy, n users locally randomize their data and submit the results to a trusted ``shuffler'' who mixes the results before sending them to a server for analysis. This is a promising model for real-world applications of differential privacy, as several recent results have shown that, in some cases, the shuffle model offers a strictly better privacy/utility tradeoff than what is possible in a purely local model.


A downside of the shuffle model is its reliance on a trusted shuffler, and it is natural to try to replace this with a distributed shuffling protocol run by the users themselves. While it would of course be possible to use a fully secure shuffling protocol, one might hope to instead use a more-efficient protocol having weaker security guarantees.


In our work, we consider a relaxation of secure shuffling called differential obliviousness that we prove suffices for differential privacy in the shuffle model. We also propose a differentially oblivious shuffling protocol based on onion routing that requires only O(n \log n) communication while tolerating any constant fraction of corrupted users. We show that for practical settings of the parameters, our protocol outperforms existing solutions to the problem.


Link:

https://eprint.iacr.org/2021/1257

  • Limits of Preprocessing for Single-Server PIR

Kevin Yeo

Recording

We present a lower bound for the static cryptographic data structure problem of single-server private information retrieval (PIR). PIR considers the setting where a server holds a database of n entries and a client wishes to privately retrieve the i-th entry without revealing the index i to the server. In our work, we focus on PIR with preprocessing where an r-bit hint may be computed in a preprocessing stage and stored by the server to be used to perform private queries in expected time t. We consider the public preprocessing setting of Beimel et al. [JoC, 2004] where the hint is publicly available to everyone including the adversary.


We prove that for any single-server computationally secure PIR with preprocessing, it must be that tr = Ω(n log n) when r = Ω(log n). If r = O(log n), we show that t = Ω(n). Our lower bound holds even when the scheme errs with probability 1/n^2 and the adversary’s distinguishing advantage is 1/n. Our work improves upon the tr = Ω(n) lower bound of Beimel et al. [JoC, 2004]. We prove our lower bound in a variant of the cell probe model where only accesses to the memory are charged cost, while computation and accesses to the hint are free. Our main technical contribution is a novel use of the cell sampling technique (also known as the incompressibility technique) used to obtain lower bounds on data structures. Previous works leveraged only the correctness guarantees to prove lower bounds, even when applied to cryptographic primitives. Our work combines the cell sampling technique with the privacy guarantees of PIR to construct a powerful, polynomial-time adversary that is critical to proving our stronger lower bounds.

  • Modular Design of Secure Group Messaging Protocols and the Security of MLS

Yiannis Tselekounis

Recording

Secure messaging (SM) protocols allow users to communicate securely over untrusted infrastructure. In contrast to most other secure communication protocols (such as TLS, SSH, or Wireguard), SM sessions may be long-lived (e.g., years) and highly asynchronous. In order to deal with likely state compromises of users during the lifetime of a session, SM protocols do not only protect authenticity and privacy, but they also guarantee forward secrecy (FS) and post-compromise security (PCS). The former ensures that messages sent and received before a state compromise remain secure, while the latter ensures that users can recover from state compromise as a consequence of normal protocol usage.


SM has received considerable attention in the two-party case, where prior work has studied the well-known double-ratchet paradigm in particular and SM as a cryptographic primitive in general. Unfortunately, this paradigm does not scale well to the problem of secure group messaging (SGM). In order to address the lack of satisfactory SGM protocols, the IETF has launched the message-layer security (MLS) working group, which aims to standardize an eponymous SGM protocol. In this work we analyze the TreeKEM protocol, which is at the core of the SGM protocol proposed by the MLS working group, and we formally capture its exact security as a so-called continuous group key agreement (CGKA) protocol. Furthermore, we formally capture the security of full SGM protocols by defining a corresponding security game, which is parametrized by a safety predicate that characterizes the exact level of security achieved by a construction. Then, we cast MLS as an SGM protocol, showing how to modularly build it from the following three main components (and some additional standard cryptographic primitives) in a black-box fashion: (a) CGKA, (b) forward-secure group AEAD (FS-GAEAD), which is a new primitive and roughly corresponds to an "epoch" of group messaging, and (c) a so-called PRF-PRNG, which is a two-input hash function that is a pseudorandom function (resp. generator with input) in its first (resp. second) input.

  • No Time to Hash: On Super-Efficient Entropy Accumulation

Zhiye Xie

Recording

Real-world random number generators (RNGs) cannot afford to use (slow) cryptographic hashing every time they refresh their state $R$ with a new entropic input $X$. Instead, they use ``superefficient'' simple entropy-accumulation procedures, such as

\[ R \gets \rot_{\alpha, n}(R) \oplus X, \]

where $\rot_{\alpha,n}$ rotates an $n$-bit state $R$ by some fixed number $\alpha$. For example, Microsoft's RNG uses $\alpha=5$ for $n=32$ and $\alpha=19$ for $n=64$. Where do these numbers come from? Are they good choices? Should rotation be replaced by a better permutation $\pi$ of the input bits?


In this work we initiate a rigorous study of these pragmatic questions, by modeling the sequence of successive entropic inputs $X_1,X_2,\ldots$ as \emph{independent} (but otherwise adversarial) samples from some natural distribution family ${\cal D}$. Our contribution is as follows.

  • We define 2-monotone distributions as a rich family ${\cal D}$ that includes relevant real-world distributions (Gaussian, exponential, etc.), but avoids trivial impossibility results.

  • For any $\alpha$ with $\gcd(\alpha,n)=1$, we show that rotation accumulates $\Omega(n)$ bits of entropy from $n$ independent samples $X_1,\ldots,X_n$ from any (unknown) $2$-monotone distribution with entropy $k > 1$.

  • However, we also show that some choices of $\alpha$ perform much better than others for a given $n$. E.g., we show that $\alpha=19$ is one of the best choices for $n=64$; in contrast, $\alpha=5$ is good, but generally worse than $\alpha=7$, for $n=32$.

  • More generally, given a permutation $\pi$ and $k\ge 1$, we define a simple parameter, the covering number $C_{\pi,k}$, and show that it characterizes the number of steps before the rule $$(R_1,\ldots,R_n)\gets (R_{\pi(1)},\ldots, R_{\pi(n)})\oplus X$$ accumulates nearly $n$ bits of entropy from independent, $2$-monotone samples of min-entropy $k$ each.

  • We build a simple permutation $\pi^*$, which achieves nearly optimal $C_{\pi^*,k}\approx n/k$ for all values of $k$ simultaneously, and experimentally validate that it compares favorably with all rotations $\rot_{\alpha,n}$.
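The update rule and the covering intuition above can be sketched directly: if each sample's entropy sits only in its low k bits, the rotation must carry those positions around the state, and gcd(α, n) = 1 makes the swept positions cover all n bits. A minimal sketch (the entropy accounting in the paper is far more delicate than this position-tracking):

```python
from math import gcd

# The rotate-then-XOR entropy accumulation rule and the positions that
# entropy injected in the low k bits can reach after t more steps.

def rot(r, alpha, n):
    mask = (1 << n) - 1
    return ((r << alpha) | (r >> (n - alpha))) & mask

def accumulate(state, samples, alpha, n):
    for x in samples:
        state = rot(state, alpha, n) ^ x     # R <- rot(R) xor X
    return state

def positions_touched(alpha, n, k, steps):
    # A bit injected at position j ends up at (j + t*alpha) % n after
    # t further rotations; collect everything reachable.
    touched = set()
    for t in range(1, steps + 1):
        touched.update((j + t * alpha) % n for j in range(k))
    return touched

n, alpha = 64, 19          # Microsoft's RNG parameters from the abstract
assert gcd(alpha, n) == 1
print(len(positions_touched(alpha, n, k=1, steps=n)))   # 64: full coverage
```

With gcd(α, n) > 1 the same experiment covers only a strict subset of positions, which is the qualitative content of the gcd condition; the covering number C_{π,k} refines this to count how many steps full coverage takes.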

  • Incompressible Cryptography

Jiaxin Guan

Recording

Incompressible encryption allows us to make the ciphertext size flexibly large and ensures that an adversary learns nothing about the encrypted data, even if the decryption key later leaks, unless she stores essentially the entire ciphertext. Incompressible signatures can be made arbitrarily large and ensure that an adversary cannot produce a signature on any message, even one she has seen signed before, unless she stores one of the signatures essentially in its entirety.


We give simple constructions of both incompressible public-key encryption and signatures under minimal assumptions. Furthermore, large incompressible ciphertexts (resp. signatures) can be decrypted (resp. verified) in a streaming manner with low storage. In particular, these notions strengthen the related concepts of disappearing encryption and signatures. We extend our constructions to achieve an optimal "rate", meaning the large ciphertexts (resp. signatures) can contain almost equally large messages, at the cost of stronger assumptions.


https://eprint.iacr.org/2021/1679

  • Batch OT with Optimal Rate

Pedro Branco

Recording

In this talk, we show that it is possible to perform $n$ independent copies of $1$-out-of-$2$ oblivious transfer in two messages, where the communication complexity of the receiver and sender (each) is $n(1+o(1))$ for sufficiently large $n$. Note that this matches the information-theoretic lower bound. Prior to this work, this was only achievable by using the heavy machinery of rate-$1$ fully homomorphic encryption (Rate-$1$ FHE, Brakerski et al., TCC 2019).


To achieve rate-$1$ both on the receiver's and sender's end, we use the LPN assumption, with slightly sub-constant noise rate $1/m^{\epsilon}$ for any $\epsilon>0$ together with either the DDH, QR or LWE assumptions. In terms of efficiency, our protocols only rely on linear homomorphism, as opposed to the FHE-based solution which inherently requires an expensive ``bootstrapping'' operation. We believe that in terms of efficiency we compare favorably to existing batch-OT protocols, while achieving superior communication complexity. We show similar results for Oblivious Linear Evaluation (OLE).


Joint work with Zvika Brakerski, Nico Döttling and Sihang Pu.
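For concreteness, the 1-out-of-2 OT functionality itself can be sketched as a minimal semi-honest protocol in the Bellare-Micali style (a DDH-type assumption, with a hash modeled as a random oracle). This is not the talk's LPN-based batch construction, and the tiny group parameters below are toys with no security.

```python
import hashlib, random

# Toy semi-honest 1-out-of-2 oblivious transfer (Bellare-Micali style).
# The receiver learns exactly one of (m0, m1); the sender learns
# nothing about which one.  Group parameters are insecure toys.

P, Q, G = 1019, 509, 4          # subgroup of order Q in Z_P^*

def hkey(elem):
    return hashlib.sha256(str(elem).encode()).digest()

def xor(key, msg):
    return bytes(k ^ m for k, m in zip(key, msg))

def ot(m0, m1, choice):
    # Sender: random group element c whose dlog the receiver never learns.
    c = pow(G, random.randrange(1, Q), P)
    # Receiver: knows the secret key for exactly one of pk0, pk1,
    # since pk0 * pk1 = c is forced.
    k = random.randrange(1, Q)
    pk_choice = pow(G, k, P)
    pk0 = pk_choice if choice == 0 else (c * pow(pk_choice, P - 2, P)) % P
    # Sender: derives pk1 = c / pk0 and hash-ElGamal-encrypts each message.
    pk1 = (c * pow(pk0, P - 2, P)) % P
    cts = []
    for pk, m in ((pk0, m0), (pk1, m1)):
        r = random.randrange(1, Q)
        cts.append((pow(G, r, P), xor(hkey(pow(pk, r, P)), m)))
    # Receiver: decrypts only the chosen ciphertext with k.
    gr, body = cts[choice]
    return xor(hkey(pow(gr, k, P)), body)

print(ot(b"left", b"rite", 1))   # b'rite'
```

Batch OT asks for n such transfers at total communication n(1+o(1)), which is what the talk achieves from LPN plus DDH/QR/LWE.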

  • Design and Formally Verify Post-Quantum Cryptography

Leo Fan

Recording

The use of cryptography is ubiquitous in our daily life. However, the development of faster quantum computers can break today’s cryptosystems. Therefore, it is imperative that we develop and deploy post-quantum cryptography before scalable quantum computers become a reality. My research focuses on designing and formally verifying cryptographic primitives based on post-quantum assumptions. My approach combines cryptography and formal methods, aiming to bring provable security to real-world applications. In this talk, I will discuss how to design a post-quantum secure encryption scheme that provides fine-grained access control over encrypted data. Further, I will describe a system called AutoLWE, capable of mechanizing security proofs of lattice-based cryptosystems.

  • Prime Match: Privacy-Preserving Stock Inventory Matching

Antigoni Polychroniadou

Recording

In this talk, we present our multi-party Prime Match solution for matching orders in a stock exchange while maintaining the privacy of the orders. Information is revealed only if there is a match. To achieve this, we present a new protocol for secure comparison with malicious security. Prime Match has been running in production since September 2022.

  • Algebraic Reductions of Knowledge

Abhiram Kothapalli

December 16, 4:30 pm

Arguments of knowledge are powerful cryptographic primitives that allow a prover to demonstrate that it knows a satisfying witness to a prescribed statement. Tremendous progress has been made in developing efficient argument systems by leveraging homomorphic structure in an increasingly composable and recursive manner. However, the extent to which homomorphisms can be composed and manipulated in the service of efficient argument systems is still not well understood. To this end, we introduce reductions of knowledge, a generalization of arguments of knowledge, which reduce checking a statement in one relation to checking a derived statement in another, and better capture the composable and recursive nature of arguments over homomorphisms. We construct and study the tensor reduction, which is capable of reducing any homomorphic statement composed via the tensor product, and provides knowledge soundness unconditionally when working over vector spaces. We show that tensor reductions generalize a large class of prior recursive techniques including the ubiquitous sumcheck protocol. We additionally show that tensor reductions can be employed to construct reductions of knowledge with logarithmic communication for familiar linear algebraic statements, and in turn, these can be composed to recover a reduction of knowledge for NP with logarithmic communication.
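The sumcheck protocol mentioned above is itself a reduction of knowledge: it reduces checking a claimed sum of a multilinear polynomial over the Boolean hypercube to checking one evaluation of the polynomial at a random point. Below is a compact sketch with Fiat-Shamir challenges standing in for verifier randomness; the final oracle check is simulated by having the prover supply the evaluation.

```python
import hashlib

# Sumcheck for the multilinear extension of a table of 2^n values:
# round i sends g_i(0), g_i(1); the claim folds to g_i(r_i).

P = 2**61 - 1   # field modulus

def challenge(transcript):
    return int.from_bytes(hashlib.sha256(transcript).digest(), "big") % P

def fold(table, r):
    # Fix the first variable of the multilinear extension to r.
    half = len(table) // 2
    return [(table[j] + r * (table[half + j] - table[j])) % P
            for j in range(half)]

def sumcheck_prove(table):
    claim = sum(table) % P
    transcript, rounds, rs = b"", [], []
    while len(table) > 1:
        half = len(table) // 2
        g0, g1 = sum(table[:half]) % P, sum(table[half:]) % P
        transcript += str((g0, g1)).encode()
        r = challenge(transcript)
        rounds.append((g0, g1))
        rs.append(r)
        table = fold(table, r)
    return claim, rounds, rs, table[0]      # final evaluation

def sumcheck_verify(claim, rounds, rs, final_eval):
    transcript = b""
    for (g0, g1), r in zip(rounds, rs):
        if (g0 + g1) % P != claim:
            return False
        transcript += str((g0, g1)).encode()
        if challenge(transcript) != r:      # recompute Fiat-Shamir
            return False
        claim = (g0 + r * (g1 - g0)) % P    # g(r) by degree-1 interpolation
    return claim == final_eval              # oracle check at random point

table = [3, 1, 4, 1, 5, 9, 2, 6]            # f on {0,1}^3
claim, rounds, rs, final = sumcheck_prove(table)
print(claim, sumcheck_verify(claim, rounds, rs, final))   # 31 True
```

Each round is exactly a reduction from one claim to a smaller derived claim, which is the compositional pattern the tensor reduction generalizes.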

  • Practical Garbled RAM: GRAM with $O(\log^2 n)$ Overhead

David Heath

Recording

Garbled RAM (GRAM) is a powerful technique that equips Garbled Circuit (GC) with a sublinear cost RAM without adding rounds of interaction. While GRAM constructions are known, none are suitable for practice, due to costs that have high constants and poor scaling.


We present the first GRAM suitable for practice. For computational security parameter $\kappa$ and for a size-$n$ RAM that stores blocks of size $w = \Omega(\log^2 n)$ bits, our GRAM incurs only amortized $O(w \cdot \log^2 n \cdot \kappa)$ communication and computation per access. We evaluate the concrete cost of our GRAM; our approach outperforms trivial linear-scan-based RAM for as few as 512 128-bit elements.

  • Adaptive Security of Multi-Party Protocols, Revisited

Chen-Da Liu Zhang

Recording

The goal of secure multi-party computation (MPC) is to allow a set of parties to perform an arbitrary computation task, where the security guarantees depend on the set of parties that are corrupted. The more parties are corrupted, the less is guaranteed, and typically the guarantees are completely lost when the number of corrupted parties exceeds a certain corruption bound.


Early and also many recent protocols are only statically secure, in the sense that they provide no security guarantees if the adversary is allowed to choose adaptively which parties to corrupt. Security against an adversary with such a strong capability is often called ``adaptive security'', and a significant body of literature is devoted to achieving it, which is known to be a difficult problem. In particular, a main technical obstacle in this context is the so-called ``commitment problem'', where the simulator is unable to consistently explain the internal state of a party with respect to its pre-corruption outputs. As a result, protocols typically resort to the use of cryptographic primitives like non-committing encryption, incurring a substantial efficiency loss.


We propose a new, natural security notion that is technically weaker than standard adaptive security but nevertheless captures security against a fully adaptive adversary. Known protocol examples that separate adaptive from static security remain insecure under our notion. Moreover, our notion avoids the commitment problem and thereby the need to use non-committing or equivocal tools.


Joint work with Martin Hirt and Ueli Maurer.


  • Sleepy Channels: Bitcoin-Compatible Bi-directional Payment Channels without Watchtowers

Sri AravindaKrishnan Thyagarajan

Recording

Payment channels (PC) are a promising solution to the scalability issue of cryptocurrencies, allowing users to perform the bulk of their transactions off-chain without needing to post everything on the blockchain. Many PC proposals, however, suffer from a severe limitation: both parties need to constantly monitor the blockchain to ensure that the other party did not post an outdated transaction. If this event happens, the honest party needs to react promptly and engage in a punishment procedure. This means that prolonged absence periods (e.g., due to a power outage) may be exploited by malicious users. As a mitigation, the community has introduced watchtowers, third parties that monitor the blockchain on behalf of off-line users. Unfortunately, watchtowers are either trusted, which is critical from a security perspective, or they have to lock a certain amount of coins, called collateral, for each monitored PC in order to be held accountable, which is financially infeasible for a large network.

We present Sleepy Channels, the first bi-directional PC protocol without watchtowers (or any other third party) that supports an unbounded number of payments and does not require parties to be persistently online. The key idea is to confine the period in which PC updates can be validated on-chain to a short, pre-determined time window, which is when the PC parties have to be online. This behavior is incentivized by letting the parties lock a collateral in the PC, which can be adjusted depending on their mutual trust and which they get back much sooner if they are online during this time window. Our protocol is compatible with any blockchain that is capable of verifying digital signatures (e.g., Bitcoin), as shown by our proof of concept. Moreover, Sleepy Channels imposes a communication and computation overhead similar to state-of-the-art PC protocols while removing the watchtower's collateral and fees for the monitoring service.

  • (Im)possibility Results for Transaction Fee Mechanism Design

Hao Chung

In this talk, I will present my recent research results with Elaine.

In short, in blockchains such as Bitcoin and Ethereum, users compete in a transaction fee auction to get their transactions confirmed in the next block. A line of recent works set forth the desiderata for a ``dream'' transaction fee mechanism (TFM) and explored whether such a mechanism exists.

In this work, we prove a new impossibility result: assuming finite block size, no single-parameter, non-trivial, possibly randomized TFM can simultaneously satisfy truthful bidding and miner-user side contract proofness. On the other hand, under a relaxed version of the users' utility, we propose a mechanism that satisfies both truthful bidding and miner-user side contract proofness.
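As a worked toy example of the truthful-bidding desideratum, a pay-your-bid (first-price) auction with a finite block is not truthful: shading the bid below the true value can strictly increase utility. The values and competing bid below are hypothetical numbers chosen only to illustrate this.

```python
# One block slot, highest bid wins and pays its own bid.
# Utility = value - payment if included, 0 otherwise.

def first_price_utility(my_bid, my_value, other_bids, block_size=1):
    included = sorted(other_bids + [my_bid], reverse=True)[:block_size]
    return my_value - my_bid if my_bid in included else 0

value = 10
others = [6]                                        # hypothetical rival bid
truthful = first_price_utility(10, value, others)   # wins, pays 10 -> 0
shaded = first_price_utility(7, value, others)      # wins, pays 7 -> 3
print(truthful, shaded)   # 0 3
```

Truthful bidding yields zero utility while shading yields positive utility, so bidding one's true value is not a dominant strategy; the impossibility result shows this kind of tension cannot be fully engineered away alongside side-contract proofness.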

  • On One-way Functions and Kolmogorov Complexity

Yanyi Liu

Recording

Abstract:

We prove the equivalence of two fundamental problems in the theory of computing. For every polynomial t(n) > 2n, the following are equivalent: (1) cryptographic one-way functions exist; (2) the t-time-bounded Kolmogorov complexity problem is mildly hard-on-average. In doing so, we present the first natural, and well-studied, computational problem characterizing the feasibility of the central private-key primitives and protocols in cryptography.

Joint work with Rafael Pass

https://eccc.weizmann.ac.il/report/2020/052/

  • Secure Massively Parallel Computation for Dishonest Majority

Rex Fernando

Recording

Abstract:

This work concerns secure protocols in the massively parallel computation (MPC) model, which is one of the most widely-accepted models for capturing the challenges of writing protocols for the types of parallel computing clusters which have become commonplace today (MapReduce, Hadoop, Spark, etc.). Recently, the work of Chan et al. (ITCS '20) initiated this study, giving a way to compile any MPC protocol into a secure one in the common random string model, achieving the standard secure multi-party computation definition of security with up to 1/3 of the parties being corrupt.

We are interested in achieving security for much more than 1/3 corruptions. To that end, we give two compilers for MPC protocols, which assume a simple public-key infrastructure, and achieve semi-honest security for all-but-one corruptions. Our first compiler assumes hardness of the learning-with-errors (LWE) problem, and works for any MPC protocol with ``short'' output---that is, where the output of the protocol can fit into the storage space of one machine, for instance protocols that output a trained machine learning model. Our second compiler works for any MPC protocol (even ones with a long output, such as sorting) but assumes, in addition to LWE, indistinguishability obfuscation and a circular secure variant of threshold FHE. Both protocols allow the attacker to choose corrupted parties based on the trusted setup, an improvement over Chan et al., whose protocol requires that the CRS is chosen independently of the attacker's choices.

  • Gossiping For Communication-Efficient Broadcast

Julian Loss

Recording

Abstract:

Broadcast (BC) is a crucial ingredient for a plethora of cryptographic protocols such as secret sharing and multiparty computation. In this paper we apply \emph{gossiping} (propagating a message by sending it to a few random parties, who in turn do the same until the message is delivered) to design new randomized BC protocols with improved communication complexity that are secure against an adversary controlling a majority of parties. We make progress on two fronts. First, we propose a protocol for single-sender BC in the static model of corruption that achieves $\tilde O(n^2 \cdot \kappa^2)$ bits of communication and requires no trusted setup---parties just need to generate their own cryptographic keys. All prior protocols in this setting exhibit $O(n^3 \cdot \kappa)$ communication. Using insights from our single-sender BC protocol, we then propose the first adaptively-secure parallel BC protocol with $\tilde O(n^2 \cdot \kappa^4)$ communication complexity, significantly improving on existing parallel BC protocols, which require $\tilde O(n^3)$ communication. To the best of our knowledge, our parallel BC protocol is the first non-trivial one, i.e., one that does not use a single-sender BC protocol $n$ times in a black-box fashion, thus leading to the improved complexity.
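The gossiping subroutine is easy to simulate: starting from a single informed sender, each informed party forwards to a few random parties per round, and the message reaches all n parties after roughly logarithmically many rounds. The fanout and seed below are arbitrary choices for the sketch, and this models only the spreading dynamics, not the Byzantine-resilience machinery of the protocols.

```python
import random

# Rounds needed for a rumor started at party 0 to reach all n parties
# when every informed party forwards to `fanout` random parties per round.

def gossip_rounds(n, fanout=3, seed=0):
    rng = random.Random(seed)
    informed = {0}
    rounds = 0
    while len(informed) < n:
        new = set()
        for _ in informed:
            new.update(rng.randrange(n) for _ in range(fanout))
        informed |= new
        rounds += 1
    return rounds

n = 1000
print("rounds to inform", n, "parties:", gossip_rounds(n))
```

Each party sends only a constant number of messages per round, so total communication stays near-quadratic in n rather than cubic, which is the efficiency lever the paper exploits.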

  • Time- and Space-Efficient Arguments from Groups of Unknown Order

Pratik Soni

Abstract:

We construct public-coin time- and space-efficient zero-knowledge arguments for NP. For every non-deterministic RAM computation running in time T and space S, the prover runs in time T⋅polylog(T) and space S⋅polylog(T), and the verifier runs in time n⋅polylog(T), where n is the input length. Our protocol relies on hidden-order groups, which can be instantiated, assuming a trusted setup, from the hardness of factoring (products of safe primes), or, without a trusted setup, using class groups. The argument system can heuristically be made non-interactive using the Fiat-Shamir transform.

Our proof builds on DARK (Bünz et al., Eurocrypt 2020), a recent succinct and efficiently verifiable polynomial commitment scheme. We show how to implement a variant of DARK in a time- and space-efficient way. Along the way we:

1. Identify a significant gap in the proof of security of DARK.

2. Give a non-trivial modification of the DARK scheme that overcomes this gap. The modified version also relies on significantly weaker cryptographic assumptions than the original DARK scheme. Our proof utilizes ideas from the theory of integer lattices in a novel way.

3. Generalize Pietrzak's (ITCS 2019) proof of exponentiation (PoE) protocol to work with general groups of unknown order (without relying on any cryptographic assumption).

In proving these results, we develop general-purpose techniques for working with (hidden order) groups, which may be of independent interest.

  • Reaching Agreement Without Saying Much: Byzantine Agreement with Polylog Bits Per Party

Ran Cohen

Recording

Abstract:

Byzantine agreement (BA), the task of $n$ parties to agree on one of their input bits in the face of malicious agents, is a powerful primitive that lies at the core of a vast range of distributed protocols. Interestingly, in protocols with the best overall communication, the demands on the parties are highly unbalanced: the amortized cost is polylog$(n)$ bits per party, but some parties must send $\Omega(n)$ bits. In the best known balanced protocols, the overall communication is sub-optimal, with each party communicating $O(\sqrt{n})$ bits.

In this talk, we explore whether asymmetry is inherent for optimizing total communication. We show that this is not the case by presenting two BA protocols where every party communicates only polylog(n) bits; the constructions rely on a new flavor of distributed signatures and offer a tradeoff between setup assumptions and cryptographic assumptions. Next, we will discuss limitations and barriers of this approach, and conclude with open questions.

This is a joint work with Elette Boyle and Aarushi Goel.

  • Proof-Carrying Data without Succinct Arguments

Benedikt Bünz

Recording

Abstract:

Proof-carrying data (PCD) is a powerful cryptographic primitive that enables mutually distrustful parties to perform distributed computations that run indefinitely. Prior approaches to constructing PCD are based on recursive applications of succinct non-interactive arguments of knowledge (SNARKs) that have a succinct verifier or a succinct accumulation scheme. In this talk I will describe how to obtain PCD without relying on SNARKs. In particular, we construct a PCD scheme given any non-interactive argument of knowledge (e.g., one with linear-size proofs) that has a split accumulation scheme, a weak form of accumulation that we introduce. We then exploit this new framework to obtain a more efficient PCD construction, by giving an accumulation scheme for a non-interactive argument of knowledge for R1CS with constant verification time. Concretely, the recursive circuit can be as small as 3 exponentiations in a group with a hard discrete logarithm. We also avoid the use of FFTs and other structures in the cryptographic group.


Our results are supported by a modular and efficient implementation.

  • Average-case Complexity Through the Lens of Interactive Puzzles

Muthuramakrishnan Venkitasubramaniam

Recording

Abstract:

Consider the following two fundamental open problems in complexity theory: (a) Does a hard-on-average language in NP imply the existence of one-way functions? (b) Does a hard-on-average language in NP imply a hard-on-average problem in TFNP (i.e., the class of total NP search problems)? Our main result is that the answer to (at least) one of these questions is yes.

Both one-way functions and problems in TFNP can be interpreted as promise-true distributional NP search problems---namely, distributional search problems where the sampler only samples true statements. As a direct corollary of the above result, we thus get that the existence of a hard-on-average distributional NP search problem implies a hard-on-average promise-true distributional NP search problem. In other words, "It is no easier to find witnesses (a.k.a. proofs) for efficiently-sampled statements (theorems) that are guaranteed to be true."

This result follows from a more general study of interactive puzzles---a generalization of average-case hardness in NP---and, in particular, a novel round-collapse theorem for computationally-sound protocols, analogous to Babai-Moran's celebrated round-collapse theorem for information-theoretically sound protocols.

Joint work with Rafael Pass.

  • DORY: An Encrypted Search System with Distributed Trust

Emma Dauterman

Recording

Abstract:

Efficient, leakage-free search on encrypted data has remained an unsolved problem for the last two decades; efficient schemes are vulnerable to leakage-abuse attacks, and schemes that eliminate leakage are impractical to deploy. To overcome this tradeoff, we reexamine the system model. We surveyed five companies providing end-to-end encrypted filesharing to better understand what they require from an encrypted search system. Based on our findings, we design and build DORY, an encrypted search system that addresses real-world requirements and protects search access patterns; namely, when a user searches for a keyword over the files within a folder, the server learns only that a search happened in that folder, but does not learn which documents match the search, the number of documents that match, or other information about the keyword. DORY splits trust between multiple servers to protect against a malicious attacker who controls all but one of the servers. We develop new cryptographic and systems techniques to meet the efficiency and trust model requirements outlined by the companies we surveyed. We implement DORY and show that it performs orders of magnitude better than a baseline built on ORAM. Parallelized across 8 servers, each with 16 CPUs, DORY takes 116ms to search roughly 50K documents and 862ms to search over 1M documents.

  • Lower Bound for Oblivious RAM with Large Cells

Wei-kai Lin

Recording

Abstract:

An Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (J. ACM 1996), is a (probabilistic) RAM that hides its access pattern, i.e., for every input the observed locations accessed are similarly distributed. In recent years there has been great progress both in terms of upper bounds as well as in terms of lower bounds, essentially pinning down the smallest overhead possible in various settings of parameters.

In this talk, I will discuss a very natural setting of parameters in which no non-trivial lower bound is known, not even in restricted models of computation (like the so-called balls-and-bins model). Let $N$ and $w$ be the number of cells and the bit-size of cells, respectively, in the RAM that we wish to simulate obliviously, and denote by $b$ the cell bit-size of the ORAM. All previous ORAM lower bounds have a multiplicative $w/b$ factor, which makes them trivial in many parameter settings of interest. In this talk, I will show a new ORAM lower bound that captures this setting (and in all other settings is at least as good as previous ones, quantitatively): any ORAM must make an amortized $\Omega(\log(Nw) / \log(b/w))$ memory probes for every logical operation. Our lower bound implies that logarithmic overhead in the number of accesses is necessary, even if $b \gg w$. Our lower bound is tight, up to the $\log(b/w)$ factor, in these settings of parameters, and it also extends to the non-colluding multi-server setting.

This is a joint work with Ilan Komargodski.

  • Power in Weakness: Efficient Perfectly Secure Multiplication with Optimal Resilience and Constant Time

Gilad Asharov

Abstract:

Secure computation enables $n$ mutually distrustful parties to compute a function over their private inputs jointly. A fundamental result in the area is that of Ben-Or, Goldwasser, and Wigderson (BGW) in 1988, showing that any function can be computed with perfect security in the presence of a malicious adversarial entity controlling at most $t< n/3$ parties.

A crucial part of the BGW construction is a protocol for multiplying two shared values. Despite 30 years of research, this protocol requires sharing a total of $O(n^2)$ values per multiplication. In contrast, the semi-honest protocol of BGW, and comparable secret-sharing-based protocols in the computational setting, require sharing only $O(n)$ values for a single multiplication. In this paper we close this gap, obtaining a more efficient construction while maintaining a round complexity that is constant per multiplication. Our result is obtained by exploring the limits of verifiable secret sharing and constructing a protocol for weak verifiable secret sharing of a degree-$2t$ polynomial with the same communication complexity and round complexity as strong verifiable secret sharing of a degree-$t$ polynomial.
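To see why multiplication is the hard step, here is a minimal Shamir-sharing sketch (illustrative only, not the paper's protocol): multiplying shares pointwise yields a valid sharing of the product, but of degree $2t$ rather than $t$, which is exactly what the degree-reduction and verifiable-sharing machinery must handle:

```python
import random

P = 2**31 - 1  # prime field

def share(secret, deg, n, rng):
    """Shamir-share `secret` with a random degree-`deg` polynomial;
    party i holds the evaluation at point i+1."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(deg)]
    return [sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
            for i in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at 0 from (x, y) pairs."""
    total = 0
    for x_i, y_i in points:
        num = den = 1
        for x_j, _ in points:
            if x_j != x_i:
                num = num * (-x_j) % P
                den = den * (x_i - x_j) % P
        total = (total + y_i * num * pow(den, P - 2, P)) % P
    return total

rng = random.Random(1)
n, t = 7, 2
a, b = 1234, 5678
sa, sb = share(a, t, n, rng), share(b, t, n, rng)
# Pointwise products form a degree-2t sharing of a*b ...
prod = [x * y % P for x, y in zip(sa, sb)]
# ... so 2t+1 = 5 shares suffice to reconstruct the product.
print(reconstruct(list(zip(range(1, 6), prod[:5]))))  # → 7006652 = 1234*5678
```

The doubled degree is why naive repeated multiplication fails and why each multiplication gate needs a resharing step in BGW-style protocols.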

Besides the efficiency improvements, our protocol simplifies the BGW construction overall, which also has pedagogical significance given its centrality. In addition, we show how our new approach improves the efficiency of depth-$1$ computations, e.g., matrix multiplication.

Joint work with Ittai Abraham and Avishay Yanai.

  • Leakage Abuse Attacks in Encrypted Databases

Charalampos Papamanthou

Recording

Abstract:

Since the seventies, one of the holy grails of cryptography has been the invention of encryption algorithms that allow computation to be performed directly on ciphertexts without prior decryption. While heavy cryptographic hammers like fully-homomorphic encryption and oblivious RAMs can address (versions of) the aforementioned problem with ideal security guarantees, encrypted databases provide a more practical alternative. An encrypted database achieves considerable efficiency by releasing some formally-defined and superficially harmless information, known as leakage. However, it turns out such leakage can lead to complete value reconstruction of the database! In this talk I will review some of the basic techniques to perform database reconstruction from range search leakage and then I will present my recent work on query distribution-agnostic attacks on encrypted databases. I will conclude with some suggestions about how to argue formally about the security of encrypted databases.

This talk is based on joint works with Evgenios Kornaropoulos (UC Berkeley), Alexandros Psomas (Purdue University), Dawn Song (UC Berkeley) and Roberto Tamassia (Brown University).

  • Malicious Security Comes Free in Honest-Majority Multiparty Computation

Yifan Song

Recording

Abstract:

Since the notion of Multiparty Computation (MPC) was proposed three decades ago, much research effort has gone into improving the efficiency of MPC protocols. However, the inefficiency of the current state of the art remains the major barrier preventing MPC from being used more broadly.

In this talk, we focus on unconditional (information-theoretic) MPC. A key feature of unconditional MPC is that it needs no expensive cryptographic primitives (such as public-key encryption or oblivious transfer) and is secure unconditionally. Compared with protocols in the computational setting (i.e., with security relying on cryptographic assumptions), one major benefit is that the protocols usually do not require complicated and time-consuming local computations; in particular, local computations are often just a series of linear operations. As a result, the most efficient MPC protocols follow the unconditional MPC paradigm, and the main criterion for their efficiency is the amount of communication between every pair of parties.
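The point that local computation reduces to linear operations can be seen in a small sketch (a generic illustration of additive secret sharing, not a specific protocol from the talk): any linear function of shared values is computed share-by-share, with no interaction:

```python
import random

P = 2**61 - 1
rng = random.Random(7)

def add_share(x, n):
    """Additive sharing: n random field elements summing to x mod P."""
    shares = [rng.randrange(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

n = 5
xs = add_share(20, n)
ys = add_share(22, n)
# The linear function 3x + y is computed share-by-share, locally,
# with no communication between the parties.
zs = [(3 * xi + yi) % P for xi, yi in zip(xs, ys)]
print(sum(zs) % P)  # reconstructs 3*20 + 22 = 82
```

Only non-linear gates (multiplications) force the parties to communicate, which is why communication per multiplication is the efficiency metric discussed above.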

We will start with a short review of the notion of MPC. Then we will introduce the Damgård-Nielsen protocol (DN protocol), the best known communication-efficient unconditional MPC protocol in the semi-honest setting. Next, we will show how previous works achieve malicious security on top of the DN protocol. Finally, we will introduce our techniques, which allow us to achieve malicious security with the same concrete efficiency as the semi-honest DN protocol.

  • Private Information Retrieval with Sublinear Online Time

Dima Kogan

Recording

Abstract:

We present the first protocols for private information retrieval that allow fast (sublinear-time) database lookups without increasing the server-side storage requirements. To achieve these efficiency goals, our protocols work in an offline/online model. In an offline phase, which takes place before the client has decided which database bit it wants to read, the client fetches a short string from the servers. In a subsequent online phase, the client can privately retrieve its desired bit of the database by making a second query to the servers. By pushing the bulk of the server-side computation into the offline phase (which is independent of the client's query), our protocols allow the online phase to complete very quickly—in time sublinear in the size of the database. Finally, we prove that our protocols are optimal in terms of the trade-off they achieve between communication and running time.

Joint work with Henry Corrigan-Gibbs.
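For context, here is the classic two-server PIR baseline (a textbook sketch, not the offline/online scheme from the talk): each server must scan the whole database on every query, which is precisely the linear online cost that the offline phase removes:

```python
import random

def xor_bits(db, subset):
    """Server-side work: XOR of the database bits indexed by `subset`.
    This touches a constant fraction of the database on every query."""
    acc = 0
    for j in subset:
        acc ^= db[j]
    return acc

rng = random.Random(3)
db = [rng.randrange(2) for _ in range(64)]  # public database, one bit per record
i = 37                                      # index the client wants, kept private

# Client: a random set S for server 1, and S with index i toggled for server 2.
S = {j for j in range(len(db)) if rng.randrange(2)}
S2 = S ^ {i}
# Each server sees only a near-uniform random subset, revealing nothing
# about i; the two answers XOR to exactly db[i].
print(xor_bits(db, S) ^ xor_bits(db, S2))
```

Since $S \mathbin{\triangle} S2 = \{i\}$, every bit except `db[i]` cancels in the XOR of the two answers, which is why correctness holds for any choice of $S$.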

  • Leveraging Heuristics for Private Synthetic Data Release

Steven Wu

Recording

Abstract: This talk will focus on differentially private synthetic data---a privatized version of the dataset that consists of fake data records and that approximates the real dataset on important statistical properties of interest. I will present our recent results on private synthetic data that leverage practical optimization heuristics to circumvent the computational bottleneck in existing work. Our techniques are motivated by a modular, game-theoretic framework, which can flexibly work with methods such as integer program solvers and deep generative models.

  • Compact Adaptively Secure ABE from k-Lin: Beyond NC1 and towards NL

Ji Luo

Recording

Abstract: We present a new general framework for constructing compact and adaptively secure attribute-based encryption (ABE) schemes from k-Lin in asymmetric bilinear pairing groups. Previously, the only construction [Kowalczyk and Wee, Eurocrypt '19] that simultaneously achieves compactness and adaptive security from static assumptions supports policies represented by Boolean formulae. Our framework enables supporting more expressive policies represented by arithmetic branching programs.

Our framework extends to ABE for policies represented by uniform models of computation such as Turing machines. Such policies enjoy the feature of being applicable to attributes of arbitrary lengths. We obtain the first compact adaptively secure ABE for deterministic and non-deterministic finite automata (DFA and NFA) from k-Lin, previously unknown from any static assumptions. Beyond finite automata, we obtain the first ABE for large classes of uniform computation, captured by deterministic and non-deterministic logspace Turing machines (the complexity classes L and NL) based on k-Lin. Our ABE scheme has compact secret keys of size linear in the description size of the Turing machine M. The ciphertext size grows linearly in the input length, but also linearly in the time complexity, and exponentially in the space complexity. Irrespective of compactness, we stress that our scheme is the first that supports large classes of Turing machines based solely on standard assumptions. In comparison, previous ABE for general Turing machines all rely on strong primitives related to indistinguishability obfuscation.

This talk is based on two recent works [Lin and Luo, Eurocrypt '20; Lin and Luo, Asiacrypt '21], with a focus on the former. I will present the framework for compact ABE, instantiated for ABP and DFA. Time permitting, I will discuss the ideas for succinct ABE.

  • Sublinear Time Algorithms for Graph Problems

Cliff Liu

Recording

Abstract:

With the rapid growth of the internet, many optimization problems in recent big-data applications often exceed the capacity of traditional computing platforms. My goal is to lay the theoretical foundation of big-data analysis by designing algorithms on modern computing platforms to solve large-scale optimization problems under various resource constraints, with an emphasis on time efficiency and simplicity.

The datasets in most applications originate from, or can be modeled as, graphs, and this thesis focuses on solving graph problems. Often the graph is too large to store in a single machine's memory, making random access to the data very slow. To capture this, one important computational model is streaming: we want to solve the problem using very limited space while minimizing the number of passes over the entire data. Another framework for solving large-scale graph problems is to store the graph in a distributed manner and use the computational power of distributed processors, which is called parallel computation. These two areas continue to be very active, partly due to the success of cloud computing platforms from Google, Microsoft, Amazon, and others.

We obtain faster and simpler algorithms in both the streaming and parallel computation models for graph problems such as connectivity, bipartite matching, and maxflow. In particular, we solve three open problems. The first: given a graph of n vertices and m edges, does there exist a semi-streaming (nearly n space) algorithm that computes the maximum-weight bipartite matching in o(n) passes? We answer this question positively by giving an algorithm that runs in roughly square-root-of-m passes. The algorithm relies on space-efficient versions of the interior-point method, Laplacian system solving, and the generalized isolation lemma, none of which were known before this work. The second problem is to break the square-root-of-n depth barrier for computing maxflow in parallel; the same barrier stands for digraph reachability, matching, and other fundamental problems. Our solution is approximate, but runs in only n^{1/3} depth for unit-capacity graphs. The third problem is to give an o(log n)-depth parallel algorithm for connectivity on a PRAM (parallel random-access machine). All previous PRAM algorithms use at least log n depth. Our algorithm runs in O(log d + loglog n) depth, where d is the diameter, which breaks the log n barrier for small-diameter graphs. Before our work, only an MPC (massively parallel computing, a model much stronger than PRAM) algorithm was known, and it is very complicated. Our algorithm is simpler and more practical, albeit in a much weaker model.

We also design several extremely simple algorithms for connectivity and bipartite matching. For the connectivity problem, we give several elegant algorithms (about five lines of code) in the MPC model, with a state-of-the-art running time of O(log n) for undirected graphs. For the bipartite matching problem, we give a very simple auction-based streaming algorithm that computes a (1-eps)-approximate maximum matching in O(eps^{-2}) passes and O(n) space, improving on the pass and space bounds of previous work; it also yields faster MPC algorithms for matching.
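As a warm-up for the streaming model described above (a classical textbook example, not one of the thesis's algorithms), connectivity of an n-vertex graph can be computed in a single pass with O(n) memory via union-find, regardless of the length of the edge stream:

```python
def connected_components(n, edge_stream):
    """One-pass semi-streaming connectivity: O(n) memory union-find,
    processing each edge exactly once as it arrives."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    count = n
    for u, v in edge_stream:  # single pass over the stream
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            count -= 1
    return count

# Two triangles plus one isolated vertex: 3 components.
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]
print(connected_components(7, edges))  # → 3
```

Only the n-entry `parent` array is kept in memory; the edges themselves are never stored, which is the defining constraint of the semi-streaming setting.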

  • A Bounded-Noise Mechanism for Differential Privacy

Yuval Dagan

Recording

Abstract:

Answering multiple counting queries is one of the best-studied problems in differential privacy. Its goal is to output an approximation of the average $\frac{1}{n}\sum_{i=1}^n \overrightarrow{x}^{(i)}$ of vectors $\overrightarrow{x}^{(i)} \in [0,1]^k$, while preserving privacy with respect to any single $\overrightarrow{x}^{(i)}$. We present an $(\epsilon,\delta)$-private mechanism with optimal $\ell_{\infty}$ error for most values of $\delta$. This result settles the conjecture of Steinke and Ullman [2020] for these values of $\delta$. Our algorithm adds independent noise of bounded magnitude to each of the $k$ coordinates, whereas prior solutions relied on unbounded noise such as the Laplace and Gaussian mechanisms.
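For contrast, here is the classical Laplace baseline with unbounded noise (a sketch of the prior approach mentioned above, not the bounded-noise mechanism from the talk). Under pure $\epsilon$-DP, replacing one vector shifts the mean by at most $k/n$ in $\ell_1$-norm, so per-coordinate Laplace noise of scale $k/(n\epsilon)$ suffices:

```python
import math
import random

def private_mean(vectors, eps, rng):
    """Laplace mechanism for the coordinate-wise mean of vectors in [0,1]^k.
    The l1-sensitivity of the mean is k/n, so noise of scale k/(n*eps)
    per coordinate gives eps-differential privacy."""
    n, k = len(vectors), len(vectors[0])
    mean = [sum(v[j] for v in vectors) / n for j in range(k)]
    scale = k / (n * eps)

    def laplace():
        # Inverse-CDF sampling of Laplace(0, scale); note the tails are
        # unbounded, unlike the bounded-noise mechanism from the talk.
        u = rng.random() - 0.5
        return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

    return [m + laplace() for m in mean]

rng = random.Random(0)
data = [[rng.random() for _ in range(4)] for _ in range(1000)]
out = private_mean(data, eps=1.0, rng=rng)
print(out)
```

The occasional heavy-tailed draw is exactly what a bounded-magnitude noise distribution rules out, at the cost of a more delicate privacy analysis.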

Joint work with Gil Kur.

  • Is Private Learning Possible with Instance Encoding?

Saeed Mahloujifar

Recording

Abstract:

In this work, we study whether a non-private learning algorithm can be made private by relying on an instance-encoding mechanism that modifies the training inputs before feeding them to a normal learner. We formalize the notion of instance encoding and its privacy by providing attack models. We first prove impossibility results for achieving these notions of privacy. Next, we demonstrate an attack on InstaHide, a recent proposal by Huang, Song, Li and Arora [ICML'20] that aims to use instance encoding for privacy.


  • Lockable Signatures for Blockchains: Scriptless Scripts for All Signatures

Sri Aravinda Krishnan Thyagarajan

Recording

Abstract:

Payment Channel Networks (PCNs) have given a huge boost to the scalability of blockchain-based cryptocurrencies: beyond improving the transaction rate, PCNs have enabled cheap cross-currency payments and atomic swaps. However, current PCN proposals either heavily rely on special scripting features of the underlying blockchain (e.g., Hash Time Lock Contracts) or are tailored to a handful of digital signature schemes, such as Schnorr or ECDSA signatures. This leaves us in an unsatisfactory situation where many actively developed currencies that use different signature schemes cannot enjoy the benefits of a PCN.

In this work, we investigate whether we can construct PCNs assuming only the minimal ability of a blockchain to verify a digital signature, for any signature scheme. In answering this question in the affirmative, we introduce the notion of lockable signatures, which constitutes the cornerstone of our PCN protocols. Our approach is generic and the PCN protocol is compatible with any digital signature scheme, thus inheriting all favorable properties of the underlying scheme that are not offered by Schnorr/ECDSA (e.g., aggregatable signatures or post-quantum security).

While the usage of generic cryptographic machinery makes our generic protocol impractical, we view it as an important feasibility result, as it may serve as the basis for constructing optimized protocols for specific signature schemes. To substantiate this claim, we design a highly efficient PCN protocol for the special case of Boneh-Lynn-Shacham (BLS) signatures. BLS signatures enjoy many unique features that make them a viable candidate for a blockchain, e.g., short, unique, and aggregatable signatures. Yet, prior to our work, no PCN was known to be compatible with them (without requiring an advanced scripting language). The cost of our PCN is dominated by a handful of calls to the BLS algorithms. Our concrete evaluation of these basic operations shows that users with commodity hardware can process payments with minimal overhead.


  • Batch OT with Optimal Rate

Mingkuan Xu

March 31, 2022

In this talk, we show that it is possible to perform $n$ independent copies of $1$-out-of-$2$ oblivious transfer in two messages, where the communication complexity of the receiver and sender (each) is $n(1+o(1))$ for sufficiently large $n$. Note that this matches the information-theoretic lower bound. Prior to this work, this was only achievable by using the heavy machinery of rate-$1$ fully homomorphic encryption (Rate-$1$ FHE, Brakerski et al., TCC 2019).


To achieve rate-$1$ on both the receiver's and the sender's end, we use the LPN assumption with slightly sub-constant noise rate $1/m^{\epsilon}$ for any $\epsilon>0$, together with either the DDH, QR, or LWE assumption. In terms of efficiency, our protocols rely only on linear homomorphism, as opposed to the FHE-based solution, which inherently requires an expensive "bootstrapping" operation. We believe that in terms of efficiency we compare favorably to existing batch-OT protocols, while achieving superior communication complexity. We show similar results for Oblivious Linear Evaluation (OLE).


Joint work with Zvika Brakerski, Nico Döttling, and Sihang Pu.