# CMU CyLab Crypto Seminar

The CMU CyLab Crypto Seminar is an informal seminar series at CMU. The seminar meets every Thursday at 4:30 pm ET. We meet in person and stream the talks live via Zoom; recordings are posted on the CMU Crypto YouTube channel.

If you are interested in giving a talk, or would like more information, please contact Quang or Nikhil.

If you would like to receive notifications for upcoming talks, please subscribe to the crypto-announce mailing list. You can also follow @cmucrypto on Twitter for updates!

We thank CyLab for generously sponsoring this seminar series.

### Upcoming Talks

TBD

Yi Tang

Fall 2024

Abstract: TBD

TBD

Quang Dao

Fall 2024

Abstract: TBD

### Previous Talks

LatticeFold: A Lattice-based Folding Scheme and its Applications to Succinct Proof Systems

Binyi Chen

Abstract: Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure.

In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is highly performant compared to existing folding schemes while providing plausible post-quantum security.

This is a joint work with Dan Boneh.

Abstract: Private Information Retrieval (PIR) is a cryptographic primitive that allows a client to query for some element of a public database, held by a server, without revealing the element that the client is interested in. Recent developments in PIR have used client-side preprocessing to speed up online query times. In this model, a client can run offline to compute a hint which it later uses to issue online queries. Unfortunately, all proposed solutions in this model before this work suffer from two significant drawbacks: (1) updating an entry in the database requires some inefficient computation and (2) a client's query time is linear in their hint size.

In this work, we overcome both of these obstacles by proposing Plinko, a new PIR scheme in the client-side preprocessing model. As part of our construction, we provide a new primitive called an invertible pseudorandom function, which allows someone with the secret key to find the pre-image of some output efficiently. This primitive allows us to generically upgrade two previously proposed schemes to both: (1) update entries with nearly-constant time and communication and (2) avoid clients' linear pass through their hints, improving the asymptotic runtime for clients with large storage.
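The invertible pseudorandom function described above is, roughly, a keyed function whose outputs look random yet whose preimages the key holder can compute efficiently. As a hedged illustration only (not Plinko's actual construction), a small-domain pseudorandom permutation built from a Feistel network has exactly this shape: whoever holds the key can undo each round.

```python
import hmac, hashlib

def _round(key: bytes, i: int, half: bytes) -> bytes:
    # Pseudorandom round function, keyed by (key, round index).
    return hmac.new(key, bytes([i]) + half, hashlib.sha256).digest()[:len(half)]

def iprf_eval(key: bytes, x: bytes, rounds: int = 4) -> bytes:
    # Forward direction: a 4-round Feistel network over an even-length block.
    n = len(x) // 2
    L, R = x[:n], x[n:]
    for i in range(rounds):
        L, R = R, bytes(a ^ b for a, b in zip(L, _round(key, i, R)))
    return L + R

def iprf_invert(key: bytes, y: bytes, rounds: int = 4) -> bytes:
    # With the key, each round can be undone, so preimages are easy to find.
    n = len(y) // 2
    L, R = y[:n], y[n:]
    for i in reversed(range(rounds)):
        L, R = bytes(a ^ b for a, b in zip(R, _round(key, i, L))), L
    return L + R
```

Without the key, inversion should be as hard as inverting a random permutation; with the key, inversion costs the same as evaluation.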

Abstract: Private Information Retrieval (PIR) enables a client to retrieve a database element from a semi-honest server while hiding the element being queried from the server. Maliciously-secure PIR (mPIR) [Colombo et al., USENIX '23] strengthens the guarantees of plain (i.e., semi-honest) PIR by ensuring that even a misbehaving server:

* cannot compromise client privacy via selective-failure attacks, and

* must answer every query consistently (i.e., with respect to the same database).

These additional security properties are crucial for many real-world applications.

In this work we present a generic compiler that transforms any PIR scheme into an mPIR scheme in a black-box manner, with minimal overhead, and without requiring additional cryptographic assumptions. Since mPIR trivially implies PIR, our compiler establishes the equivalence of mPIR and PIR. By instantiating our compiler with existing PIR schemes, we immediately obtain mPIR schemes with O(N^ε) communication cost. In fact, by applying our compiler to a recent doubly-efficient PIR [Lin et al., STOC '23], we are able to construct a doubly-efficient mPIR scheme that requires only polylog(N) communication and server and client computation. In comparison, all prior works incur an Ω(√N) cost in these metrics.

Our compiler makes use of smooth locally-decodable codes (LDCs) that have a robust decoding procedure. We term these codes “subcode”-LDCs, because they are LDCs where the query responses are from an error-correcting code. This property is shared by Reed-Muller codes (whose query responses are Reed-Solomon codewords) and more generally lifted codes.

Applying our compiler requires us to consider decoding in the face of non-signaling adversaries, for reasons analogous to the need for non-signaling PCPs in the succinct-argument literature. We show how to construct such decoders for Reed–Muller codes, and more generally for smooth locally-decodable codes that have a robust decoding procedure.

Abstract: In this talk, we present the first general tight lower bound on the complexity of memory checkers with computational security.

Memory checkers, first introduced over 30 years ago by Blum, Evans, Gemmell, Kannan, and Naor (FOCS '91, Algorithmica '94), allow a user to store and maintain a large memory on a remote and unreliable server by using small trusted local storage. The user can issue instructions to the server and, after every instruction, obtain either the correct value or a failure (but not an incorrect answer) with high probability. The main complexity measures of interest are the size of the local storage and the number of queries the memory checker makes upon every logical instruction. The most efficient known construction has query complexity O(log n/log log n) and local space proportional to a computational security parameter, assuming one-way functions, where n is the logical memory size. Dwork, Naor, Rothblum, and Vaikuntanathan (TCC '09) showed that for a restricted class of “deterministic and non-adaptive” memory checkers, this construction is optimal, up to constant factors. However, going beyond the small class of deterministic and non-adaptive constructions has remained a major open problem.

In this talk, we fully resolve the complexity of memory checkers by showing that any construction with local space p and query complexity q must satisfy p ≥ n/(log n)^O(q). This implies, as a special case, that q ≥ Ω(log n/log log n) in any scheme, assuming that p ≤ n^0.999. The bound applies to any scheme with computational security, completeness 2/3, and soundness inverse-polynomial in n (all of which make our lower bound only stronger). We further extend the lower bound to schemes where the read complexity q_r and write complexity q_w differ. For instance, we show the tight bound that if q_r = O(1) and p ≤ n^0.999, then q_w ≥ n^Ω(1). This is the first lower bound, for any non-trivial class of constructions, showing a read-write query complexity trade-off.

Our proof is via a delicate compression argument showing that a “too good to be true” memory checker can be used to compress random bits of information. We draw inspiration from tools recently developed for lower bounds for relaxed locally decodable codes. However, our proof itself significantly departs from these works, necessitated by the differences between settings.

Based on joint work with Elette Boyle and Ilan Komargodski.
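For context on the upper bound discussed above, the folklore memory-checker construction is a Merkle tree: the user's trusted local storage is a single root hash, and each read is checked against an authentication path of O(log n) hashes. The sketch below is a toy illustration of this classical idea, not the O(log n/log log n) scheme from the literature (which uses higher-arity trees); names are our own.

```python
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def build_tree(leaves):
    # leaves: list of bytes whose length is a power of two.
    level = [H(b"leaf:" + v) for v in leaves]
    tree = [level]
    while len(level) > 1:
        level = [H(b"node:" + level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree  # tree[-1][0] is the root: the user's trusted local storage

def prove(tree, idx):
    # The untrusted server returns one sibling hash per level.
    path = []
    for level in tree[:-1]:
        path.append(level[idx ^ 1])
        idx //= 2
    return path

def verify(root, idx, value, path):
    # The user recomputes the root from the claimed value and path.
    h = H(b"leaf:" + value)
    for sib in path:
        h = H(b"node:" + (h + sib if idx % 2 == 0 else sib + h))
        idx //= 2
    return h == root
```

A server that returns a wrong value cannot produce a path that re-hashes to the stored root (under collision resistance), so the user detects the error rather than accepting an incorrect answer.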

Abstract: The existence of “unstructured” hard languages in NP ∩ coNP is an intriguing open question. Bennett and Gill (SICOMP, 1981) asked whether P is separated from NP ∩ coNP relative to a random oracle, a question that has remained open ever since. While a hard language in NP ∩ coNP can be constructed in a black-box way from a one-way permutation, for which only a few (structured) candidates exist, Bitansky et al. (SICOMP, 2021) ruled out such a construction based on an injective one-way function, an unstructured primitive that is easy to instantiate heuristically. In fact, the latter holds even with a black-box use of indistinguishability obfuscation.

We give the first evidence for the existence of unstructured hard languages in NP ∩ coNP by showing that if UP ⊈ RP, which follows from the existence of injective one-way functions, the answer to Bennett and Gill's question is affirmative: with probability 1 over a random oracle 𝒪, we have that P^𝒪 ≠ NP^𝒪 ∩ coNP^𝒪. Our proof gives a constructive non-black-box approach for obtaining candidate hard languages in NP ∩ coNP from cryptographic hash functions.

The above conditional separation builds on a new construction of non-interactive zero-knowledge (NIZK) proofs, with a computationally unbounded prover, to convert a hard promise problem into a hard language. We obtain such NIZK proofs for NP, with a uniformly random reference string, from a special kind of hash function which is implied by (an unstructured) random oracle. This should be contrasted with previous constructions of such NIZK proofs that are based on one-way permutations or other structured primitives, as well as with (computationally sound) NIZK arguments in the random oracle model.

This joint work with Riddhi Ghosal, Yuval Ishai, Alexis Korb, Eyal Kushilevitz, and Amit Sahai.

Abstract: Threshold signature is one of the most important cryptographic primitives in distributed systems. A popular choice of threshold signature scheme is the BLS threshold signature introduced by Boldyreva (PKC '03). Some attractive properties of Boldyreva's threshold signature are that the signatures are unique and short, the signing process is non-interactive, and the verification process is identical to that of non-threshold BLS. These properties have resulted in its practical adoption in several decentralized systems. However, despite its popularity and wide adoption, up until recently, the Boldyreva scheme has been proven secure only against a static adversary. Very recently, Bacho and Loss (CCS '22) presented the first proof of adaptive security for Boldyreva's threshold signature, but they have to rely on strong and non-standard assumptions such as the hardness of one-more discrete log (OMDL) and the Algebraic Group Model (AGM).

In this paper, we present the first adaptively secure threshold BLS signature scheme that relies on the hardness of DDH and co-CDH in asymmetric pairing groups in the Random Oracle Model (ROM). Our signature scheme also has non-interactive signing, compatibility with non-threshold BLS verification, and practical efficiency like Boldyreva's scheme. Moreover, to achieve static security, our scheme only needs the hardness of CDH in the ROM, which is the same as the standard non-threshold BLS signature. These properties make our protocol a suitable candidate for practical adoption with the added benefit of provable adaptive security. We also present an efficient distributed key generation (DKG) protocol to set up the signing keys for our signature scheme. We implement our scheme in Go and evaluate its signing and aggregation costs.

Abstract: One-way functions are central to classical cryptography. They are necessary for the existence of non-trivial classical cryptosystems, and also sufficient to realize meaningful primitives including commitments, pseudorandom generators and digital signatures. At the same time, a mounting body of evidence suggests that assumptions even weaker than one-way functions may suffice for many cryptographic tasks of interest in a quantum world, including bit commitments and secure multi-party computation.

This work studies one-way state generators [Morimae-Yamakawa, CRYPTO 2022], a natural quantum relaxation of one-way functions. Given a secret key, a one-way state generator outputs a hard-to-invert quantum state. A fundamental question is whether this type of quantum one-wayness suffices to realize quantum cryptography. We obtain an affirmative answer to this question, by proving that one-way state generators with pure state outputs imply quantum bit commitments and secure multiparty computation.

Along the way, we use efficient shadow tomography [Huang et al., Nature Physics 2020] to build an intermediate primitive with classical outputs, which we call a (quantum) one-way puzzle. Our main technical contribution is a proof that one-way puzzles imply quantum bit commitments. This proof develops new techniques for pseudoentropy generation [Hastad et al., SICOMP 1999] from arbitrary distributions, which may be of independent interest.

Accountability for Misbehavior in Threshold Decryption via Threshold Traitor Tracing

Aditi Partap

Abstract: A t-out-of-n threshold decryption system assigns key shares to n parties so that any t of them can decrypt a well-formed ciphertext. Existing threshold decryption systems are *not secure* when these parties are rational actors: an adversary can offer to pay the parties for their key shares. The problem is that a quorum of t parties, working together, can sell the adversary a decryption key that reveals nothing about the identity of the traitor parties. This provides a risk-free profit for the parties, since there is no accountability for their misbehavior: the information they sell to the adversary reveals nothing about their identity. This behavior can result in a complete break in many applications of threshold decryption, such as encrypted mempools, private voting, and sealed-bid auctions.

In this work we propose a solution to this problem. Suppose a quorum of t or more parties construct a decoder algorithm D(·) that takes as input a ciphertext and outputs the corresponding plaintext or ⊥. They sell D to the adversary. Our threshold decryption systems are equipped with a tracing algorithm that can trace D to members of the quorum that created it. The tracing algorithm is only given black-box access to D and will identify some members of the misbehaving quorum. The parties can then be held accountable, which may discourage them from selling the decoder D in the first place.

Our starting point is standard (non-threshold) traitor tracing, where each of n parties holds a secret key. Every party can decrypt a well-formed ciphertext on its own. However, if a subset of parties J ⊆ [n] colludes to create a pirate decoder D(·) that can decrypt well-formed ciphertexts, then it is possible to trace D to at least one member of J using only black-box access to the decoder D.

In this work we develop the theory of traitor tracing for threshold decryption, where now only a subset J ⊆ [n] of t or more parties can collude to create a pirate decoder D(·). This problem has recently become quite important due to the real-world deployment of threshold decryption in encrypted mempools, as we explain in the paper. While there are several non-threshold traitor tracing schemes that we can leverage, adapting these constructions to the threshold decryption setting requires new cryptographic techniques. We present a number of constructions for traitor tracing for threshold decryption, and note that much work remains to explore the large design space.

Abstract: Time-Lock Puzzles (TLPs) are a powerful tool for concealing messages until a predetermined point in time. When solving multiple puzzles, it becomes crucial to have the ability to batch-solve puzzles, i.e., to open multiple puzzles at the cost of solving a single one. Unfortunately, all previously known TLP constructions that support batch solving rely on super-polynomially secure indistinguishability obfuscation, making them impractical.

In light of this challenge, we present novel TLP constructions that offer batch-solving capabilities without using heavy cryptographic hammers. Our proposed schemes are simple and concretely efficient, and they can be constructed based on well-established cryptographic assumptions based on pairings or learning with errors (LWE). Along the way, we introduce new constructions of puncturable key-homomorphic PRFs both in the lattice and in the pairing setting, which may be of independent interest. Our analysis leverages an interesting connection to Hall’s marriage theorem and incorporates an optimized combinatorial approach, enhancing the practicality and feasibility of our TLP schemes. Furthermore, we introduce the concept of “rogue-puzzle attacks”, where maliciously crafted puzzle instances may disrupt the batch-solving process of honest puzzles. We then propose constructions of concrete and efficient TLPs designed to prevent such attacks.

Joint work with: Jesko Dujmovic and Giulio Malavolta
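As background (and not the batch-solving constructions of this work), the classic time-lock puzzle of Rivest, Shamir, and Wagner can be sketched in a few lines: the creator uses the factorization trapdoor to compute 2^(2^T) mod N quickly, while a solver must perform T sequential squarings. The parameters below are toy values chosen for the sketch.

```python
import hashlib

# Toy parameters: two Mersenne primes stand in for large random RSA primes.
p, q = 2**127 - 1, 2**61 - 1
N, phi = p * q, (p - 1) * (q - 1)

def _mask(key_int: int) -> int:
    # Derive a 128-bit one-time pad from the puzzle key.
    return int.from_bytes(hashlib.sha256(str(key_int).encode()).digest()[:16], "big")

def create(secret: int, T: int) -> int:
    # With the trapdoor phi(N), the creator computes 2^(2^T) mod N fast:
    # first reduce the exponent 2^T modulo phi(N), then exponentiate once.
    key = pow(2, pow(2, T, phi), N)
    return secret ^ _mask(key)

def solve(ct: int, T: int) -> int:
    # Without the trapdoor, T sequential squarings appear necessary.
    y = 2
    for _ in range(T):
        y = y * y % N
    return ct ^ _mask(y)
```

The sequentiality of the squarings is what enforces the time delay; a batch-solving TLP, the subject of this talk, lets one such sequential computation open many puzzles at once.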

Abstract: In an (α,β)-weak non-interactive zero knowledge (NIZK), the soundness error is at most α and the zero-knowledge error is at most β. Goyal, Jain, and Sahai (CRYPTO 2019) show that if α+β<1 for some constants α,β, then (α,β)-weak NIZK can be turned into fully-secure NIZK, assuming sub-exponentially-secure public-key encryption. We revisit the problem of NIZK amplification:

– We amplify NIZK arguments assuming only polynomially-secure public-key encryption, for any constants α+β<1.

– We amplify NIZK proofs assuming only one-way functions, for any constants α+β<1.

– When the soundness error α is negligible to begin with, we can also amplify NIZK arguments assuming only one-way functions.

Our results are based on the hidden-bits paradigm, and can be viewed as a reduction from NIZK amplification to the better understood problem of pseudorandomness amplification.

Joint work with Nir Bitansky.

Abstract: Proof-carrying data (PCD) is a widely used cryptographic primitive that can be obtained by recursively composing SNARKs or related primitives. However, these constructions do not come with security analyses that yield useful concrete security bounds.

In this work we show that the PCD obtained from SNARKs with straightline knowledge soundness has essentially the same security as the underlying SNARK. In this setting, recursive composition incurs no security loss.

As a notable application, our work offers an idealized model that provides useful, albeit heuristic, guidance for setting the security parameters of recursive STARKs currently used in blockchain systems.

Based on https://eprint.iacr.org/2023/1646.pdf, joint work with Alessandro Chiesa, Shahar Samocha, and Eylon Yogev.

Abstract: Secret sharing allows a user to split a secret into many shares so that the secret can be recovered if, and only if, an authorized set of shares is collected. Although secret sharing typically does not require any computational hardness assumptions, its security does require that an adversary cannot collect an authorized set of shares. Over long periods of time where an adversary can benefit from multiple data breaches, this may not be a realistic assumption.

We initiate the systematic study of secret sharing with certified deletion in order to achieve security even against an adversary that eventually collects an authorized set of shares. In secret sharing with certified deletion, a (classical) secret s is split into quantum shares which can be destroyed in a manner verifiable by the dealer. We put forth two natural definitions of security. No-signaling security roughly requires that if multiple non-communicating adversaries delete sufficiently many shares, then their combined view contains negligible information about s, even if the total set of corrupted parties forms an authorized set. Adaptive security requires privacy of s against an adversary that can continuously and adaptively corrupt new shares and delete previously-corrupted shares, as long as the total set of corrupted shares minus deleted shares remains unauthorized.

Next, we show that these security definitions are achievable: we show how to construct (i) a secret sharing scheme with no-signaling certified deletion for any monotone access structure, and (ii) a threshold secret sharing scheme with adaptive certified deletion. Our first construction uses Bartusek and Khurana's (CRYPTO 2023) 2-out-of-2 secret sharing scheme with certified deletion as a building block, while our second construction is built from scratch and requires several new technical ideas. For example, we significantly generalize the “XOR extractor” of Agarwal, Bartusek, Khurana, and Kumar (EUROCRYPT 2023) in order to obtain better seedless extraction from certain quantum sources of entropy, and show how polynomial interpolation can double as a high-rate randomness extractor in our context of threshold sharing with certified deletion.
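For readers less familiar with the classical primitive being extended here, a minimal sketch of Shamir t-out-of-n secret sharing (with no certified-deletion guarantees) shows what an "authorized set" means: any t shares determine the secret by Lagrange interpolation at zero, while any t-1 shares are consistent with every possible secret.

```python
import random

P = 2**61 - 1  # prime modulus defining the field

def share(secret: int, t: int, n: int):
    # Random polynomial of degree t-1 with constant term = secret;
    # party x gets the evaluation at x.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation of the polynomial at 0.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

The schemes in this talk replace these classical shares with quantum states, so that a share can later be provably destroyed.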

Abstract: The quantum no-cloning theorem gives rise to the intriguing possibility of quantum copy protection, where we encode a program in a quantum state such that a user in possession of k such states cannot create (k+1) working copies. Introduced by Aaronson (CCC '09) over a decade ago, copy protection has proven to be notoriously hard to achieve.

In this work, we construct public-key encryption and functional encryption schemes whose secret keys are copy-protected against unbounded collusions in the plain model (i.e. without any idealized oracles), assuming (post-quantum) subexponentially secure iO, one-way functions and LWE. This resolves a long-standing open question of constructing fully collusion-resistant copy-protected functionalities raised by multiple previous works.

Prior to our work, copy-protected functionalities were known only in restricted collusion models: either an a-priori bound on the collusion size was needed, in the plain model under the same assumptions as ours (Liu, Liu, Qian, Zhandry [TCC '22]), or the adversary was only prevented from doubling their number of working programs, in a structured quantum oracle model (Aaronson [CCC '09]).

We obtain our results through a novel technique which uses identity-based encryption to construct unbounded collusion resistant copy-protection schemes from 1-to-2 secure schemes. This is analogous to the technique of using digital signatures to construct full-fledged quantum money from single banknote schemes (also called mini-schemes) (Lutomirski et al. [ICS'09], Farhi et al. [ITCS'12], Aaronson and Christiano [STOC'12]). We believe our technique is of independent interest.

Along the way, we also construct a functional encryption scheme whose master secret key can be punctured at all functions f such that f(m_0) != f(m_1). This might also be of independent interest.

Abstract: Over the past few years, homomorphic secret sharing (HSS) emerged as a compelling alternative to fully homomorphic encryption (FHE), due to its feasibility from an array of standard assumptions and its potential efficiency benefits. However, all known HSS schemes, with the exception of schemes built from FHE or indistinguishability obfuscation (iO), can only support two or four parties.

In this work, we give the first construction of a multi-party HSS scheme for a non-trivial function class, from an assumption not known to imply FHE. In particular, we construct an HSS scheme for an arbitrary number of parties with an arbitrary corruption threshold, supporting evaluations of multivariate polynomials of degree log/log log over arbitrary finite fields. As a consequence, we obtain a secure multiparty computation (MPC) protocol for any number of parties, with (slightly) sub-linear per-party communication of roughly O(S / log log S) bits when evaluating a layered Boolean circuit of size S.

Our HSS scheme relies on the Sparse Learning Parity with Noise assumption, a standard variant of LPN with a sparse public matrix that has been studied and used in prior works. Thanks to this assumption, our construction enjoys several unique benefits. In particular, it can be built on top of any linear secret sharing scheme, producing noisy output shares that can be error-corrected by the decoder. This yields HSS for low-degree polynomials with optimal download rate. Unlike prior works, our scheme also has low computational overhead, in that the per-party computation of a constant-degree polynomial takes O(M) work, where M is the number of monomials.
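The observation that linear secret sharing already supports homomorphic evaluation of *linear* functions can be sketched concretely. The scheme in this talk goes far beyond this base case, but the toy example below (our own illustration, using additive sharing) shows local, non-interactive evaluation on shares.

```python
import random

P = 2**61 - 1  # prime modulus

def additive_share(x: int, n: int):
    # Split x into n random shares that sum to x modulo P.
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % P)
    return parts

def local_eval(coeffs, my_shares):
    # Each party applies the public linear function to its own shares;
    # no interaction is needed, and the output shares still sum to
    # f(x_1, ..., x_k) by linearity.
    return sum(c * s for c, s in zip(coeffs, my_shares)) % P
```

Extending this local-evaluation property to higher-degree polynomials is exactly where the sparse-LPN machinery of the talk comes in.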

Abstract: Registration-based encryption (RBE) was recently introduced as an alternative to identity-based encryption (IBE), to resolve the key-escrow problem: In RBE, the trusted authority is substituted with a weaker entity, called the key curator, who has no knowledge of any secret key. Users generate keys on their own and then publicly register their identities and their corresponding public keys to the key curator. RBE is a promising alternative to IBE, retaining many of its advantages while removing the key-escrow problem, the major drawback of IBE. Unfortunately, all existing constructions of RBE use cryptographic schemes in a non-black-box way, which makes them prohibitively expensive. It has been estimated that the size of an RBE ciphertext would be on the order of terabytes (though no RBE scheme has ever been implemented).

In this work, we propose a new approach to construct RBE, from standard assumptions in bilinear groups. Our scheme is black-box and it is concretely highly efficient—a ciphertext is 914 bytes. To substantiate this claim, we implemented a prototype of our scheme and we show that it scales to millions of users. The public parameters of the scheme are on the order of kilobytes. The most expensive operation (registration) takes at most a handful of seconds, whereas the encryption and decryption runtimes are on the order of milliseconds. This is the first-ever implementation of an RBE scheme and demonstrates that the practical deployment of RBE is already possible with today’s hardware.

Abstract: In this work we present a novel actively secure dishonest majority MPC protocol, SUPERPACK, whose efficiency improves as the number of honest parties increases. Concretely, let 0 < ϵ < 1/2 and consider an adversary that corrupts t < n(1 − ϵ) out of n parties. SUPERPACK requires 6/ϵ field elements of online communication per multiplication gate across all parties, assuming circuit-dependent preprocessing, and 10/ϵ assuming circuit-independent preprocessing. In contrast, most of the previous works such as SPDZ (Damgård et al, ESORICS 2013) and its derivatives perform the same regardless of whether there is only one honest party or a constant (non-majority) fraction of honest parties. A notable exception is due to Goyal et al (CRYPTO 2022), which achieves 58/ϵ + 96/ϵ2 field elements assuming circuit-independent preprocessing. Our work improves this result substantially by a factor of at least 25 in the circuit-independent preprocessing model.

Practically, we also compare our work with the best concretely efficient online protocol Turbospeedz (Ben-Efraim et al, ACNS 2019), which achieves 2(1 − ϵ)n field elements per multiplication gate among all parties. Our online protocol improves over Turbospeedz as n grows, and as ϵ approaches 1/2. For example, if there are 90% corruptions (ϵ = 0.1), with n = 50 our online protocol is 1.5× better than Turbospeedz and with n = 100 this factor is 3×, but for 70% corruptions (ϵ = 0.3) with n = 50 our online protocol is 3.5× better, and for n = 100 this factor is 7×.

Our circuit-dependent preprocessing can be instantiated from OLE/VOLE. The amount of OLE/VOLE correlations required in our work is a factor of ≈ ϵn/2 smaller than those required by Le Mans (Rachuri and Scholl, CRYPTO 2022), leveraged to instantiate the preprocessing of Turbospeedz. Our dishonest majority protocol relies on packed secret-sharing and leverages ideas from the honest majority TURBOPACK (Escudero et al, CCS 2022) protocol to achieve concrete efficiency for any circuit topology, not only SIMD. We implement both SUPERPACK and Turbospeedz and verify with experimental results that our approach indeed leads to more competitive runtimes in distributed environments with a moderately large number of parties.

Single-Server Secure Aggregation with Applications to Private Federated Learning

Yiping Ma

Oct 12, 2023

Abstract: Federated learning is a decentralized approach for training models using data distributed across many devices, without the data leaving the devices. This decentralized training paradigm addresses systemic risks of data misuse, yet it also introduces new privacy and security challenges that cannot be resolved by techniques originally designed in the centralized setting.

In this talk, I will discuss these challenges and recent advances in addressing them. In particular, I will present Flamingo, a system built for efficient privacy-preserving federated training. I will introduce new cryptographic protocols and systems-level optimizations that make Flamingo practically viable for real-world model training. Using a multi-agent message simulator, we show that Flamingo can securely train a neural network on the CIFAR-100 dataset with only a 2-3X overhead in the overall runtime compared to a non-private federated learning system.

Abstract: The Range Avoidance Problem (AVOID) asks us to find a string outside the range of a given circuit C: {0,1}^n→{0,1}^m, where m>n. This problem is total, as the existence of a solution is ensured by the dual pigeonhole principle. A recent line of works shows that deterministic solutions (FP, FNP, or FP^NP) for AVOID are closely related to circuit lower bounds, explicit constructions of combinatorial objects, and proof complexity generators.

In this talk, we will show that AVOID has no deterministic polynomial-time algorithm assuming witness encryption and NP ≠ coNP. Moreover, we will also explain a generalization of this result that conditionally separates APC1 and PV, two important theories in bounded arithmetic.

This is a joint work with Rahul Ilango and Ryan Williams.
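The problem itself is easy to state in code: a brute-force (exponential-time) solver simply enumerates the range of C and outputs a string outside it, which must exist since m > n. The talk concerns whether anything deterministic and polynomial-time can replace this enumeration.

```python
from itertools import product

def avoid(C, n: int, m: int) -> str:
    # Enumerate the at-most-2^n outputs of C over all n-bit inputs, then
    # return the lexicographically first m-bit string outside the range.
    assert m > n  # the dual pigeonhole principle guarantees a solution
    image = {C(x) for x in product("01", repeat=n)}
    for y in product("01", repeat=m):
        s = "".join(y)
        if s not in image:
            return s
```

Here C is modeled as any function from n-tuples of bits to m-bit strings; the search takes time exponential in n and m, which is exactly the cost the FP/FNP/FP^NP questions above ask to avoid.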

Owl: Compositional Verification of Security Protocols via an Information-Flow Type System

Josh Gancher

Abstract: Security protocols, such as TLS, Kerberos, and WireGuard, are not only complex in their implementations -- often involving nontrivial optimizations for maximal performance -- but are also complex in their designs, requiring significant proof effort to verify their cryptographic security guarantees. Thus, to confidently rely on these protocols for security-critical tasks, we need to ensure that they are simultaneously running correctly and implementing secure designs.

To address this need, we introduce Owl, a new programming language for developing security protocols. Using Owl, developers express their protocols using a novel mix of refinement and information flow types; in turn, all well-typed programs are guaranteed to be secure. Indeed, Owl, for the first time, simultaneously guarantees computational soundness against PPT attackers, a high degree of proof automation, and type-based abstractions for modular protocol developments.

This talk will introduce our work on Owl, including our new proof techniques for guaranteeing the existence of cryptographic reductions via type systems, and our ongoing work to extract verified, high-performance Rust code from Owl protocols using the Verus toolchain.

On the (Im)possibility of Distributed Samplers: Lower Bounds and Party-Dynamic Constructions

Damiano Abram

Abstract: Distributed samplers, introduced by Abram, Scholl and Yakoubov (Eurocrypt ’22), are a one-round, multi-party protocol for securely sampling from any distribution. We give new lower and upper bounds for constructing distributed samplers in challenging scenarios.

First, we consider the feasibility of distributed samplers with a malicious adversary in the standard model; the only previous construction in this setting relies on a random oracle. We show that for any UC-secure construction in the standard model, even with

a CRS, the output of the sampling protocol must have low entropy. This essentially implies that this type of construction is useless in applications.

Secondly, we study the question of building distributed samplers in the party-dynamic setting, where parties can join in an ad-hoc manner and the total number of parties is unbounded. Here, we obtain positive results. First, we build a special type of unbounded universal sampler which, after a trusted setup, allows sampling from any distribution of unbounded size. Our construction is in the shared randomness model, where the parties have access to a shared random string, and uses indistinguishability obfuscation and somewhere statistically binding hashing. Next, using our unbounded universal sampler, we construct distributed universal samplers in the party-dynamic setting. Our first construction satisfies one-time selective security in the shared randomness model. Our second construction is reusable and secure against a malicious adversary in the random oracle model.

Finally, we show how to use party-dynamic, distributed universal samplers to produce ideal, correlated randomness in the party-dynamic setting, in a single round of interaction. This primitive proposes a solution to the problem of trusted setups: if all major institutions around the world broadcast, once and for all, a party-dynamic distributed universal sampler message, any subset of parties on earth can non-interactively implement a trusted setup for their MPC protocol by just using the distributed universal sampler messages published by the institutions they trust. The institutions do not need to be online when the MPC protocol is executed, nor do they need to be aware of which MPC protocol is run or of the set of participants: their role is concluded once their message is published.

Work carried out in collaboration with Maciej Obremski and Peter Scholl.

A General Composition Theorem for Differentially Oblivious Mechanisms

Hubert Chan

June 29, 2023

Abstract: It is well-known that memory access patterns to encrypted data can leak sensitive information. Fully oblivious algorithms have been investigated whose memory access patterns are obfuscated and do not leak any information about secret inputs. However, to ensure full obliviousness, it has been proved that logarithmic or even polynomial slow-down is necessary in some cases.

Inspired by differential privacy (DP), differentially oblivious (DO) mechanisms have been studied, where the adversary cannot distinguish between access patterns easily for neighboring inputs, i.e., inputs that are close. Indeed, the lower bounds for some fully oblivious mechanisms (such as merging or database join) can be circumvented if one considers the relaxed notion of differential obliviousness.

However, until our recent work at Eurocrypt ’23, it was not clear how differentially oblivious mechanisms can be composed together. In contrast with differential privacy composition, differentially oblivious mechanisms only have privacy guarantees on the access patterns, but do not specify any properties of the outputs that will be passed to another mechanism. In the paper, we propose an additional neighbor-preserving property of DO mechanisms, which enables a basic composition theorem for DO mechanisms.

In this talk, we will explain the technical insights behind this notion of neighbor preserving DO mechanisms. Moreover, in a follow-up work, we have investigated an alternative interpretation that unifies DO composition with standard DP composition. The result is that we can generalize any divergence-based DP notion (such as Renyi DP, zero-concentrated DP, f-DP) to DO mechanisms. Moreover, any DP composition theorem can also be readily transformed to a counterpart for DO composition. Only some familiarity with differential privacy is assumed for this talk.

The work on DO composition is based on collaborations with Shir Maimon, Elaine Shi, Mengshi Zhao, Mingxun Zhou.

Practical Asynchronous High-threshold Distributed Key Generation and Distributed Polynomial Sampling

Sourav Das

May 18, 2023

Abstract: Distributed Key Generation (DKG) is a technique to bootstrap threshold cryptosystems without a trusted party. DKG is an essential building block to many decentralized protocols such as randomness beacons, threshold signatures, Byzantine consensus, and multiparty computation. While significant progress has been made recently, existing asynchronous DKG constructions are inefficient when the reconstruction threshold is larger than one-third of the total nodes.

In this paper, we present a simple and concretely efficient asynchronous DKG (ADKG) protocol among n=3t+1 nodes that can tolerate up to t malicious nodes and support any reconstruction threshold l>=t. Our protocol has an expected O(kn^3) communication cost, where k is the security parameter, and only assumes the hardness of the Discrete Logarithm. The core ingredient of our ADKG protocol is an asynchronous protocol to secret share a random polynomial of degree l>=t, which has other applications, such as asynchronous proactive secret sharing and asynchronous multiparty computation. We implement our high-threshold ADKG protocol and evaluate it using a network of up to 128 geographically distributed nodes. Our evaluation shows that our high-threshold ADKG protocol reduces the running time by 90% and bandwidth usage by 80% over the state-of-the-art.

This talk is a combination of two papers: Asymptotically Free Broadcast in Constant Expected Time via Packed VSS (https://eprint.iacr.org/2022/1266), and Detect, Pack and Batch: Perfectly-Secure MPC with Linear Communication and Constant Expected Time (https://eprint.iacr.org/2023/557). Both are with Ittai Abraham, Shravani Patil and Arpita Patra.

Deniable Authentication is a highly desirable property for secure messaging protocols: it allows a sender Alice to authentically transmit messages to a designated receiver Bob in such a way that only Bob gets convinced that Alice indeed sent these messages. In particular, it guarantees that even if Bob tries to convince a (non-designated) party Judy that Alice sent some message, and even if Bob gives Judy his own secret key, Judy will not be convinced: as far as Judy knows, Bob could be making it all up!

In this work we study Deniable Authentication in a new setting, where Judy can additionally obtain Alice's secret key. Informally, we want that knowledge of Alice's secret key does not help Judy in learning whether Alice sent any messages, even if Bob does not have Alice's secret key and even if Bob cooperates with Judy by giving her his own secret key.
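The classic symmetric-key intuition behind deniability can be illustrated with a shared-key MAC (a toy sketch only, not the protocols or the stronger setting studied in this work): because Bob holds the same key as Alice, he could have produced any authenticated transcript himself, so showing it to Judy proves nothing about Alice.

```python
import hmac, hashlib, os

# Toy illustration of the classic deniability argument: if Alice and Bob
# authenticate with a *shared* symmetric key, any tag Alice can produce,
# Bob can produce too. A judge who is later handed the key therefore
# learns nothing about who actually authored a message.
shared_key = os.urandom(32)

def tag(key: bytes, message: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

msg = b"meet at noon"
alice_tag = tag(shared_key, msg)    # what Alice would send
bob_forgery = tag(shared_key, msg)  # what Bob could fabricate himself

# The two tags are byte-identical, so the transcript is simulatable by Bob:
assert hmac.compare_digest(alice_tag, bob_forgery)
```

The setting of this work is strictly harder: deniability must survive even when the judge additionally obtains Alice's secret key, which the naive sketch above does not address.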

Multiple protocols implementing exciting blockchain-based cryptographic functionalities (e.g., time-lock encryption, one-time programs, and fair multi-party computation) assume the existence of a cryptographic primitive called extractable witness encryption. Unfortunately, there are no known efficient constructions (or even constructions based on any well-studied assumptions) of extractable witness encryption. In this work, we propose a protocol that uses a blockchain to provide a functionality that is effectively the same as extractable witness encryption. Hence, by making small adjustments to existing blockchains, we can easily implement applications that rely on extractable witness encryption. This includes both new applications, and those that previously existed only as theoretical designs. As a key building block, our protocol uses a new and highly efficient batched dynamic proactive secret sharing (DPSS) scheme which may be of independent interest.

Another Round of Breaking and Making Quantum Money: How to Not Build It from Lattices, and More

Jiahui Liu

Public verification of quantum money has been one of the central objects in quantum cryptography ever since Wiesner's pioneering idea of using quantum mechanics to construct banknotes against counterfeiting. So far, we do not know any publicly-verifiable quantum money scheme that is provably secure from standard assumptions.

In this talk, we provide both negative and positive results for publicly verifiable quantum money.

- In the first part, we give a general theorem showing that a certain natural class of quantum money schemes from lattices cannot be secure. We use this theorem to break the recent quantum money scheme of Khesin, Lu, and Shor.

- In the second part, we propose a framework for building quantum money and quantum lightning, which we call invariant money, abstracting some of the ideas behind the quantum money from knots of Farhi et al. (ITCS ’12). In addition to formalizing this framework, we provide concrete hard computational problems, loosely inspired by classical knowledge-of-exponent assumptions, whose hardness would imply the security of quantum lightning, a strengthening of quantum money where not even the bank can duplicate banknotes.

- Finally, we discuss potential instantiations of our framework, including an oracle construction using cryptographic group actions and instantiations from rerandomizable functional encryption, isogenies over elliptic curves, and knots.

Joint work with Hart Montgomery and Mark Zhandry.

A Lower Bound on the Length of Signatures based on Group Actions and Generic Isogenies

Jiaxin Guan

We give the first black box lower bound for signature protocols that can be described as group actions, which include many based on isogenies. We show that, for a large class of signature schemes making black box use of a (potentially non-abelian) group action, the signature length must be Ω(λ^2/log λ). Our class of signatures generalizes all known signatures that derive security exclusively from the group action, and our lower bound matches the state of the art, showing that the signature length cannot be improved without deviating from the group action framework.

Increasing deployment of advanced zero-knowledge proof systems, especially zkSNARKs, has raised critical questions about their security against real-world attacks. Two classes of attacks of concern in practice are adaptive soundness attacks, where an attacker can prove false statements by choosing its public input after generating a proof, and malleability attacks, where an attacker can use a valid proof to create another valid proof it could not have created itself. Prior work has shown that simulation-extractability (SIM-EXT), a strong notion of security for proof systems, rules out these attacks.

In this work, we prove that two transparent, discrete-log-based zkSNARKs, Spartan and Bulletproofs, are simulation-extractable (SIM-EXT) in the random oracle model if the discrete logarithm assumption holds in the underlying group. Since these assumptions are required to prove standard security properties for Spartan and Bulletproofs, our results show that SIM-EXT is, surprisingly, “for free” with these schemes. Our result is the first SIM-EXT proof for Spartan and encompasses both linear- and sublinear-verifier variants. Our result for Bulletproofs encompasses both the aggregate range proof and arithmetic circuit variants, and is the first to not rely on the algebraic group model (AGM), resolving an open question posed by Ganesh et al. (EUROCRYPT’22). As part of our analysis, we develop a generalization of the tree-builder extraction theorem of Attema et al. (TCC’22), which may be of independent interest.

Based on joint work with Paul Grubbs.

We construct a sublinear-time single-server pre-processing Private Information Retrieval (PIR) scheme with optimal client storage and server computation (up to poly-logarithmic factors), relying only on the assumption that One Way Functions (OWF) exist. Our scheme achieves amortized $\tilde{O}(\sqrt{n})$ online server computation and client computation and $O(\sqrt{n})$ online communication per query, and requires $\tilde{O}_{\lambda}(n)$ client storage. Unlike prior single-server PIR schemes that rely on heavy cryptographic machinery such as Homomorphic Encryption, our scheme only utilizes lightweight cryptography such as PRFs, which are easily instantiated in practice. To our knowledge, this is the first practical implementation of a single-server sublinear-time PIR scheme. Compared to existing linear-time single-server solutions, our scheme is faster by 10–300× and is comparable to the fastest two-server schemes. In particular, for a 100GB database of 1.6 billion entries, our experiments show that our scheme has less than 40ms online computation time on a single core.

Federated learning (FL) is an increasingly popular approach for machine learning (ML) when the training dataset is highly distributed. Clients perform local training on their datasets and the updates are then aggregated into the global model. Existing protocols for aggregation are either inefficient or don’t consider the case of malicious actors in the system. This is a major barrier to making FL an ideal solution for privacy-sensitive ML applications.

In this talk, I will present ELSA, a secure aggregation protocol for FL that breaks this barrier: it is efficient and addresses the existence of malicious actors (clients + servers) at the core of its design. Similar to the prior works Prio and Prio+, ELSA provides a novel secure aggregation protocol built out of distributed trust across two servers; it keeps individual client updates private as long as one server is honest, defends against malicious clients, and is efficient end-to-end. Compared to prior works, the distinguishing theme in ELSA is that instead of the servers generating cryptographic correlations interactively, the clients act as untrusted dealers of these correlations without compromising the protocol’s security. This leads to a much faster protocol while also achieving stronger security at that efficiency compared to prior work. We introduce new techniques that retain privacy even when a server is malicious, at a small added cost of 7-25% in runtime and a negligible increase in communication over the case of a semi-honest server.

ELSA improves end-to-end runtime over prior work with similar security guarantees by big margins - single-aggregator RoFL by up to 305x (for the models we consider), and distributed-trust Prio by up to 8x (with up to 16x faster server-side protocol). Additionally, ELSA can be run in a bandwidth-saver mode for clients who are geographically bandwidth-constrained - an important property that is missing from prior works.

Based on joint work with Conghao Shen, Sameer Wagh, and Raluca Ada Popa.

Obfuscation of Pseudo-Deterministic Quantum Circuits

James Bartusek

No recording

We will show how to obfuscate pseudo-deterministic quantum circuits in the classical oracle model, assuming the quantum hardness of learning with errors. Instantiating the classical oracle with a candidate post-quantum indistinguishability obfuscator, we obtain the first candidate construction of indistinguishability obfuscation for a class of circuits that is powerful enough to implement Shor's algorithm.

Along the way, we construct a publicly-verifiable classical verification of quantum computation scheme for quantum “partitioning” circuits, which can be used to verify the evaluation procedure of Mahadev's quantum fully-homomorphic encryption scheme. We achieve this by constructing a type of publicly-decodable “Pauli functional commitment” scheme, which must satisfy a notion of binding against committers that have access to both of the receiver's standard and Hadamard basis decoding functionalities.

Based on joint work with Fuyuki Kitagawa, Ryo Nishimaki, and Takashi Yamakawa.

Succinct Vector, Polynomial, and Functional Commitments from Lattices

David Wu

No recording

In a functional commitment scheme, a user can commit to an input x and later on, open it to an arbitrary function evaluation f(x). We require that both the commitment and the opening be short. Important special cases of functional commitments include vector commitments and polynomial commitments. In this talk, I will introduce a new lattice-based framework for constructing functional commitments that supports functions computable by arbitrary (bounded-depth) Boolean circuits. Our constructions rely on a new falsifiable "basis-augmented SIS" assumption that we introduce, which can be viewed as a new "q-type" variant of the standard SIS assumption.

Joint work with Hoeteck Wee

Cryptography Meets Game Theory: Decentralized Mechanism Design

Ke Wu

No recording

Recent works of Roughgarden (EC’21) and Chung and Shi (SODA’23) initiate the study of a new decentralized mechanism design problem called transaction fee mechanism design (TFM). Unlike the classical mechanism design literature, in the decentralized environment, even the auctioneer (i.e., the miner) can be a strategic player, and it can even collude with a subset of the users facilitated by binding side contracts. Chung and Shi showed two main impossibility results that rule out the existence of a dream TFM.

In addition to today’s plain model, which does not employ cryptography, we introduce a new MPC-assisted model in which the TFM is implemented by a joint multi-party computation (MPC) protocol among the miners. While we show that cryptography allows us to overcome some impossibility results pertaining to the plain model, leading to non-trivial mechanisms with useful guarantees that are otherwise impossible in the plain model, it is not a panacea: a zero miner-revenue limitation remains. To overcome this impossibility, we introduce a mildly stronger reasonable-world assumption, under which we can design mechanisms that achieve optimal miner revenue. We also systematically explore the mathematical landscape of transaction fee mechanism design under the new MPC-assisted model and demonstrate how reasonable-world assumptions can alter the feasibility and infeasibility landscape.

Based on joint work with Elaine Shi and Hao Chung https://eprint.iacr.org/2022/1294 and https://eprint.iacr.org/2023/283.

Machine learning models are not private, and they often leak details of their training data. Differentially private (DP) machine learning allows us to train models on private data while limiting data leakage. DP formalizes this data leakage through a cryptographic game, where an adversary must predict if a model was trained on a dataset D, or a dataset D' that differs in just one example. If observing the training algorithm does not meaningfully increase the adversary's odds of successfully guessing which dataset the model was trained on, then the algorithm is said to be differentially private.

In this talk we show how by actually instantiating the hypothetical security game, it is possible to *lower*-bound the privacy leakage. Then, by constructing increasingly powerful attacks, we can give tighter and tighter "auditing" of a particular algorithm. We design an improved auditing scheme that yields tight privacy estimates for natural datasets by training only two models. These types of auditing mechanisms also allow us to find bugs in proposed DP training algorithms.
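As a rough illustration of the auditing idea (a minimal sketch with a Laplace mechanism and a threshold adversary, not the two-model scheme from the talk), one can instantiate the DP game empirically and read a lower bound on epsilon off the adversary's observed success rates:

```python
import math, random

# Minimal sketch of auditing a DP mechanism: the mechanism is a
# Laplace-noised sum, D and D' are neighboring datasets, and a threshold
# adversary guesses which dataset produced each output. Its empirical
# advantage yields a *lower* bound on the true privacy parameter epsilon.
random.seed(0)

def laplace(scale: float) -> float:
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def mechanism(dataset, epsilon):          # sensitivity-1 noisy sum
    return sum(dataset) + laplace(1.0 / epsilon)

D, D_prime = [1] * 10, [1] * 9 + [0]      # differ in exactly one example
eps, trials, threshold = 1.0, 20000, 9.5  # adversary guesses "D" iff output > threshold

tp = sum(mechanism(D, eps) > threshold for _ in range(trials)) / trials
fp = sum(mechanism(D_prime, eps) > threshold for _ in range(trials)) / trials

# Any (eps, 0)-DP mechanism must satisfy tp <= e^eps * fp, so the observed
# rates certify an empirical lower bound on epsilon:
empirical_eps = math.log(tp / fp)
print(f"empirical epsilon lower bound: {empirical_eps:.2f} (true eps = {eps})")
```

Stronger attacks push `tp` up and `fp` down, tightening the bound toward the true epsilon; a bound exceeding the claimed epsilon would expose a bug in the algorithm or its analysis.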

Traditionally one assumes a regulator (or potential future consumers) would punish a platform for deviating from their promised specification. However, this is not always possible, even with publicly available data. For example, auctions are the main building block of how we transact on eBay or how Google sells advertising; however, it is impossible to know whether or not a self-interested auctioneer is also bidding in their auction with a fake identity. In this light, game theory provides a new perspective on designing secure systems because in many real-world applications adversaries are not intentionally malicious, but rather rational and economically driven. First, I will overview the challenges of designing Internet auctions when auctioneers are not trusted and show a cryptographic auction that overcomes known impossibility results from economic theory. Beyond auctions, distributed systems like blockchains aim to enable more transparent applications. In the second part, I will overview my contributions toward the design of incentive-compatible sustainable blockchains and how they allow us to design transparent algorithms.

Efficient Zero-Knowledge Proof: Theory and Practice

Jiaheng Zhang

No recording

In this talk, we discuss a cryptographic tool named zero-knowledge proof from both theory and application perspectives. In theory, we present Libra, the first zero-knowledge protocol with optimal prover time, fast verifier time, and succinct proof size. Libra also has excellent concrete efficiency in practice. In application, we present the first solution for building trustless and permissionless cross-chain bridges in blockchains using zero-knowledge proof. In addition, we discuss how to apply zero-knowledge to machine learning and make the protocol practical to guarantee the integrity of machine learning models by the example of the decision tree model. These applied ZKP protocols have rigorous security guarantees along with practical efficiency.

Indistinguishability Obfuscation via Mathematical Proofs of Equivalence

Zhengzhong Jin

No recording

Over the last decade, indistinguishability obfuscation (iO) has emerged as a seemingly omnipotent primitive in cryptography. Moreover, recent breakthrough work has demonstrated that iO can be realized from well-founded assumptions. A thorn to all this remarkable progress is a limitation of all known constructions of general-purpose iO: the security reduction incurs a loss that is exponential in the input length of the function. This ``input-length barrier'' to iO stems from the non-falsifiability of the iO definition and is discussed in folklore as being possibly inherent. It has many negative consequences; notably, constructing iO for programs with inputs of unbounded length remains elusive due to this barrier.

We present a new framework aimed towards overcoming the input-length barrier. Our approach relies on short mathematical proofs of functional equivalence of circuits (and Turing machines) to avoid the brute-force ``input-by-input'' check employed in prior works.

- We show how to obfuscate circuits that have efficient proofs of equivalence in Propositional Logic with a security loss independent of input length.

- Next, we show how to obfuscate Turing machines with unbounded length inputs, whose functional equivalence can be proven in Cook's Theory PV.

- Finally, we demonstrate applications of our results to succinct non-interactive arguments and witness encryption, and provide guidance on using our techniques for building new applications.

To realize our approach, we depart from prior work and develop a new gate-by-gate obfuscation template that preserves the topology of the input circuit.

Based on the joint work with Abhishek Jain.

In this talk, I introduce TreePIR, a two-server PIR protocol with sublinear amortized server time and polylogarithmic bandwidth whose security can be based on just the DDH assumption. TreePIR can be partitioned into two phases, both sublinear: the first phase is remarkably simple and only requires pseudorandom generators; the second phase is a single-server PIR protocol on only $N^{1/2}$ indices, for which we can use the protocol by Döttling et al. (CRYPTO 2019) based on DDH or, for practical purposes, the most concretely efficient single-server PIR protocol. Not only does TreePIR achieve better asymptotics than previous approaches while resting on weaker cryptographic assumptions, but it also outperforms existing two-server PIR protocols in practice. The crux of our protocol is a new cryptographic primitive that we call weak privately puncturable pseudorandom functions, which we believe can have further applications.
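For intuition, the classic information-theoretic two-server XOR trick below shows why two non-colluding servers make PIR easy when communication may be linear; TreePIR can loosely be viewed as compressing the random subsets in this sketch into short puncturable-PRF keys. This is an illustrative sketch, not TreePIR itself.

```python
import secrets

# Toy two-server PIR: the client sends each server a subset of indices;
# the two subsets differ only in the desired index i, so each subset on
# its own is uniformly random and reveals nothing about i.
N = 16
db = [secrets.randbits(8) for _ in range(N)]
i = 11  # the index the client wants, hidden from both servers

S1 = {j for j in range(N) if secrets.randbits(1)}  # uniformly random subset
S2 = S1 ^ {i}                                      # same subset with i's membership flipped

def server_answer(subset):
    acc = 0
    for j in subset:   # each server XORs its requested entries
        acc ^= db[j]
    return acc

# XORing the two answers cancels every index except i:
assert server_answer(S1) ^ server_answer(S2) == db[i]
```

The cost here is that each query ships an N-bit subset description; replacing the subsets with compressed pseudorandom keys is exactly where primitives like (privately puncturable) PRFs enter the picture.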

Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (J. ACM '96), is a primitive that allows a client to perform RAM computations on a server without revealing any information about the underlying data, even via the access pattern. For a database of size $N$, well-known lower bounds show that a multiplicative overhead of $\Omega(\log N)$ in the number of RAM operations is necessary assuming $O(1)$ client storage. A long sequence of works culminated in the asymptotically optimal construction of Asharov, Komargodski, Lin, and Shi (CRYPTO 2021) with $O(\log N)$ worst-case overhead and $O(1)$ client storage. However, this optimal ORAM construction is only known to be secure in the \emph{semi-honest} setting, where an adversary is allowed to observe the access patterns but not modify the contents of the database. In the \emph{malicious} setting, if an adversary is allowed to tamper with the database, this construction and many others in fact become insecure. In this work, we construct an ORAM protocol with worst-case $O(\log N)$ overhead and $O(1)$ client storage that protects against tampering adversaries. By the $\Omega(\log N)$ ORAM lower bound, our construction is asymptotically optimal. We can also interpret our construction as an online memory checker that matches the bandwidth of the best known online memory checkers while additionally hiding the access pattern. To achieve this, we intricately interleave the ORAM construction of Asharov et al. with online and offline memory checking techniques.

Federated Learning with Formal User-Level Differential Privacy Guarantees

Abhradeep Guha Thakurta

In this talk I will discuss the algorithmic research that led to the deployment of the first production ML model using federated learning with a rigorous differential privacy guarantee. Along the way, I will highlight the systemic challenges that drove the algorithmic design.

The talk will primarily focus on the DP-FTRL algorithm (a differentially private variant of follow-the-regularized-leader), that was developed during this research effort. I will provide both theoretical and empirical insights into the efficacy of DP-FTRL. In particular, I will show that DP-FTRL compares favorably to DP-SGD (differentially private stochastic gradient descent), but does not rely on privacy amplification by sampling (a crucial component in providing strong privacy/utility trade-offs while operating with minibatch gradients). In comparison to DP-SGD, this allows DP-FTRL to be amenable to more flexible data access patterns, which is crucial in our federated learning deployment.

Bio sketch: Abhradeep Guha Thakurta is a staff research scientist at Google Research - Brain Team. His primary research interest is in the intersection of data privacy and machine learning. He focuses on demonstrating, both theoretically and in practice, that it is possible to design differentially private learning algorithms that can scale to industrial workloads. Prior to Google, Abhradeep was a faculty member at UC Santa Cruz, and before that he worked at Apple and Yahoo Labs as a research scientist. He received his Ph.D. from The Pennsylvania State University in 2013 and held a postdoctoral appointment with Stanford University and Microsoft Research Silicon Valley.

Can we design a private variant of "Google search" that would enable users to search the Internet privately without revealing their queries to Google? Fully homomorphic encryption gives a solution to this problem, where users would send their search queries to Google encrypted in a way that allows Google to compute the encrypted search results, without learning anything about the query itself. However, this solution would require Google to run a huge computation that touches the entire content of the Internet to answer each encrypted search query, making it realistically infeasible! Is there a way for Google to preprocess the Internet content into a special data structure that would allow it to answer any encrypted query efficiently by only accessing some small number of locations in the data structure? We give a solution to this problem as a special case of our work.

Concretely, we construct the first schemes for "doubly efficient private information retrieval" and "fully homomorphic encryption in the RAM model" under standard cryptographic hardness assumptions (Ring Learning with Errors). Previously, we only had heuristic candidate constructions satisfying relaxed variants of these primitives. Our new solutions combine tools from cryptography with data structures for fast polynomial evaluation (Kedlaya and Umans '08).

This talk consists of an introductory and a technical part. In the introduction, we will present our results at a high level, highlighting the new feasibility results and our key insights. In the technical part, we will summarize our solutions and elaborate on details such as our algorithms and a comparison to previous results.

Joint work with Daniel Wichs.

Private-information-retrieval protocols allow a client to query a database server without revealing its query to the server. With private information retrieval, a client can, for example, download an article from Wikipedia without revealing which article it fetched. Existing schemes for private information retrieval require large amounts of computation -- both asymptotically and concretely. Computational cost is the major barrier to the use of private information retrieval in practice.

This talk will present SimplePIR, the fastest single-server private-information-retrieval scheme known to date. SimplePIR improves the performance of state-of-the-art schemes by over 30x and can answer private queries to a 1 GB database in under 100ms using a single CPU core. The scheme is simple to describe and easy to implement, which gives us hope that it will see adoption in industry. After describing the new scheme, I will discuss how we might apply it to strengthen privacy protections in the Chrome web browser. The talk will conclude with a discussion of the scheme's limitations, along with a few open problems in the area.
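The shape of an LWE-based linear PIR of this kind can be sketched in a few lines (toy parameters and pure-Python matrices; SimplePIR's actual parameter choices, packing, and hint handling differ):

```python
import random

# Toy sketch of a secret-key-Regev-style linear PIR: the server holds the
# database as a square matrix, ships an offline "hint" once, and answers
# each online query with a single database-sized matrix-vector product.
random.seed(1)
q, p, n, m = 2**32, 2, 64, 8   # ciphertext modulus, plaintext modulus, LWE dim, db side
Delta = q // p

# Database: an m x m matrix of bits; the client wants column `col`.
db = [[random.randrange(p) for _ in range(m)] for _ in range(m)]
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]

# Offline hint: hint = db * A (an m x n matrix), sent to the client once.
hint = [[sum(db[r][k] * A[k][j] for k in range(m)) % q for j in range(n)]
        for r in range(m)]

# Online query: qu = A*s + e + Delta * unit(col); pseudorandom under LWE.
col = 5
s = [random.randrange(q) for _ in range(n)]
qu = [(sum(A[r][j] * s[j] for j in range(n))
       + random.choice([-1, 0, 1])              # small noise
       + (Delta if r == col else 0)) % q
      for r in range(m)]

# Server answer: one matrix-vector product over the whole database.
ans = [sum(db[r][k] * qu[k] for k in range(m)) % q for r in range(m)]

# Client decodes: subtract hint*s, then round away the LWE noise.
rec = [(((ans[r] - sum(hint[r][j] * s[j] for j in range(n))) % q
         + Delta // 2) // Delta) % p
       for r in range(m)]
assert rec == [db[r][col] for r in range(m)]   # the requested column
```

The single modular matrix-vector product on the server is what makes schemes of this shape so fast in practice: the online work is essentially one pass over the database with word-sized arithmetic.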

This talk is based on joint work with Alexandra Henzinger (MIT), Matthew Hong (MIT), Sarah Meiklejohn (Google), and Vinod Vaikuntanathan (MIT).

Consensus protocols allow a set of devices connected via a network to agree on a common output, even if a fraction of those devices are faulty. Protocols are typically designed with a particular network model in mind; for example, in the synchronous network model, messages between devices are assumed to arrive within some fixed, known delay. We know from various existing lower bounds that our choice of network model (along with some other factors, such as cryptographic assumptions) determines bounds on security and performance, with stronger assumptions usually allowing us to tolerate more faults or terminate after fewer rounds. However, protocols that rely on stronger assumptions may be entirely insecure if those assumptions fail in practice. In this talk, I will discuss recent work on protocols that achieve better security or performance in good case scenarios (under strong assumptions) while still guaranteeing some level of security and performance in worst case scenarios (under weaker assumptions).

Time-Space tradeoffs for short collisions in Sponge and Merkle-Damgard hashing

Ashrujit Ghoshal

We study the power of preprocessing adversaries in finding bounded-length collisions in the widely used Merkle-Damgård (MD) and Sponge hashing in the random oracle and random permutation models respectively. Specifically, we consider adversaries with arbitrary S-bit advice about the random oracle/permutation and can make at most T queries to it. Our goal is to characterize the advantage of such adversaries in finding a B-block collision in MD/Sponge hash functions.

For MD, the answer to this question is completely understood for very large values of B and for B=1,2. For B=\Omega(T), Coretti et al. (EUROCRYPT ’18) gave matching upper and lower bounds of \Theta(ST^2/N) where the range of the random oracle is of size N. Akshima et al. (CRYPTO ’20) observed that the attack of Coretti et al. could be adapted to work for any value of B>1, giving an attack with advantage \Omega(STB/N+T^2/N). Unfortunately, they could only prove that this attack is optimal for B=2. They formulated the STB conjecture, stating that the best-possible advantage of any attack for finding B block collisions is O(STB/N+T^2/N) for any B>1. In this work, we confirm the STB conjecture in many new parameter settings.

The analogous question for short collisions in the Sponge construction had not been studied at all. We initiate this study by giving two new attacks: one that is the analogue of the attack by Akshima et al. for B-block collisions in MD, and another for B=1 that crucially relies on the fact that an adversary can make inverse queries to the underlying permutation in the sponge construction. An analogue of the latter attack does not provably exist for MD, and to the best of our knowledge, this is the first natural application for which sponge hashing is provably less secure than the corresponding instance of Merkle-Damgård hashing. We complement the above attacks with bounds on the best possible attacks. Specifically, we prove that there is a qualitative jump in the advantage of best possible attacks for finding unbounded-length collisions and those for finding very short collisions.
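To ground the notation, here is a minimal sketch (ours, not from the paper) of the Merkle-Damgård iteration over a compression function h; a B-block collision is a pair of distinct messages of at most B blocks each that reach the same final state. SHA-256 stands in for the random compression function:

```python
import hashlib

IV = b"\x00" * 32  # fixed initialization vector

def compression(state: bytes, block: bytes) -> bytes:
    # Toy compression function h; the paper's bounds model h as a random oracle.
    return hashlib.sha256(state + block).digest()

def merkle_damgard(blocks: list) -> bytes:
    # Iterate h over the message blocks, chaining the state.
    state = IV
    for block in blocks:
        state = compression(state, block)
    return state
```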

In an era of unprecedented access to personal or sensitive data, the following questions are becoming increasingly relevant.

Can we request that encrypted data stored on the cloud be provably deleted?

Can we outsource encrypted computation or model training to a server, and after computations, request the server to erase this data?

Can multiple parties compute a joint function on their private data, and later verify that their secret inputs were deleted?

On classical devices, data is stored and exchanged as a string of bits. There is no way to prevent an untrusted device with access to such a string from making and storing arbitrarily many copies of it. Thus, it seems hopeless to try to force an untrusted device to delete classical data. However, the uncertainty principle of quantum mechanics provides hope that these “certified deletion” tasks are physically realizable if information is encoded in quantum states.

In this talk, we will demonstrate a novel proof technique that gives rise to simple and natural quantum-cryptographic protocols for achieving each of the above tasks. We will guarantee that once an adversary generates a certificate of deletion, any information previously held by the adversary has been statistically removed from their view and cannot be recovered even given any secret keys and/or unbounded computational resources.

The beautiful work of Applebaum, Ishai, and Kushilevitz (AIK) [FOCS’11] initiated the study of arithmetic variants of Yao’s garbled circuits. An arithmetic garbling scheme is an efficient transformation that converts an arithmetic circuit C: R^n --> R^m over a ring R into a garbled circuit \hat C and n affine functions Li for i ∈ [n], such that \hat C and Li(xi) reveal only the output C(x) and no other information about x. AIK presented the first arithmetic garbling scheme supporting computation over integers from a bounded (possibly exponentially large) range, based on Learning With Errors (LWE). In contrast, converting C into a Boolean circuit and applying Yao’s garbled circuit treats the inputs as bit strings rather than ring elements, and hence is not “arithmetic”.

In this work, we present new ways to garble arithmetic circuits, which improve the state-of-the-art on efficiency, modularity, and functionality. To measure efficiency, we define the rate of a garbling scheme as the maximal ratio between the bit-length of the garbled circuit |\hat C| and that of the computation tableau |C|k in the clear, where k is the bit length of wire values (e.g., Yao’s garbled circuit has rate O(λ)).

– We present the first constant-rate arithmetic garbled circuit for computation over large integers based on the Decisional Composite Residuosity (DCR) assumption, significantly improving the efficiency of the schemes of Applebaum, Ishai, and Kushilevitz.

– We construct an arithmetic garbling scheme for modular computation over R = Zp for any integer modulus p, based on either DCR or LWE. The DCR-based instantiation achieves rate O(λ) for large p. Furthermore, our construction is modular and makes black-box use of the underlying ring and a simple key extension gadget.

– We describe a variant of the first scheme supporting arithmetic circuits over bounded integers that are augmented with Boolean computation (e.g., truncation of an integer value, and comparison between two values), while keeping the constant rate when garbling the arithmetic part.

To the best of our knowledge, constant-rate (Boolean or arithmetic) garbling was only achieved before using the powerful primitive of indistinguishability obfuscation, or for restricted circuits with small depth.

Based on joint work with Marshall Ball (NYU), Hanjun Li (UW), Tianren Liu (Peking University)

Distributed (Correlation) Samplers: How to Remove a Trusted Dealer in One Round

Damiano Abram

Structured random strings (SRSs) and correlated randomness are important for many cryptographic protocols. In settings where interaction is expensive, it is desirable to obtain such randomness in as few rounds of communication as possible; ideally, simply by exchanging one reusable round of messages which can be considered public keys. In this seminar, we describe how to generate any SRS or correlated randomness in such a single round of communication, using, among other things, indistinguishability obfuscation. We introduce what we call a distributed sampler, which enables n parties to sample a single public value (SRS) from any distribution. We construct a semi-honest distributed sampler in the plain model and we upgrade it to active security by relying on a random oracle. We use these constructions to build public-key pseudorandom correlation functions (PCFs). A public-key PCF can be thought of as a distributed correlation sampler; instead of producing a public SRS, it gives each party a large amount of private random values satisfying some correlation. In our constructions, the size of the public keys is logarithmic in the amount of correlated material we produce.

Improving the running time of the prover is a central goal in the area of succinct arguments. In this talk we will trace through a line of works [BCGGHJ17, BCG20, BCL20] that successfully construct succinct arguments that have linear-time provers and are also zero-knowledge. The result is a direct consequence of a new interactive oracle proof (IOP) that achieves linear-time proving, polylogarithmic verification, and zero knowledge. We will focus on the construction of this IOP in this talk. This is based on joint work with Alessandro Chiesa and Jonathan Bootle.

We consider the market microstructure of Constant Function Market Makers (CFMMs) from the economic perspective of passive liquidity providers (LPs). In a frictionless, continuous-time setting and in the absence of fees, we decompose the return of an LP into an instantaneous market risk component and a non-negative, non-decreasing, and locally predictable component which we call "loss-versus-rebalancing" (LVR, pronounced "lever"). Even though market risk can be fully hedged, LVR remains as a running cost to LPs that must be offset by fee income in order for liquidity provision to be profitable. We show how LVR can be interpreted in many ways: as the cost of pre-commitment, as the time value for giving up future optionality, as the compensator in a Doob-Meyer decomposition, as an adverse selection cost in the form of the profits of arbitrageurs trading against the pool, and as an information cost because the pool does not have access to accurate market prices. LVR is distinct from the more commonly known metric of "impermanent loss" or "divergence loss"; this latter metric is more fundamentally described as "loss-versus-holding" and is not a true running cost. We express LVR simply and in closed form: instantaneously, it is the scaled product of the variance of prices and the marginal liquidity available in the pool. As such, LVR is easily calibrated to market data and specific CFMM structure. LVR provides tradeable insight in both the ex ante and ex post assessment of CFMM LP investment decisions, and can inform the design of CFMM protocols.
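As an illustration of the closed form, the following sketch (ours, not the paper's code; function names are our own) specializes the instantaneous LVR formula, (σ²p²/2)·|x'(p)|, to a constant-product pool with reserves x·y = L², where it reduces to σ²·V(p)/8 for pool value V(p) = 2L√p:

```python
import math

def lvr_rate(sigma: float, p: float, L: float) -> float:
    # Instantaneous LVR = (sigma^2 * p^2 / 2) * |x'(p)|, where
    # x(p) = L / sqrt(p) is the risky-asset reserve of a constant-product
    # pool (x * y = L^2) after arbitrageurs align it with external price p.
    x_prime = L / (2 * p ** 1.5)  # |dx/dp|, the marginal liquidity
    return 0.5 * sigma ** 2 * p ** 2 * x_prime

def pool_value(p: float, L: float) -> float:
    # V(p) = p * x(p) + y(p) = 2 * L * sqrt(p) for a constant-product pool.
    return 2 * L * math.sqrt(p)
```

For any sigma, p, L the first function equals sigma² · V(p) / 8: LVR accrues at a rate proportional to price variance times pool value, as the abstract describes.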

This talk is based on joint work with Ciamac C. Moallemi (Columbia Graduate School of Business), Tim Roughgarden (Columbia CS/a16z Crypto), and Anthony Lee Zhang (Chicago Booth).

Full paper: https://arxiv.org/pdf/2208.06046.pdf

A zero-knowledge proof is a powerful cryptographic tool for establishing trust without revealing any sensitive information. It allows one party to convince others that a claim about the properties of some secret data is true, while the data itself remains confidential. Zero-knowledge proofs have been widely used in blockchains and cryptocurrencies to achieve privacy and improve scalability. They can also be applied to prove the correctness of computations in cloud computing and machine learning.

In this talk, I will present my research in this area to bring zero-knowledge proof from theory to practice. I will talk about a new framework to build general-purpose zero-knowledge proofs for any computation. In this framework, we were able to build the first zero-knowledge proof scheme with the optimal running time. The scheme is based on the techniques of interactive proofs and error-correcting codes. In the second part, I will talk about our recent works on new applications of zero-knowledge proofs in machine learning integrity and fairness. The scalability and efficiency of the schemes can be further improved for these applications.

Characterizing the optimal round and communication complexity of secure computation is essential to minimize security overhead when computing over distributed data. The seminal results of Chor-Kushilevitz-Beaver (STOC 1989, FOCS 1989, DIMACS 1989) characterize all two-party computations with deterministic output that admit secure protocols. However, the precise round and communication complexity have remained unexplored for secure protocols of computations with randomized output. The space of all candidate private-coin secure protocols has an intricate structure that confounds intuition and has been challenging to reason about.

Our work resolves this problem for two-party secure function evaluation functionalities with randomized output. We introduce an innovative geometric encoding of all candidate secure protocols for a given computation as points in a high-dimensional space. Next, we analyze the properties of these geometric sets of points using a real algebraic geometry toolkit and demonstrate their tameness. Consequently, the following decidability, search, and optimization results follow.

1. Determining whether a given computation admits a secure protocol within round or communication constraints is decidable.

2. If there is such a protocol, we can construct one such protocol.

3. Otherwise, we present a geometric obstruction to achieving security.

Tight new information complexity bounds for secure computation follow as corollaries of our technical contributions. We demonstrate the expressive power of our results by unifying the current state-of-the-art. The geometric sets we study are new and natural generalizations of the convex hull of points, motivating new foundational research in real algebraic geometry and topology.

In the quantum random oracle model (QROM) introduced by Boneh et al. (Asiacrypt 2011), a hash function is modeled as a uniformly random oracle, and a quantum algorithm can only interact with the hash function in a black-box manner. The QROM methodology captures all generic algorithms; however, it fails to describe non-uniform quantum algorithms with preprocessing power, which receive a piece of bounded classical or quantum advice.

In the classical setting, non-uniform security has been widely studied and understood. Starting from the work by Nayebi, Aaronson, Belovs, and Trevisan (QIC 2015), a line of works investigates non-uniform security in the QROM. In this talk, we will go over recent developments and show that even quantum advice is almost as good/bad as classical advice for many natural security games in the QROM, improving the bounds by Chung et al. (FOCS 2020). For example,

(*) Even with S qubits of quantum advice, a T-query quantum algorithm has advantage at most O((ST+T^2)/N) in inverting a random function with domain and range of size N. This is tight in light of the barrier by Corrigan-Gibbs and Kogan (TCC 2019).

(*) The commonly used mechanism in cryptography, salting, “defeats” preprocessing, even with quantum advice.

Finally, we show that for some contrived games in the QROM, quantum advice can be exponentially better than classical advice for some parameter regimes. To the best of our knowledge, this provides the first evidence of a general separation between quantum and classical advice relative to an unstructured oracle.

Towards a Theory of Maximal Extractable Value: Constant Function Market Makers

Kshitij Kulkarni

Maximal Extractable Value (MEV) represents excess value captured by miners (or validators) from users in a cryptocurrency network or decentralized exchange. This excess value often comes from reordering users' transactions to maximize fees or inserting new transactions that allow a miner to front-run users’ transactions. The most common type of MEV involves what is known as a sandwich attack against a user trading on a popular class of automated market makers known as constant function market makers (CFMMs). We analyze game theoretic properties of MEV in CFMMs that we call routing and reordering MEV.

In the case of routing, we provide a number of examples where the existence of MEV degrades, and counterintuitively sometimes improves, social welfare. We construct an analogue of the price of anarchy for this setting and demonstrate that if the impact of a sandwich attack is localized in a suitable sense, then the price of anarchy is bounded. In the case of reordering, we show conditions under which the MEV caused by sandwich attacks is bounded by the transaction drift and grows as O(log n) in the number of transactions, and when it is optimal to batch sandwich attacks versus split them. Combined, our results provide insights that both MEV searchers and CFMM designers can use to estimate the costs and profits of MEV.
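A sandwich attack on a constant-product CFMM can be sketched in a few lines. This toy model (ours; fees set to zero for simplicity) shows the victim receiving less output and the attacker extracting a profit:

```python
def swap_out(x: float, y: float, dx: float) -> float:
    # Output received when selling dx units of X into a constant-product
    # pool with reserves x, y (invariant x * y = k; fees ignored).
    return y - x * y / (x + dx)

def sandwich(x: float, y: float, victim_dx: float, attacker_dx: float):
    # Front-run: the attacker buys Y first, pushing the price against the victim.
    a_out = swap_out(x, y, attacker_dx)
    x1, y1 = x + attacker_dx, y - a_out
    # The victim's trade executes at the worsened price.
    v_out = swap_out(x1, y1, victim_dx)
    x2, y2 = x1 + victim_dx, y1 - v_out
    # Back-run: the attacker sells its Y back for X.
    a_back = swap_out(y2, x2, a_out)
    return v_out, a_back - attacker_dx  # victim's output, attacker's profit
```

With reserves 1000/1000 and trades of 100 each, the victim receives about 75.8 units instead of 90.9, and the attacker nets about 18 units.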

Applications today rely on cloud databases for storing and querying time-series data. While outsourcing storage is convenient, this data is often sensitive, making data breaches a serious concern. We present Waldo, a time-series database with rich functionality and strong security guarantees: Waldo supports multi-predicate filtering, protects data contents as well as query filter values and search access patterns, and provides malicious security in the 3-party honest-majority setting. In contrast, prior systems such as Timecrypt and Zeph have limited functionality and security: (1) these systems can only filter on time, and (2) they reveal the queried time interval to the server. Oblivious RAM (ORAM) and generic multiparty computation (MPC) are natural choices for eliminating leakage from prior work, but both of these are prohibitively expensive in our setting due to the number of roundtrips and bandwidth overhead, respectively. To minimize both, Waldo builds on top of function secret sharing, enabling Waldo to evaluate predicates without client interaction. We develop new techniques for applying function secret sharing to the encrypted database setting where there are malicious servers, secret inputs, and chained predicates. With 32-core machines, Waldo runs a query with 8 range predicates over records in 3.03s, compared to 12.88s for an MPC baseline and 16.56s for an ORAM baseline. Compared to Waldo, the MPC baseline uses 9–82× more bandwidth between servers (for different numbers of records), while the ORAM baseline uses 20–152× more bandwidth between the client and server(s) (for different numbers of predicates). This talk is based on joint work with Mayank Rathee, Raluca Ada Popa, and Ion Stoica that appeared at IEEE S&P'22.

Single-Server Private Information Retrieval with Sublinear Amortized Time

Alexandra M Henzinger

This talk will present new private-information-retrieval protocols in the single-server setting. These schemes allow a client to privately fetch a sequence of database records from a server, while the server answers each query in average time sublinear in the database size.

Specifically, we introduce the first single-server private-information-retrieval schemes that have sublinear amortized server time, require sublinear additional storage, and allow the client to make her queries adaptively. Our protocols rely only on standard cryptographic assumptions (decision Diffie-Hellman, quadratic residuosity, learning with errors, etc.). They work by having the client first fetch a small “hint” about the database contents from the server. Generating this hint requires server time linear in the database size. Thereafter, the client can use the hint to make a bounded number of adaptive queries to the server, which the server answers in sublinear time — yielding sublinear amortized cost.
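The hint idea can be illustrated with a toy XOR-parity version (in the spirit of the two-server scheme of Corrigan-Gibbs and Kogan; all names here are ours). The client stores parities of random index sets offline; online, it asks for the parity of a punctured set and XORs with its stored hint. This sketch omits the steps, such as query re-randomization and the compilation to a single server, that make the real schemes private:

```python
import random

def offline_hint(db, num_sets: int, set_size: int, rng: random.Random):
    # Offline (linear-time) phase: the client stores random index sets
    # together with the XOR parity of the corresponding database entries.
    hints = []
    for _ in range(num_sets):
        s = set(rng.sample(range(len(db)), set_size))
        parity = 0
        for j in s:
            parity ^= db[j]
        hints.append((s, parity))
    return hints

def online_query(db, hints, i: int):
    # Online phase: find a stored set containing i, obtain the parity of the
    # punctured set s \ {i} (sublinear server work), and XOR it with the
    # stored hint to recover db[i].
    for s, parity in hints:
        if i in s:
            answer = 0
            for j in s - {i}:  # computed by the server in the real scheme
                answer ^= db[j]
            return parity ^ answer
    return None  # uncovered index; real schemes ensure coverage
```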

Finally, we give lower bounds proving that our most efficient scheme is optimal with respect to the trade-off it achieves between server online time and client storage.

This talk is based on joint work with Henry Corrigan-Gibbs and Dmitry Kogan.

A secret-sharing scheme allows a dealer to distribute a secret among n parties such that only some predefined “authorized” sets of parties can reconstruct the secret, and all other “unauthorized” sets learn nothing about it. For over 30 years, it was known that any (monotone) collection of authorized sets can be realized by a secret-sharing scheme whose shares are of size $2^{n-o(n)}$, and until recently no better scheme was known. Exciting recent developments, starting with the work of Liu and Vaikuntanathan (STOC 2018), have significantly reduced the share size, accumulating in an upper bound of $1.5^{n+o(n)}$ (Applebaum and Nir, CRYPTO 2021). Following these advances, it is natural to ask whether these new approaches can lead to a truly sub-exponential upper bound of $2^{n^{1-\epsilon}}$ for some constant $\epsilon>0$, or even all the way down to polynomial upper bounds.
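As background for the share-size question, the trivial construction realizing any monotone access structure additively shares the secret once per minimal authorized set; each party's share grows with the number of such sets, which can be exponential in n — the $2^{n-o(n)}$ baseline. A minimal sketch (ours, with hypothetical function names):

```python
import secrets

MOD = 2 ** 32

def share(secret: int, minimal_authorized: list, n: int):
    # For each minimal authorized set A, additively share the secret among
    # the members of A. A party's share is one sub-share per set it joins.
    shares = {p: {} for p in range(n)}
    for idx, A in enumerate(minimal_authorized):
        members = sorted(A)
        parts = [secrets.randbelow(MOD) for _ in members[:-1]]
        parts.append((secret - sum(parts)) % MOD)  # parts sum to the secret
        for p, part in zip(members, parts):
            shares[p][idx] = part
    return shares

def reconstruct(shares, coalition: set, minimal_authorized: list):
    # A coalition reconstructs iff it contains some minimal authorized set;
    # any other coalition is missing at least one additive part per set.
    for idx, A in enumerate(minimal_authorized):
        if A <= coalition:
            return sum(shares[p][idx] for p in A) % MOD
    return None
```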

In this talk, we relate this question to the complexity of computing monotone Boolean functions by monotone real circuits (MRCs) -- a computational model that was introduced by Pudlák (J. Symb. Log., 1997) in the context of proof complexity. We use this connection to obtain lower bounds against a natural family of secret-sharing schemes, as well as new non-trivial upper bounds for MRCs. Specifically, we conclude that recent approaches for secret-sharing schemes cannot achieve sub-exponential share size and that every monotone function can be realized by an MRC of complexity $1.5^{n+o(n)}$. To the best of our knowledge, this is the first improvement over the trivial $2^{n-o(n)}$ upper-bound. Along the way, we show that the recent constructions of general secret-sharing schemes implicitly give rise to Boolean formulas over slice functions and prove that such formulas can be simulated by MRCs of similar size. On a conceptual level, our paper continues the rich line of study that relates the share size of secret-sharing schemes to monotone complexity measures.

Based on joint work with Amos Beimel, Oded Nir, Naty Peter, and Toniann Pitassi.

Constructing space-efficient proof systems from different cryptographic assumptions

Jonathan Bootle

Various recent works study proof systems with streaming prover algorithms that run using a very small amount of memory. In this talk, I will explain how these proof systems work and what they have in common. Further, I will discuss how far the same techniques generalize to post-quantum cryptographic assumptions.

A transaction fee mechanism (TFM) is an essential component of a blockchain protocol. However, a systematic evaluation of the real-world impact of TFMs is still absent. Using rich data from the Ethereum blockchain, mempool, and exchanges, we study the effect of EIP-1559, one of the first deployed TFMs that departs from the traditional first-price auction paradigm. We conduct a rigorous and comprehensive empirical study to examine its causal effect on blockchain transaction fee dynamics, transaction waiting time, and consensus security. Our results show that EIP-1559 improves the user experience by mitigating intra-block differences in gas prices paid and reducing users’ waiting times. However, EIP-1559 has only a small effect on gas fee levels and consensus security. In addition, we find that when Ether's price is more volatile, the waiting time is significantly higher. We also verify that a larger block size increases the presence of siblings. These findings suggest new directions for improving TFMs.

In this seminar, we reflect on the interdisciplinary methodology at the interplay of Computer Science and Economics. We then demonstrate how the approach generally applies to study mechanism design on the Ethereum blockchain. Furthermore, we connect the research to the most recent interdisciplinary research in finance, blockchain security, and computational economics. We conclude with continued research supported by multiple programs, including Ethereum Academic grant, and shed light on future research directions.

How to Compress Encrypted Data

Mark Simkin

Aug 11, 2022

Consider the following simple problem:

Given an (additively homomorphic) encrypted vector of length n with at most t non-zero entries, by how much and how efficiently can this vector be compressed? This problem is at the core of many different topics, such as encrypted search protocols or anonymous message delivery systems.

In our work, we present multiple solutions to this problem. We show that one can compress the whole vector into O(t) ciphertexts at the cost of slightly higher computational complexities during (de)compression. Our main result is a new algorithm that compresses the vector into O(t * c / log t) ciphertexts, where 2^{-c} is the correctness error, while allowing for extremely efficient (de)compression algorithms. We discuss the practical implications of our work and conclude with some open questions.
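The flavor of the bucket-based approach can be conveyed by a plaintext toy (ours, not the paper's scheme): hash each nonzero entry into a few buckets keeping running (count, value-sum, index-sum) triples, which an additively homomorphic scheme could maintain under encryption, and recover entries by peeling singleton buckets:

```python
def bucket(i: int, j: int, m: int) -> int:
    # Toy hash family h_j(i); real schemes use random hash functions.
    return (37 * i + 101 * j) % m

def compress(nonzeros: dict, m: int, k: int = 3) -> list:
    # Each nonzero coordinate (i, v) is added into k buckets, each keeping
    # (count, value-sum, index-sum); these are all additions, so they can
    # be carried out on additively homomorphic ciphertexts.
    table = [[0, 0, 0] for _ in range(m)]
    for i, v in nonzeros.items():
        for j in range(k):
            b = bucket(i, j, m)
            table[b][0] += 1
            table[b][1] += v
            table[b][2] += i
    return table

def decompress(table: list, m: int, k: int = 3) -> dict:
    # Repeatedly peel buckets holding exactly one item, subtracting the
    # recovered item from its other buckets to expose new singletons.
    table = [row[:] for row in table]
    recovered, progress = {}, True
    while progress:
        progress = False
        for row in table:
            if row[0] == 1:
                i, v = row[2], row[1]
                recovered[i] = v
                for j in range(k):
                    b = bucket(i, j, m)
                    table[b][0] -= 1
                    table[b][1] -= v
                    table[b][2] -= i
                progress = True
    return recovered
```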

Joint work with Nils Fleischhacker and Kasper Green Larsen

We present a rate-1 construction of a publicly verifiable non-interactive argument system for batch-NP (also called a BARG), under the LWE assumption. In our system, a proof corresponding to a batch of k NP statements has size m + poly(lambda, log k), where m is the length of one witness and lambda is the security parameter. In contrast, prior work either relied on non-standard knowledge assumptions, or produced proofs of size at least m * poly(lambda) [KPY19, CJJ21]. We show how to use our rate-1 BARG scheme to obtain the following results under the LWE assumption (which were previously only known in the random oracle model or under non-standard knowledge assumptions): a multi-hop BARG scheme for NP, a multi-hop aggregate signature scheme, and an incrementally verifiable computation (IVC) scheme for arbitrary deterministic computations (beyond P).

Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling

Vitaly Feldman

Recent work of Erlingsson et al. (2019) demonstrates that random shuffling amplifies differential privacy guarantees of locally randomized data. Such amplification implies substantially stronger privacy guarantees for systems in which data is contributed anonymously and has led to significant interest in the shuffle model of privacy.

We give the first asymptotically optimal bounds on the approximate DP and Rényi DP guarantees satisfied by the algorithm that randomly shuffles the inputs to differentially private local randomizers. Our result is based on a new approach that is simpler than previous work and extends to approximate differential privacy with nearly the same guarantees. Importantly, our work also yields an algorithm for deriving tighter bounds on the resulting differential privacy guarantees. We show that our algorithm gets to within a small constant factor of the optimal bound.

Based on joint works with Audra McMillan and Kunal Talwar.

Payment channel networks (PCNs) provide a faster and cheaper alternative to transactions recorded on the blockchain. Clients can trustlessly establish payment channels with relays by locking coins and then send signed payments that shift coin balances over the network’s channels. Although payments are never published, anyone can track a client’s payment by monitoring changes in coin balances over the network’s channels. We present Twilight, the first PCN that provides a rigorous differential privacy guarantee to its users. Relays in Twilight run a noisy payment processing mechanism that hides the payments they carry. This mechanism increases the relay’s cost, so Twilight combats selfish relays that wish to avoid it using a trusted execution environment (TEE) that ensures they follow its protocol. The TEE does not store the channel’s state, which minimizes the trusted computing base. Crucially, Twilight ensures that even if a relay breaks the TEE’s security, it cannot break the integrity of the PCN. We analyze Twilight in terms of privacy and cost and study the trade-off between them. We implement Twilight using Intel’s SGX framework and evaluate its performance using relays deployed on two continents. We show that a route consisting of 4 relays handles 820 payments/sec.

We propose a mechanism for generating and manipulating protein polymers to obtain a new type of consumable storage that exhibits intriguing cryptographic “self-destruct” properties, assuming the hardness of certain polymer-sequencing problems.

To demonstrate the cryptographic potential of this technology, we first develop a formalism that captures (in a minimalistic way) the functionality and security properties provided by the technology. Next, using this technology, we construct and prove security of two cryptographic applications that are currently obtainable only via trusted hardware that implements logical circuitry (either classical or quantum). The first application is a password-controlled secure vault where the stored data is irrecoverably erased once a threshold of unsuccessful access attempts is reached. The second is (a somewhat relaxed version of) one time programs, namely a device that allows evaluating a secret function only a limited number of times before self-destructing, where each evaluation is made on a fresh user-chosen input.

Finally, while our constructions, modeling, and analysis are designed to capture the proposed polymer-based technology, they are sufficiently general to be of potential independent interest.

This is a joint work with Ran Canetti, Yaniv Erlich, Jonathan Gershoni, Tal Malkin, Itsik Pe’er, Anna Roitburd-Berman & Eran Tromer

In private information retrieval (PIR), a client wants to retrieve a record from a database server without revealing to the server which record they retrieved. In the single-server setting we examine, the server should not learn this information even if it acts maliciously. In this work, we show how to compose lattice-based homomorphic encryption schemes to achieve significant improvements in the communication and computational costs of PIR. We introduce new ciphertext translation techniques to convert between Regev and Gentry-Sahai-Waters encodings, an improved modulus switching routine, and an automatic parameter selection system. A Wikipedia demo of our work is available at https://spiralwiki.com.
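For intuition about the Regev encodings that Spiral composes, here is a toy (insecure, tiny-parameter) Regev-style bit encryption; Spiral's actual parameters, ciphertext packing, and conversions to Gentry-Sahai-Waters encodings are far more involved:

```python
import random

Q, N = 3329, 32  # toy modulus and dimension; far too small to be secure

def keygen(rng: random.Random):
    # Secret key: a uniform vector s in Z_Q^N.
    return [rng.randrange(Q) for _ in range(N)]

def encrypt(s, bit: int, rng: random.Random, noise: int = 2):
    # Regev encoding: c = (a, <a, s> + e + bit * Q//2) mod Q,
    # with the message placed in the high-order part of the modulus.
    a = [rng.randrange(Q) for _ in range(N)]
    e = rng.randrange(-noise, noise + 1)
    b = (sum(ai * si for ai, si in zip(a, s)) + e + bit * (Q // 2)) % Q
    return a, b

def decrypt(s, ct) -> int:
    a, b = ct
    # Subtract <a, s>; what remains is e + bit * Q//2, so threshold it.
    centered = (b - sum(ai * si for ai, si in zip(a, s))) % Q
    return 1 if Q // 4 < centered < 3 * Q // 4 else 0
```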

Across a broad range of database configurations, the basic version of Spiral simultaneously achieves at least a 4.5x reduction in query size, 1.5x reduction in response size, and 2x increase in server throughput compared to previous systems. A variant of our scheme, SpiralStreamPack, is optimized for the streaming setting and achieves a server throughput of 1.9 GB/s for databases with over a million records (compared to 200 MB/s for previous systems) and a rate of 0.81 (compared to 0.24 for previous systems). For streaming large records (e.g., a private video stream), we estimate the monetary cost of SpiralStreamPack to be only 1.9x greater than that of the no-privacy baseline where the client directly downloads the desired record.

This is joint work with David Wu.

We introduce a new class of succinct arguments that we call elastic. Elastic SNARKs allow the prover to allocate different resources (such as memory and time) depending on the execution environment and the statement to prove. The resulting output is independent of the prover’s configuration. To study elastic SNARKs, we extend the streaming paradigm of [Block et al., TCC’20]. We provide a definitional framework for elastic polynomial interactive oracle proofs for R1CS instances and design a compiler which transforms an elastic PIOP into a preprocessing argument system that supports streaming or random access to its inputs. Depending on the configuration, the prover will choose different trade-offs for time (either linear, or quasilinear) and memory (either linear, or logarithmic).

We prove the existence of elastic SNARKs by presenting Gemini, a novel FFT-free preprocessing argument. We prove its security and develop a proof-of-concept implementation in Rust based on the arkworks framework. We provide benchmarks for large R1CS instances of tens of billions of gates on a single machine.

Differential privacy (DP) is a formal notion of privacy for algorithms. Traditionally, the study of DP focused on two main models: the central model and the local model. The former requires a trusted curator but provides good utility guarantees, whereas the latter does not require any trusted curator but incurs larger errors. In recent years, the shuffle model of DP has emerged as an intermediate option, requiring less trust than the central model but yielding better accuracy than the local model. In this talk, I will describe a few protocols for aggregation in the shuffle model which achieve similar accuracy guarantees as the central model while also having small communication overhead.
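A minimal sketch of the model (ours, not one of the talk's protocols): each user applies binary randomized response locally, the shuffler discards the user-to-report linkage, and the analyzer debiases the shuffled sum:

```python
import math
import random

def local_randomizer(bit: int, eps: float, rng: random.Random) -> int:
    # Binary randomized response: report truthfully with prob e^eps/(1+e^eps).
    p = math.exp(eps) / (1 + math.exp(eps))
    return bit if rng.random() < p else 1 - bit

def shuffle_and_estimate(reports: list, eps: float, rng: random.Random) -> float:
    # The shuffler only breaks the user-report linkage; for a sum, the
    # analyzer just debiases the total of the shuffled reports:
    # E[report] = (1 - p) + bit * (2p - 1), so invert that affine map.
    rng.shuffle(reports)
    p = math.exp(eps) / (1 + math.exp(eps))
    n = len(reports)
    return (sum(reports) - n * (1 - p)) / (2 * p - 1)
```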

Based on joint work with Badih Ghazi, Noah Golowich, Ravi Kumar, Rasmus Pagh, Amer Sinha and Ameya Velingker

Existing quantum compilers optimize quantum circuits by applying circuit transformations designed by experts. This approach requires significant manual effort to design and implement circuit transformations for different quantum devices, which use different gate sets, and can miss optimizations that are hard to find manually. We propose Quartz, a quantum circuit superoptimizer that automatically generates and verifies circuit transformations for arbitrary quantum gate sets. For a given gate set, Quartz generates candidate circuit transformations by systematically exploring small circuits and verifies the discovered transformations using an automated theorem prover. To optimize a quantum circuit, Quartz uses a cost-based backtracking search that applies the verified transformations to the circuit. Our evaluation on three popular gate sets shows that Quartz can effectively generate and verify transformations for different gate sets. The generated transformations cover manually designed transformations used by existing optimizers and also include new transformations. Quartz is therefore able to optimize a broad range of circuits for diverse gate sets, outperforming or matching the performance of hand-tuned circuit optimizers.
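The verification step can be pictured concretely: a candidate transformation is valid when the two circuits compute the same unitary. Quartz checks this symbolically with an automated theorem prover; the toy below (ours) just multiplies matrices for the fixed gates H, Z, and X:

```python
def matmul(A, B):
    # Plain matrix product over nested lists.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def approx_equal(A, B, tol: float = 1e-9) -> bool:
    return all(abs(A[i][j] - B[i][j]) < tol
               for i in range(len(A)) for j in range(len(A[0])))

s = 2 ** -0.5
H = [[s, s], [s, -s]]   # Hadamard
X = [[0, 1], [1, 0]]    # Pauli-X
Z = [[1, 0], [0, -1]]   # Pauli-Z
I2 = [[1, 0], [0, 1]]   # identity

# Two transformations a superoptimizer might discover, then verify:
#   H; H     can be rewritten to the identity
#   H; Z; H  can be rewritten to X
assert approx_equal(matmul(H, H), I2)
assert approx_equal(matmul(H, matmul(Z, H)), X)
```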

Spreading the Privacy Blanket: Differentially Oblivious Shuffling for Differential Privacy

Mingyu Liang

In the shuffle model for differential privacy, n users locally randomize their data and submit the results to a trusted “shuffler” who mixes the results before sending them to a server for analysis. This is a promising model for real-world applications of differential privacy, as several recent results have shown that, in some cases, the shuffle model offers a strictly better privacy/utility tradeoff than what is possible in a purely local model.

A downside of the shuffle model is its reliance on a trusted shuffler, and it is natural to try to replace this with a distributed shuffling protocol run by the users themselves. While it would of course be possible to use a fully secure shuffling protocol, one might hope to instead use a more-efficient protocol having weaker security guarantees.

In our work, we consider a relaxation of secure shuffling called differential obliviousness that we prove suffices for differential privacy in the shuffle model. We also propose a differentially oblivious shuffling protocol based on onion routing that requires only O(n \log n) communication while tolerating any constant fraction of corrupted users. We show that for practical settings of the parameters, our protocol outperforms existing solutions to the problem.


We present a lower bound for the static cryptographic data structure problem of single-server private information retrieval (PIR). PIR considers the setting where a server holds a database of n entries and a client wishes to privately retrieve the i-th entry without revealing the index i to the server. In our work, we focus on PIR with preprocessing where an r-bit hint may be computed in a preprocessing stage and stored by the server to be used to perform private queries in expected time t. We consider the public preprocessing setting of Beimel et al. [JoC, 2004] where the hint is publicly available to everyone including the adversary.

We prove that for any single-server computationally secure PIR with preprocessing, it must be that tr = Ω(n log n) when r = Ω(log n). If r = O(log n), we show that t = Ω(n). Our lower bound holds even when the scheme errs with probability 1/n^2 and the adversary's distinguishing advantage is 1/n. Our work improves upon the tr = Ω(n) lower bound of Beimel et al. [JoC, 2004]. We prove our lower bound in a variant of the cell probe model where only accesses to the memory are charged cost, while computation and accesses to the hint are free. Our main technical contribution is a novel use of the cell sampling technique (also known as the incompressibility technique) used to obtain lower bounds on data structures. In previous works, this technique leveraged only the correctness guarantees to prove lower bounds, even when used for cryptographic primitives. Our work combines the cell sampling technique with the privacy guarantees of PIR to construct a powerful, polynomial-time adversary that is critical to proving our stronger lower bounds.

Modular Design of Secure Group Messaging Protocols and the Security of MLS

Yiannis Tselekounis

Secure messaging (SM) protocols allow users to communicate securely over untrusted infrastructure. In contrast to most other secure communication protocols (such as TLS, SSH, or Wireguard), SM sessions may be long-lived (e.g., years) and highly asynchronous. In order to deal with likely state compromises of users during the lifetime of a session, SM protocols do not only protect authenticity and privacy, but they also guarantee forward secrecy (FS) and post-compromise security (PCS). The former ensures that messages sent and received before a state compromise remain secure, while the latter ensures that users can recover from state compromise as a consequence of normal protocol usage.

SM has received considerable attention in the two-party case, where prior work has studied the well-known double-ratchet paradigm in particular and SM as a cryptographic primitive in general. Unfortunately, this paradigm does not scale well to the problem of secure group messaging (SGM). In order to address the lack of satisfactory SGM protocols, the IETF has launched the message-layer security (MLS) working group, which aims to standardize an eponymous SGM protocol. In this work we analyze the TreeKEM protocol, which is at the core of the SGM protocol proposed by the MLS working group, and we formally capture its exact security as a so-called continuous group key agreement (CGKA) protocol. Furthermore, we formally capture the security of full SGM protocols by defining a corresponding security game, which is parametrized by a safety predicate that characterizes the exact level of security achieved by a construction. Then, we cast MLS as an SGM protocol, showing how to modularly build it from the following three main components (and some additional standard cryptographic primitives) in a black-box fashion: (a) CGKA, (b) forward-secure group AEAD (FS-GAEAD), which is a new primitive and roughly corresponds to an "epoch" of group messaging, and (c) a so-called PRF-PRNG, which is a two-input hash function that is a pseudorandom function (resp. generator with input) in its first (resp. second) input.

Real-world random number generators (RNGs) cannot afford to use (slow) cryptographic hashing every time they refresh their state $R$ with a new entropic input $X$. Instead, they use ``superefficient'' simple entropy-accumulation procedures, such as

\[ R \gets \rot_{\alpha, n}(R) \oplus X, \]

where $\rot_{\alpha,n}$ rotates an $n$-bit state $R$ by some fixed number $\alpha$. For example, Microsoft's RNG uses $\alpha=5$ for $n=32$ and $\alpha=19$ for $n=64$. Where do these numbers come from? Are they good choices? Should rotation be replaced by a better permutation $\pi$ of the input bits?
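The refresh rule above is simple enough to state directly in code. Here is a minimal sketch using the constants cited for Microsoft's 64-bit RNG (alpha = 19, n = 64); the sample inputs are placeholders, not real entropic data.

```python
def rot(r: int, alpha: int, n: int) -> int:
    """Rotate the n-bit state r left by alpha bit positions."""
    mask = (1 << n) - 1
    return ((r << alpha) | (r >> (n - alpha))) & mask

def refresh(r: int, x: int, alpha: int, n: int) -> int:
    """One entropy-accumulation step: R <- rot_{alpha,n}(R) XOR X."""
    return rot(r, alpha, n) ^ x

# Toy usage: fold three placeholder entropic inputs into a 64-bit state.
state = 0
for x in (0x1234, 0x5678, 0x9ABC):
    state = refresh(state, x, alpha=19, n=64)
```

Note how cheap each step is compared to a cryptographic hash: one rotation and one XOR, which is exactly why real-world RNGs favor this kind of accumulation.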

In this work we initiate a rigorous study of these pragmatic questions, by modeling the sequence of successive entropic inputs $X_1,X_2,\ldots$ as \emph{independent} (but otherwise adversarial) samples from some natural distribution family ${\cal D}$.

Our contribution is as follows.

We define 2-monotone distributions as a rich family ${\cal D}$ that includes relevant real-world distributions (Gaussian, exponential, etc.), but avoids trivial impossibility results.

For any $\alpha$ with $\gcd(\alpha,n)=1$, we show that rotation accumulates $\Omega(n)$ bits of entropy from $n$ independent samples $X_1,\ldots,X_n$ from any (unknown) $2$-monotone distribution with entropy $k > 1$.

However, we also show that some choices of $\alpha$ perform much better than others for a given $n$. For example, we show that $\alpha=19$ is one of the best choices for $n=64$; in contrast, $\alpha=5$ is good, but generally worse than $\alpha=7$, for $n=32$.

More generally, given a permutation $\pi$ and $k\ge 1$, we define a simple parameter, the covering number $C_{\pi,k}$, and show that it characterizes the number of steps before the rule $$(R_1,\ldots,R_n)\gets (R_{\pi(1)},\ldots, R_{\pi(n)})\oplus X$$ accumulates nearly $n$ bits of entropy from independent, $2$-monotone samples of min-entropy $k$ each.

We build a simple permutation $\pi^*$, which achieves nearly optimal $C_{\pi^*,k}\approx n/k$ for all values of $k$ simultaneously, and experimentally validate that it compares favorably with all rotations $\rot_{\alpha,n}$.

Incompressible encryption allows us to make the ciphertext size flexibly large and ensures that an adversary learns nothing about the encrypted data, even if the decryption key later leaks, unless she stores essentially the entire ciphertext. Incompressible signatures can be made arbitrarily large and ensure that an adversary cannot produce a signature on any message, even one she has seen signed before, unless she stores one of the signatures essentially in its entirety.

We give simple constructions of both incompressible public-key encryption and signatures under minimal assumptions. Furthermore, large incompressible ciphertexts (resp. signatures) can be decrypted (resp. verified) in a streaming manner with low storage. In particular, these notions strengthen the related concepts of disappearing encryption and signatures. We extend our constructions to achieve an optimal "rate", meaning the large ciphertexts (resp. signatures) can contain almost equally large messages, at the cost of stronger assumptions.

In this talk, we show that it is possible to perform $n$ independent copies of $1$-out-of-$2$ oblivious transfer in two messages, where the communication complexity of the receiver and sender (each) is $n(1+o(1))$ for sufficiently large $n$. Note that this matches the information-theoretic lower bound. Prior to this work, this was only achievable by using the heavy machinery of rate-$1$ fully homomorphic encryption (Rate-$1$ FHE, Brakerski et al., TCC 2019).

To achieve rate-$1$ both on the receiver's and sender's end, we use the LPN assumption, with slightly sub-constant noise rate $1/m^{\epsilon}$ for any $\epsilon>0$ together with either the DDH, QR or LWE assumptions. In terms of efficiency, our protocols only rely on linear homomorphism, as opposed to the FHE-based solution which inherently requires an expensive ``bootstrapping'' operation. We believe that in terms of efficiency we compare favorably to existing batch-OT protocols, while achieving superior communication complexity. We show similar results for Oblivious Linear Evaluation (OLE).

Joint work with Zvika Brakerski, Nico Döttling and Sihang Pu.

The use of cryptography is ubiquitous in our daily life. However, the development of faster quantum computers can break today’s crypto-systems. Therefore, it is imperative that we develop and deploy post-quantum cryptography, before scalable quantum computers become a reality. My research focuses on designing and formally verifying cryptographic primitives based on post-quantum assumptions. My approach combines cryptography and formal methods, aiming to bring provable security to real-world applications. In this talk, I will discuss how to design a post-quantum secure encryption scheme that provides fine-grained access control over encrypted data. Further, I will describe a system called AutoLWE, capable of mechanizing security proofs of lattice-based cryptosystems.

In this talk we present our multi-party Prime Match solution for matching orders in a stock exchange while maintaining the privacy of the orders. Information is revealed only if there is a match. To achieve our solution, we present a new protocol for secure comparison with malicious security. Prime Match has been running in production since September 2022.

Algebraic Reductions of Knowledge

Abhiram Kothapalli

December 16, 4:30 pm

Arguments of knowledge are powerful cryptographic primitives that allow a prover to demonstrate that it knows a satisfying witness to a prescribed statement. Tremendous progress has been made in developing efficient argument systems by leveraging homomorphic structure in an increasingly composable and recursive manner. However, the extent to which homomorphisms can be composed and manipulated in the service of efficient argument systems is still not well understood. To this end, we introduce reductions of knowledge, a generalization of arguments of knowledge, which reduce checking a statement in one relation to checking a derived statement in another, and better capture the composable and recursive nature of arguments over homomorphisms. We construct and study the tensor reduction, which is capable of reducing any homomorphic statement composed via the tensor product, and provides knowledge soundness unconditionally when working over vector spaces. We show that tensor reductions generalize a large class of prior recursive techniques including the ubiquitous sumcheck protocol. We additionally show that tensor reductions can be employed to construct reductions of knowledge with logarithmic communication for familiar linear algebraic statements, and in turn, these can be composed to recover a reduction of knowledge for NP with logarithmic communication.

Garbled RAM (GRAM) is a powerful technique that equips Garbled Circuit (GC) with a sublinear cost RAM without adding rounds of interaction. While GRAM constructions are known, none are suitable for practice, due to costs that have high constants and poor scaling.

We present the first GRAM suitable for practice. For computational security parameter $\kappa$ and for a size-$n$ RAM that stores blocks of size $w = \Omega(\log^2 n)$ bits, our GRAM incurs only amortized $O(w \cdot \log^2 n \cdot \kappa)$ communication and computation per access. We evaluate the concrete cost of our GRAM; our approach outperforms trivial linear-scan-based RAM for as few as 512 128-bit elements.

The goal of secure multi-party computation (MPC) is to allow a set of parties to perform an arbitrary computation task, where the security guarantees depend on the set of parties that are corrupted. The more parties are corrupted, the less is guaranteed, and typically the guarantees are completely lost when the number of corrupted parties exceeds a certain corruption bound.

Early and also many recent protocols are only statically secure in the sense that they provide no security guarantees if the adversary is allowed to choose adaptively which parties to corrupt. Security against an adversary with such a strong capability is often called ``adaptive security'' and a significant body of literature is devoted to achieving adaptive security, which is known as a difficult problem. In particular, a main technical obstacle in this context is the so-called ``commitment problem'', where the simulator is unable to consistently explain the internal state of a party with respect to its pre-corruption outputs. As a result, protocols typically resort to the use of cryptographic primitives like non-committing encryption, incurring a substantial efficiency loss.

We propose a new natural security notion, which is technically weaker than standard adaptive security but nevertheless captures security against a fully adaptive adversary. Known protocol examples separating adaptive from static security are also insecure under our notion. Moreover, our notion avoids the commitment problem and thereby the need for non-committing or equivocal tools.

Joint work with Martin Hirt and Ueli Maurer.

Sleepy Channels: Bitcoin-Compatible Bi-directional Payment Channels without Watchtowers

Sri AravindaKrishnan Thyagarajan

Payment channels (PC) are a promising solution to the scalability issue of cryptocurrencies, allowing users to perform the bulk of their transactions off-chain without needing to post everything on the blockchain. Many PC proposals, however, suffer from a severe limitation: both parties need to constantly monitor the blockchain to ensure that the other party did not post an outdated transaction. If this event happens, the honest party needs to react promptly and engage in a punishment procedure. This means that prolonged absence periods (e.g., due to a power outage) may be exploited by malicious users. As a mitigation, the community has introduced watchtowers, third parties that monitor the blockchain on behalf of off-line users. Unfortunately, watchtowers are either trusted, which is critical from a security perspective, or they have to lock a certain amount of coins, called collateral, for each monitored PC in order to be held accountable, which is financially infeasible for a large network.

We present Sleepy Channels, the first bi-directional PC protocol without watchtowers (or any other third party) that supports an unbounded number of payments and does not require parties to be persistently online. The key idea is to confine the period in which PC updates can be validated on-chain to a short, pre-determined time window, which is when the PC parties have to be online. This behavior is incentivized by letting the parties lock a collateral in the PC, which can be adjusted depending on their mutual trust and which they get back much sooner if they are online during this time window. Our protocol is compatible with any blockchain that is capable of verifying digital signatures (e.g., Bitcoin), as shown by our proof of concept. Moreover, Sleepy Channels impose a communication and computation overhead similar to state-of-the-art PC protocols, while removing the watchtower's collateral and fees for the monitoring service.

(Im)possibility Results for Transaction Fee Mechanism Design

Hao Chung

In this talk, I will present my recent research result with Elaine.

In short, in blockchains such as Bitcoin and Ethereum, users compete in a transaction fee auction to get their transactions confirmed in the next block. A line of recent works set forth the desiderata for a ``dream'' transaction fee mechanism (TFM), and explored whether such a mechanism exists.

In this work, we prove a new impossibility result: assuming finite block size, no single-parameter, non-trivial, possibly randomized TFM can simultaneously satisfy truthful bidding and miner-user side contract proofness. On the other hand, under a relaxed notion of the user's utility, we propose a mechanism that satisfies both truthful bidding and miner-user side contract proofness.

Abstract:

We prove the equivalence of two fundamental problems in the theory of computing. For every polynomial t(n) > 2n, the following are equivalent: (1) cryptographic one-way functions exist; (2) the t-time bounded Kolmogorov complexity problem is mildly hard-on-average. In doing so, we present the first natural, and well-studied, computational problem characterizing the feasibility of the central private-key primitives and protocols in cryptography.

Joint work with Rafael Pass

https://eccc.weizmann.ac.il/report/2020/052/

Abstract:

This work concerns secure protocols in the massively parallel computation (MPC) model, which is one of the most widely-accepted models for capturing the challenges of writing protocols for the types of parallel computing clusters which have become commonplace today (MapReduce, Hadoop, Spark, etc.). Recently, the work of Chan et al.\ (ITCS '20) initiated this study, giving a way to compile any MPC protocol into a secure one in the common random string model, achieving the standard secure multi-party computation definition of security with up to 1/3 of the parties being corrupt.

We are interested in achieving security for much more than 1/3 corruptions. To that end, we give two compilers for MPC protocols, which assume a simple public-key infrastructure, and achieve semi-honest security for all-but-one corruptions. Our first compiler assumes hardness of the learning-with-errors (LWE) problem, and works for any MPC protocol with ``short'' output---that is, where the output of the protocol can fit into the storage space of one machine, for instance protocols that output a trained machine learning model. Our second compiler works for any MPC protocol (even ones with a long output, such as sorting) but assumes, in addition to LWE, indistinguishability obfuscation and a circular secure variant of threshold FHE. Both protocols allow the attacker to choose corrupted parties based on the trusted setup, an improvement over Chan et al., whose protocol requires that the CRS is chosen independently of the attacker's choices.

Abstract:

Broadcast (BC) is a crucial ingredient for a plethora of cryptographic protocols such as secret sharing and multiparty computation. In this paper we apply \emph{gossiping} (propagating a message by sending to a few random parties who in turn do the same, until the message is delivered) to design new randomized BC protocols with improved communication complexity and which are secure against an adversary controlling the majority of parties. We make progress on two fronts. First, we propose a protocol for single-sender BC in the static model of corruption that achieves $\tilde O(n^2 \cdot \kappa^2)$ bits of communication and where no trusted setup is required---parties just need to generate their own cryptographic keys. All prior protocols in this setting exhibit $ O(n^3 \cdot \kappa)$ communication. Using insights from our single-sender BC protocol, we then propose the first adaptively-secure parallel BC protocol with $\tilde O(n^2 \cdot \kappa^4)$ communication complexity, significantly improving existing parallel BC protocols of $\tilde O(n^3)$ communication. To the best of our knowledge, our parallel BC protocol is the first non-trivial one, i.e., one that is not using a single-sender BC protocol $n$ times and in a black box fashion, thus leading to the improved complexity.

Time- and Space-Efficient Arguments from Groups of Unknown Order

Pratik Soni

Abstract:

We construct public-coin time- and space-efficient zero-knowledge arguments for NP. For every non-deterministic RAM computation running in time T and space S, the prover runs in time T⋅polylog(T) and space S⋅polylog(T), and the verifier runs in time n⋅polylog(T), where n is the input length. Our protocol relies on hidden-order groups, which can be instantiated, assuming a trusted setup, from the hardness of factoring (products of safe primes), or, without a trusted setup, using class groups. The argument system can heuristically be made non-interactive using the Fiat-Shamir transform.

Our proof builds on DARK (Bünz et al., Eurocrypt 2020), a recent succinct and efficiently verifiable polynomial commitment scheme. We show how to implement a variant of DARK in a time- and space-efficient way. Along the way we:

1. Identify a significant gap in the proof of security of DARK.

2. Give a non-trivial modification of the DARK scheme that overcomes the aforementioned gap. The modified version also relies on significantly weaker cryptographic assumptions than those in the original DARK scheme. Our proof utilizes ideas from the theory of integer lattices in a novel way.

3. Generalize Pietrzak's (ITCS 2019) proof of exponentiation (PoE) protocol to work with general groups of unknown order (without relying on any cryptographic assumption).

In proving these results, we develop general-purpose techniques for working with (hidden order) groups, which may be of independent interest.

Reaching Agreement Without Saying Much: Byzantine Agreement with Polylog Bits Per Party

Ran Cohen

Abstract:

Byzantine agreement (BA), the task of n parties to agree on one of their input bits in the face of malicious agents, is a powerful primitive that lies at the core of a vast range of distributed protocols. Interestingly, in protocols with the best overall communication, the demands on the parties are highly unbalanced: the amortized cost is polylog(n) bits per party, but some parties must send \Omega(n) bits. In the best known balanced protocols, the overall communication is sub-optimal, with each party communicating O(\sqrt n) bits.

In this talk, we explore whether asymmetry is inherent for optimizing total communication. We show that this is not the case by presenting two BA protocols where every party communicates only polylog(n) bits; the constructions rely on a new flavor of distributed signatures and offer a tradeoff between setup assumptions and cryptographic assumptions. Next, we will discuss limitations and barriers of this approach, and conclude with open questions.

This is a joint work with Elette Boyle and Aarushi Goel.

Abstract:

Proof-carrying data (PCD) is a powerful cryptographic primitive that enables mutually distrustful parties to perform distributed computations that run indefinitely. Prior approaches to construct PCD are based on recursive applications of succinct non-interactive arguments of knowledge (SNARKs) that have a succinct verifier or a succinct accumulation scheme. In this talk I will describe how to obtain PCD without relying on SNARKs. In particular, we construct a PCD scheme given any non-interactive argument of knowledge (e.g., with linear-size proofs) that has a split accumulation scheme, which is a weak form of accumulation that we introduce. We then exploit this new framework to achieve a more efficient PCD construction, by giving an accumulation scheme for a non-interactive argument of knowledge for R1CS with constant verification time. Concretely the recursive circuit can be as small as 3 exponentiations in a group with hard discrete logarithm. We also avoid the use of FFTs and other structures in the cryptographic group.

Our results are supported by a modular and efficient implementation.

Average-case Complexity Through the Lens of Interactive Puzzles

Muthuramakrishnan Venkitasubramaniam

Abstract:

Consider the following two fundamental open problems in complexity theory: (a) Does a hard-on-average language in NP imply the existence of one-way functions?, or (b) Does a hard-on-average language in NP imply a hard-on-average problem in TFNP (i.e., the class of total NP search problem)? Our main result is that the answer to (at least) one of these questions is yes.

Both one-way functions and problems in TFNP can be interpreted as promise-true distributional NP search problems---namely, distributional search problems where the sampler only samples true statements. As a direct corollary of the above result, we thus get that the existence of a hard-on-average distributional NP search problem implies a hard-on-average promise-true distributional NP search problem. In other words, ``it is no easier to find witnesses (a.k.a. proofs) for efficiently-sampled statements (theorems) that are guaranteed to be true.''

This result follows from a more general study of interactive puzzles---a generalization of average-case hardness in NP---and in particular, a novel round-collapse theorem for computationally-sound protocols, analogous to Babai-Moran's celebrated round-collapse theorem for information-theoretically sound protocols.

Joint work with Rafael Pass

Abstract:

Efficient, leakage-free search on encrypted data has remained an unsolved problem for the last two decades; efficient schemes are vulnerable to leakage-abuse attacks, and schemes that eliminate leakage are impractical to deploy. To overcome this tradeoff, we reexamine the system model. We surveyed five companies providing end-to-end encrypted filesharing to better understand what they require from an encrypted search system. Based on our findings, we design and build DORY, an encrypted search system that addresses real-world requirements and protects search access patterns; namely, when a user searches for a keyword over the files within a folder, the server learns only that a search happens in that folder, but does not learn which documents match the search, the number of documents that match, or other information about the keyword. DORY splits trust between multiple servers to protect against a malicious attacker who controls all but one of the servers. We develop new cryptographic and systems techniques to meet the efficiency and trust model requirements outlined by the companies we surveyed. We implement DORY and show that it performs orders of magnitude better than a baseline built on ORAM. Parallelized across 8 servers, each with 16 CPUs, DORY takes 116ms to search roughly 50K documents and 862ms to search over 1M documents.

Abstract:

An Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (J. ACM 1996), is a (probabilistic) RAM that hides its access pattern, i.e., for every input the observed locations accessed are similarly distributed. In recent years there has been great progress both in terms of upper bounds as well as in terms of lower bounds, essentially pinning down the smallest overhead possible in various settings of parameters.

In this talk, I will discuss a very natural setting of parameters in which no non-trivial lower bound is known, not even in restricted models of computation (like the so-called balls-and-bins model). Let N and w be the number of cells and the bit-size of cells, respectively, in the RAM that we wish to simulate obliviously. Denote by b the cell bit-size of the ORAM. All previous ORAM lower bounds have a multiplicative w/b factor, which makes them trivial in many settings of parameters of interest. In this talk, I will show a new ORAM lower bound that captures this setting (and in all other settings it is at least as good as previous ones, quantitatively). That is, any ORAM must make (amortized) Ω(log(Nw) / log(b/w)) memory probes for every logical operation. Our lower bound implies that logarithmic overhead in accesses is necessary, even if b >> w. Our lower bound is tight for settings of parameters, up to the log(b/w) factor. Our bound also extends to the non-colluding multi-server setting.

This is a joint work with Ilan Komargodski.

Power in Weakness: Efficient Perfectly Secure Multiplication with Optimal Resilience and Constant Time

Gilad Asharov

Abstract:

Secure computation enables $n$ mutually distrustful parties to compute a function over their private inputs jointly. A fundamental result in the area is that of Ben-Or, Goldwasser, and Wigderson (BGW) in 1988, showing that any function can be computed with perfect security in the presence of a malicious adversarial entity controlling at most $t< n/3$ parties.

A crucial part in the construction of BGW is a protocol for multiplying two shared values. Despite 30 years of research, the protocol requires sharing a total of $O(n^2)$ values per multiplication. In contrast, the semi-honest protocol of BGW and comparable protocols in the computational setting that are based on secret sharing require the sharing of only $O(n)$ values for a single multiplication. In this paper we close this gap, leading to a more efficient construction while maintaining a round complexity that is constant per multiplication. Our result is obtained by exploring the limits of verifiable secret sharing and constructing a protocol for weak verifiable secret sharing of a polynomial of degree $2t$ with the same communication complexity and round complexity as strong verifiable secret sharing of a polynomial of degree $t$.

Besides efficiency improvements, our protocol overall simplifies the BGW construction, which has also pedagogical significance due to its centrality. In addition, we show how our new approach improves the efficiency of depth-$1$ computations, e.g., matrix multiplications.

Joint work with Ittai Abraham and Avishay Yanai.

Abstract:

Since the seventies, one of the holy grails of cryptography has been the invention of encryption algorithms that allow computation to be performed directly on ciphertexts without prior decryption. While heavy cryptographic hammers like fully-homomorphic encryption and oblivious RAMs can address (versions of) the aforementioned problem with ideal security guarantees, encrypted databases provide a more practical alternative. An encrypted database achieves considerable efficiency by releasing some formally-defined and superficially harmless information, known as leakage. However, it turns out such leakage can lead to complete value reconstruction of the database! In this talk I will review some of the basic techniques to perform database reconstruction from range search leakage and then I will present my recent work on query distribution-agnostic attacks on encrypted databases. I will conclude with some suggestions about how to argue formally about the security of encrypted databases.

This talk is based on joint works with Evgenios Kornaropoulos (UC Berkeley), Alexandros Psomas (Purdue University), Dawn Song (UC Berkeley) and Roberto Tamassia (Brown University).

Abstract:

Since the notion of Multiparty Computation (MPC) was proposed three decades ago, a lot of research and effort has been done to improve the efficiency of MPC protocols. However, the inefficiency of the current state of the art is still the major barrier which prevents MPC from being used more broadly.

In this talk, we focus on unconditional (or information-theoretic) MPC. A key feature of unconditional MPC is that we do not need any expensive cryptographic primitives (such as public-key encryption or oblivious transfer), and the protocol is secure unconditionally. Compared with protocols in the computational setting (i.e., with security relying on cryptographic assumptions), one major benefit is that protocols usually do not require complicated and time-consuming local computations. In particular, local computations are often just a series of linear operations. As a result, the most efficient MPC protocols are in the unconditional MPC paradigm, and the main criterion for the efficiency of unconditional MPC protocols is the amount of communication between every pair of parties.

We will start with a short review of the notion of MPC. Then we will introduce the Damgård-Nielsen protocol (DN protocol), the best-known communication-efficient unconditional MPC protocol in the semi-honest setting. Next, we will show how previous works achieve malicious security on top of the DN protocol. Finally, we will introduce our techniques, which allow us to achieve malicious security with the same concrete efficiency as the semi-honest DN protocol.
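As a minimal illustration of why local computation in this setting reduces to linear operations, here is a toy additive secret-sharing sketch in Python (illustrative only; the DN protocol itself builds on Shamir sharing over a field):

```python
# Additive secret sharing over a prime field: a shared value is split
# among n parties, and addition of shared values is a purely LOCAL,
# linear operation -- no communication and no cryptographic hardness.
import random

P = 2**61 - 1          # a Mersenne prime modulus (illustrative choice)

def share(x, n=3):
    """Split x into n additive shares summing to x mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

a, b = 12345, 67890
sa, sb = share(a), share(b)

# Addition of secrets: each party simply adds its own two shares.
sc = [(x + y) % P for x, y in zip(sa, sb)]
print(reconstruct(sc))   # 80235
```

Multiplication of shared values is where interaction (and hence communication) comes in, which is why pairwise communication is the efficiency metric the abstract highlights.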

Abstract:

We present the first protocols for private information retrieval that allow fast (sublinear-time) database lookups without increasing the server-side storage requirements. To achieve these efficiency goals, our protocols work in an offline/online model. In an offline phase, which takes place before the client has decided which database bit it wants to read, the client fetches a short string from the servers. In a subsequent online phase, the client can privately retrieve its desired bit of the database by making a second query to the servers. By pushing the bulk of the server-side computation into the offline phase (which is independent of the client's query), our protocols allow the online phase to complete very quickly—in time sublinear in the size of the database. Finally, we prove that our protocols are optimal in terms of the trade-off they achieve between communication and running time. Joint work with Henry Corrigan-Gibbs.
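The offline/online structure can be illustrated with a toy two-server sketch in Python, loosely modeled on the punctured-set idea (parameters and the puncturing step are simplified for illustration and do not achieve the paper's privacy or efficiency bounds):

```python
import random

random.seed(0)                                  # deterministic toy run
db = [random.randrange(2) for _ in range(64)]   # public 64-bit database
n = len(db)

# Offline: the client streams random half-size sets to server 1 and
# stores each set together with its parity (the "hint").
hints = []
for _ in range(200):
    s = set(random.sample(range(n), n // 2))
    hints.append((s, sum(db[j] for j in s) % 2))  # computed by server 1

# Online: to read bit i, find a stored set containing i and ask server 2
# for the parity of the set with i removed. (The real scheme re-randomizes
# this punctured set; sending it directly is NOT fully private.)
i = 42
recovered = None
for s, parity in hints:
    if i in s:
        answer = sum(db[j] for j in s - {i}) % 2  # server 2's reply
        recovered = (parity - answer) % 2
        break

print(recovered == db[i])   # True: some stored set contains i w.h.p.
```

The online work is a single set lookup and one small query, independent of a full database scan, which is the source of the sublinear online time.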

Abstract: This talk will focus on differentially private synthetic data---a privatized version of the dataset that consists of fake data records and that approximates the real dataset on important statistical properties of interest. I will present our recent results on private synthetic data that leverage practical optimization heuristics to circumvent the computational bottleneck in existing work. Our techniques are motivated by a modular, game-theoretic framework, which can flexibly work with methods such as integer program solvers and deep generative models.

Abstract: We present a new general framework for constructing compact and adaptively secure attribute-based encryption (ABE) schemes from k-Lin in asymmetric bilinear pairing groups. Previously, the only construction [Kowalczyk and Wee, Eurocrypt '19] that simultaneously achieves compactness and adaptive security from static assumptions supports policies represented by Boolean formulae. Our framework enables supporting more expressive policies represented by arithmetic branching programs.

Our framework extends to ABE for policies represented by uniform models of computation such as Turing machines. Such policies enjoy the feature of being applicable to attributes of arbitrary lengths. We obtain the first compact adaptively secure ABE for deterministic and non-deterministic finite automata (DFA and NFA) from k-Lin, previously unknown from any static assumptions. Beyond finite automata, we obtain the first ABE for large classes of uniform computation, captured by deterministic and non-deterministic logspace Turing machines (the complexity classes L and NL), based on k-Lin. Our ABE scheme has compact secret keys of size linear in the description size of the Turing machine M. The ciphertext size grows linearly in the input length, but also linearly in the time complexity, and exponentially in the space complexity. Irrespective of compactness, we stress that our scheme is the first that supports large classes of Turing machines based solely on standard assumptions. In comparison, previous ABE schemes for general Turing machines all rely on strong primitives related to indistinguishability obfuscation.

This talk is based on two recent works [Lin and Luo, Eurocrypt '20; Lin and Luo, Asiacrypt '21], with a focus on the former. I will present the framework for compact ABE, instantiated for ABP and DFA. Time permitting, I will discuss the ideas for succinct ABE.

Abstract:

With the rapid growth of the internet, many optimization problems in recent big-data applications often exceed the capacity of traditional computing platforms. My goal is to lay the theoretical foundation of big-data analysis by designing algorithms on modern computing platforms to solve large-scale optimization problems under various resource constraints, with an emphasis on time efficiency and simplicity.

The datasets in most applications originate from, or can be modeled as, graphs, and this thesis focuses on solving graph problems. Usually the graph is too large to store in a single machine's memory, resulting in very slow random access to the data. To capture this, one important computational model is streaming: we want to solve the problem using very limited space while minimizing the number of passes over the entire data. Another framework for solving large-scale graph problems is to store the graph in a distributed manner and use the computational power of the distributed processors; this is called parallel computation. These two areas continue to be very active, partly due to the success of the cloud computing platforms from Google, Microsoft, Amazon, and others.

We obtain faster and simpler algorithms in both the streaming and parallel computation models for graph problems such as connectivity, bipartite matching, and maxflow. In particular, we solved three open problems. The first: given a graph of n vertices and m edges, does there exist a semi-streaming (nearly-n-space) algorithm that computes a maximum-weight bipartite matching in o(n) passes? We answer this question positively by giving an algorithm that runs in roughly square-root-of-m passes. The algorithm relies on space-efficient versions of the interior point method, Laplacian system solving, and the generalized isolation lemma, none of which were known before this work. The second problem is to break the square-root-of-n depth barrier for computing maxflow in parallel; this barrier also exists for digraph reachability, matching, and other fundamental problems. Our solution is approximate, but runs in only n^{1/3} depth for unit-capacity graphs. The third problem is to give an o(log n)-depth parallel algorithm for connectivity on a PRAM (parallel random access machine). All previous PRAM algorithms use at least log n depth. Our algorithm runs in O(log d + loglog n) depth, where d is the diameter, which breaks the log n barrier for small-diameter graphs. Before our work, the only such result was an MPC (massively parallel computing, a model much stronger than PRAM) algorithm, and it was very complicated. Our algorithm is simpler and more practical, albeit in a much weaker model.

We also design several extremely simple algorithms for connectivity and bipartite matching. For the connectivity problem, we give several elegant algorithms (about five lines of code) in the MPC model, with a state-of-the-art running time of O(log n) for undirected graphs. For the bipartite matching problem, we give a very simple streaming algorithm based on auctions that computes a (1-eps)-approximate maximum matching in O(eps^{-2}) passes and O(n) space, improving both the number of passes and the space over previous work; it also yields faster MPC algorithms for matching.
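For readers unfamiliar with the semi-streaming model referenced above, here is a textbook one-pass connectivity algorithm in Python using O(n) memory via union-find (a baseline illustration, not one of the algorithms from the thesis):

```python
# One-pass semi-streaming connectivity: the edge stream is scanned once,
# and only the O(n)-size union-find structure is kept in memory.
def connected_components(n, edge_stream):
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for u, v in edge_stream:                # single pass over the edges
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv                 # union the two components
    return len({find(x) for x in range(n)})

print(connected_components(6, [(0, 1), (1, 2), (3, 4)]))  # 3
```

Connectivity is easy in one pass; the thesis results concern harder problems (matching, maxflow) where bounding the number of passes is the whole difficulty.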

Abstract:

Answering multiple counting queries is one of the best-studied problems in differential privacy. Its goal is to output an approximation of the average $\frac{1}{n}\sum_{i=1}^n \overrightarrow{x}^{(i)}$ of vectors $\overrightarrow{x}^{(i)}\in [0,1]^k$, while preserving privacy with respect to each $\overrightarrow{x}^{(i)}$. We present an $(\epsilon,\delta)$-private mechanism with optimal $\ell_{\infty}$ error for most values of $\delta$. This result settles the conjecture of Steinke and Ullman [2020] for these values of $\delta$. Our algorithm adds independent noise of bounded magnitude to each of the k coordinates, while prior solutions relied on unbounded noise such as the Laplace and Gaussian mechanisms.
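For contrast, here is the standard per-coordinate Laplace baseline that the bounded-noise mechanism improves on (an illustrative sketch with made-up parameters, not the talk's algorithm):

```python
import math, random

def private_mean(rows, eps):
    """Average of vectors in [0,1]^k with per-coordinate Laplace noise."""
    n, k = len(rows), len(rows[0])
    # One person changes the average by at most 1/n per coordinate, so the
    # L1 sensitivity over all k coordinates is k/n; scale = sensitivity/eps.
    scale = k / (n * eps)

    def lap():
        u = random.random() - 0.5            # Laplace via inverse CDF
        return -math.copysign(scale * math.log(1 - 2 * abs(u)), u)

    return [sum(r[j] for r in rows) / n + lap() for j in range(k)]

random.seed(1)
out = private_mean([[0.2, 0.9], [0.4, 0.5], [0.6, 0.1]], eps=1.0)
print(len(out))   # 2
```

The Laplace draws here are unbounded, so the worst-case $\ell_\infty$ error is unbounded too; the point of the talk's mechanism is to achieve optimal $\ell_\infty$ error with noise of bounded magnitude.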

Joint work with Gil Kur.

Abstract:

In this work, we study whether a non-private learning algorithm can be made private by relying on an instance-encoding mechanism that modifies the training inputs before feeding them to a normal learner. We formalize the notion of instance encoding and its privacy by providing attack models. We first prove impossibility results for achieving these notions of privacy. Next, we demonstrate an attack on InstaHide, a recent proposal by Huang, Song, Li and Arora [ICML'20] that aims to use instance encoding for privacy.

Lockable Signatures for Blockchains: Scriptless Scripts for All Signatures

Sri Aravinda Krishnan Thyagarajan

Abstract:

Payment Channel Networks (PCNs) have given a huge boost to the scalability of blockchain-based cryptocurrencies: beyond improving the transaction rate, PCNs enable cheap cross-currency payments and atomic swaps. However, current PCN proposals either heavily rely on special scripting features of the underlying blockchain (e.g., Hash Time Lock Contracts) or are tailored to a handful of digital signature schemes, such as Schnorr or ECDSA signatures. This leaves us in an unsatisfactory situation where many currencies that are being actively developed and use different signature schemes cannot enjoy the benefits of a PCN.
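To ground the discussion, here is a toy Python sketch of the scripting feature mentioned above, a Hash Time Lock Contract: funds can be claimed only by revealing a hash preimage before a timeout. Lockable signatures aim to remove the need for such scripts entirely (the class and parameters below are illustrative, not a real blockchain script):

```python
import hashlib

class HTLC:
    """Toy hash time lock: claimable with a preimage of h before timeout."""
    def __init__(self, h, timeout):
        self.h, self.timeout, self.claimed = h, timeout, False

    def claim(self, preimage, now):
        # Funds release only if the preimage hashes to h in time.
        if now < self.timeout and hashlib.sha256(preimage).digest() == self.h:
            self.claimed = True
        return self.claimed

secret = b"payment secret"
lock = HTLC(hashlib.sha256(secret).digest(), timeout=100)
print(lock.claim(b"wrong preimage", now=10))          # False
print(lock.claim(secret, now=10))                     # True: in time
print(HTLC(lock.h, timeout=5).claim(secret, now=10))  # False: too late
```

In multi-hop payments, revealing the preimage on one hop atomically unlocks the others; the catch is that the blockchain's script language must support this hash-and-timeout check, which many currencies do not.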

In this work, we investigate whether we can construct PCNs assuming only the minimal ability of a blockchain to verify a digital signature, for any signature scheme. In answering this question in the affirmative, we introduce the notion of lockable signatures, which constitutes the cornerstone of our PCN protocols. Our approach is generic and the PCN protocol is compatible with any digital signature scheme, thus inheriting all favorable properties of the underlying scheme that are not offered by Schnorr/ECDSA (e.g., aggregatable signatures or post-quantum security).

While the usage of generic cryptographic machinery makes our generic protocol impractical, we view it as an important feasibility result, as it may serve as the basis for constructing optimized protocols for specific signature schemes. To substantiate this claim, we design a highly efficient PCN protocol for the special case of Boneh-Lynn-Shacham (BLS) signatures. BLS signatures enjoy many unique features that make them a viable candidate for a blockchain, e.g., they are short, unique, and aggregatable. Yet, prior to our work, no PCN was known to be compatible with them (without requiring an advanced scripting language). The cost of our PCN is dominated by a handful of calls to the BLS algorithms. Our concrete evaluation of these basic operations shows that users with commodity hardware can process payments with minimal overhead.