Workshop on Leakage, Tampering and Viruses
2-6 June 2013, Warsaw, Poland





     ABSTRACTS


Yevgeniy Dodis Key Derivation Without Entropy Waste 

We revisit the classical question of converting an imperfect source X
of min-entropy k into a usable m-bit cryptographic key for some
underlying application P. If P has security delta (against some class
of attackers) with a uniformly random m-bit key, we seek to design a
key derivation function (KDF) h that allows us to use R=h(X) as the
key for P and results in comparable security delta' close to delta.
Seeded randomness extractors provide a generic way to solve this
problem provided that k > m + 2*log(1/delta), and this lower bound on
k (called "RT-bound") is known to be tight in general. Unfortunately,
in many situations the "waste" of 2*log(1/delta) bits of entropy is
significant, motivating the question of designing KDFs with less waste
for important special classes of sources X or applications P. I will
discuss several positive and negative results in this regard.

The most surprising of them will be a positive result for all
unpredictability applications P, yielding a provably secure KDF with
entropy "waste" only loglog(1/delta) - an exponential improvement over
the RT-bound.
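
To get a sense of scale (simple arithmetic on the bounds quoted
above): for delta = 2^-80, the RT-bound forces a waste of
2*log(1/delta) = 160 bits of entropy, whereas a waste of
loglog(1/delta) = log(80), roughly 6.3 bits, is essentially
negligible.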


Abhishek Jain What Information is Leaked Under Concurrent Composition?

Traditional security notions for cryptographic protocols only promise
security if a  single protocol is executed in a "closed
environment." Today's world, however, is driven by networks -- the
most important example being the Internet. In such an environment,
several protocol instances may be executed concurrently, and an
adversary may be able to perform coordinated attacks by corrupting
parties across various protocol sessions.

Over the last decade, a tremendous amount of effort has been made to
obtain protocols that remain secure even under concurrent execution.
Nevertheless, designing protocols that guarantee strong and meaningful
security, without any trust assumptions, remains a challenging
problem. In this talk, I will describe a new approach (inspired by the
regime of leakage-resilient cryptography) to precisely quantify the
amount of information that an adversary can learn by performing
concurrent attacks. In particular, I will show (1) how positive
results in leakage-resilient cryptography can be used to lower bound
the amount of information loss in the concurrent setting, and then (2)
how the classic set-covering problem can be used to guarantee that in a
concurrent setting, standard security of most of the protocol sessions
remains intact, without any trust assumptions.

Joint work with Vipul Goyal and Divya Gupta, to appear at CRYPTO'13


Yu Yu DPA attacks on small embedded devices and the applications of unpredictability pseudo-entropy

In the first half of this talk, we report on successful side-channel
attacks against several (old but still deployed) implementations of
the COMP128-1 algorithm. Such attacks are able to recover
cryptographic keys with limited time and data, by measuring the power
consumption of the devices manipulating them, hence allowing card
cloning and communications eavesdropping. This study allows us to put
forward the long-term issues raised by the deployment of cryptographic
implementations. It provides a motivation for improving the physical
security of small embedded devices early in their development. We also
use it to argue that public standards for cryptographic algorithms and
transparent physical security evaluation methodologies are important
tools for this purpose. This is joint work with Yuanyuan Zhou,
Francois-Xavier Standaert and Jean-Jacques Quisquater. In the second
half of the talk, we show an application of unpredictability pseudo-entropy
(a useful notion in leakage-resilient cryptography) to the problem of
constructing pseudo-random generators from regular one-way functions.
For any known-regular one-way function (on $n$-bit inputs) that is
known to be $\eps$-hard to invert, we give a neat (and tighter) proof
for the folklore construction of a pseudorandom generator of seed length
$\Theta(n)$ by making a single call to the underlying one-way
function. For any unknown-regular one-way function with known
$\eps$-hardness, we give a new construction with seed length
$\Theta(n)$ and $O(n/\log{(1/\eps)})$ calls. Here the number of calls
is also optimal, matching the lower bounds of Holenstein and Sinha
[FOCS 2012].
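
To make the call complexity concrete (direct instantiations of the
bound above): for exponential hardness $\eps = 2^{-\Omega(n)}$ the
construction makes $O(n/\log{(1/\eps)}) = O(1)$ calls, while for
polynomial hardness $\eps = n^{-c}$ (constant $c$) it makes
$O(n/\log n)$ calls.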


Yevgeniy Vahlis EyeDecrypt --- Hiding Information in Plain Sight

We introduce EyeDecrypt, a novel content-privacy technology that
allows only legitimate users to visualize data being displayed on
public-view rendering devices, such as electronic displays or printed
surfaces.  The data can be viewed on a closely-held personal device,
such as a pair of smart glasses with a camera and heads-up display, or
a smartphone. The decrypted data are displayed as an image overlay on
the personal device---a form of augmented reality.

The technology consists of two main components: a visualizable
encryption scheme and a dataglyphs-based visual encoding scheme for
the ciphertexts generated by the encryption scheme. We describe all
aspects of EyeDecrypt, from security definitions, constructions and
formal analyses, to implementation details of a prototype developed on
a smartphone.


Hoeteck Wee Public Key Encryption Against Related Key Attacks

We present efficient public-key encryption schemes resilient against
linear related key attacks (RKA) under standard assumptions and in the
standard model. Specifically, we obtain encryption schemes based on
hardness of factoring, BDDH and LWE that remain secure even against an
adversary that may query the decryption oracle on linear shifts of the
actual secret key. Moreover, the ciphertext overhead is only an
additive constant number of group elements.


Moni Naor Zero-Knowledge and Secure Computation of Physical Properties 

Is it possible to prove that two DNA-fingerprints match, or that they
do not match, without revealing any further information about the
fingerprints? Is it possible to prove that two warheads have the same
design without revealing the design itself?

Zero-knowledge is a familiar and well-developed concept in the digital
domain. As we know, under reasonable cryptographic assumptions, any
statement that can be proved can also be proved with zero-knowledge.
But zero-knowledge is not as well-understood in the context of
physical problems: proving that a set of objects has a particular
physical property.

In this talk I will describe recent work regarding Zero-Knowledge and
Secure Computation of Physical Properties. In particular I will refer
to the above-mentioned problems and the work of Glaser, Barak, and
Goldston for arms treaty verification.

Joint work with Ben Fisch and Daniel Freund


Tomasz Kazana One-Time Programs with Limited Memory

We reinvestigate a notion of one-time programs introduced in the
CRYPTO'08 paper by Goldwasser et al. A one-time program is a device
containing a program C, with the property that the program C can be
executed on at most one input. Goldwasser et al. show how to implement
one-time programs on devices equipped with special hardware-gadgets
called one-time memory tokens. We provide an alternative construction
that does not rely on the hardware gadgets. Instead, it is based on
the following assumptions: (1) the total amount of data that can leak
from the device is bounded, and (2) the total memory on the device
(available both to the honest user and to the attacker) is also
restricted, which is essentially the model used recently by
Dziembowski et al. (TCC 2011, CRYPTO 2011) to construct one-time
computable pseudorandom functions and key-evolution schemes.

joint work with Konrad Durnoga, Stefan Dziembowski  and Michal Zajac 


Edoardo Persichetti Code-based public-key encryption resistant to key leakage

Side-channel attacks are a major issue for implementation of secure
cryptographic schemes. Among these, key-leakage attacks describe a
scenario in which an adversary is allowed to learn arbitrary
information about the private key, the only constraint being the number of
bits learned. In this work, we study key-leakage resilience according
to the model presented by Akavia, Goldwasser and Vaikuntanathan at TCC
’09. As our main contribution, we present a code-based hash proof
system; we obtain our construction by relaxing some of the
requirements from the original definition of Cramer and Shoup. We then
propose a leakage-resilient public-key encryption scheme that makes
use of this hash proof system. To do so, we adapt a framework featured
in a previous work by Alwen et al. regarding identity-based encryption
(EUROCRYPT ’10). Our construction features error-correcting codes as a
technical tool, and, as opposed to previous work, does not require the
use of a randomness extractor.


Stefan Mangard Bounding the Side-Channel Leakage of Security Tokens

The talk first provides an overview of the architecture of a typical
security token, such as a smart card. Based on a discussion of various
attacks on the components of the token, we elaborate on the question
to what degree it is possible to bound side-channel leakage in
practice. We discuss this question for two scenarios. The first
scenario is authentication with a fixed key. The second scenario
is communication based on a session key. We finally present the
cryptographic part of the CIPURSE protocol which has been developed
with the goal of minimizing side-channel leakage and which is used
as a standard by the industry consortium OSPT Alliance.


Vladimir Kolesnikov MAC Precomputation with Applications to Secure Memory

We present ShMAC (Shallow MAC), a fixed input length message
authentication code that performs most of the computation prior to
the availability of the message. Specifically, ShMAC's
message-dependent computation is much faster and smaller in hardware
than the evaluation of a pseudorandom permutation (PRP), and can be
implemented by a small shallow circuit, while its precomputation
consists of one PRP evaluation. A main building block for ShMAC is
the notion of strong differential uniformity (SDU), which we
introduce, and which may be of independent interest. We show an
efficient SDU construction built from previously considered
differentially uniform functions. Our motivating application is a
system architecture where a hardware-secured processor uses memory
controlled by an adversary.

joint work with Juan A. Garay and Rae McLellan


Leo Reyzin Computational Fuzzy Extractors 

Fuzzy extractors derive strong keys from noisy sources. Their
security is defined information-theoretically, which limits the length
of the derived key, sometimes making it too short to be useful. We ask
whether it is possible to obtain longer keys by considering
computational security, and show the following.
  • Negative Result: Noise tolerance in fuzzy extractors is usually
    achieved using an information-reconciliation component called "secure
    sketch." The security of this component, which directly affects the
    length of the resulting key, is subject to lower bounds from coding
    theory. We show that, even when defined computationally, secure
    sketches are still subject to lower bounds from coding theory.
    Specifically, we consider two computational relaxations of the
    information-theoretic security requirement of secure sketches, using
    conditional HILL entropy and unpredictability entropy. For both cases
    we show that computational secure sketches cannot outperform the best
    information-theoretic secure sketches in the case of high-entropy
    Hamming metric sources.

  • Positive Result: We show that the negative result can be overcome by
    analyzing computational fuzzy extractors directly. Namely, we show
    how to build a computational fuzzy extractor whose output key length
    equals the entropy of the source (this is impossible in the
    information-theoretic setting); a rough sketch follows below. Our
    construction is based on the hardness of the Learning with Errors
    (LWE) problem, and is secure when the noisy source is uniform or
    symbol-fixing (that is, each dimension is either uniform or fixed).
    As part of the security proof, we show a result of independent
    interest, namely that the decision version of LWE is secure when a
    small number of dimensions have no error.
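
Informally, and as we recall the construction (the paper has the exact
algorithms and parameters), the LWE-based scheme has roughly the
following shape:

    Gen(w):     sample a matrix A and secret s; publish P = (A, A*s + w)
                and derive the output key from s.
    Rep(w', P): compute (A*s + w) - w' = A*s + (w - w'), an LWE instance
                whose error is the small difference w - w', and decode s.
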
joint work with Benjamin Fuller and Xianrui Meng


Elette Boyle Secure Computation Against Adaptive Auxiliary Information

We study the problem of secure multiparty computation (MPC) in a
setting where a cheating polynomial-time adversary can corrupt an
arbitrary subset of parties and, in addition, learn arbitrary
auxiliary information on the entire states of all honest parties
(including their inputs and random coins), in an adaptive manner,
throughout the protocol execution. We formalize a definition of
multiparty computation secure against adaptive auxiliary information
(AI-MPC), that intuitively guarantees that such an adversary learns no
more than the function output and the adaptive auxiliary information.

In particular, if the auxiliary information contains only partial,
``noisy,'' or computationally uninvertible information on secret
inputs, then only such information should be revealed. Our
definition is a natural generalization of the standard security notion
for MPC, where the adversary is restricted to (static) auxiliary
information on the inputs of the honest parties prior to the protocol
execution.

We construct a universally composable AI-MPC protocol that realizes
any (efficiently computable) functionality against malicious
adversaries in the common reference string (CRS) model, based on the
linear assumption over bilinear groups and the n-th residuosity
assumption. Our protocol tolerates an arbitrary number of corruptions,
and applies to both the two-party setting as well as the multiparty
setting. Our result has interesting applications to the regime of
leakage-resilient cryptography; indeed, it is already used as
an essential tool for constructing leakage-resilient MPC protocols in
the leak-free preprocessing model [Boyle et al. STOC'12].

joint work with Sanjam Garg, Abhishek Jain, Yael Tauman Kalai, 
and Amit Sahai. 


Alon Rosen Pseudorandom Functions and Lattices

We give direct constructions of pseudorandom function (PRF) families
based on conjectured hard lattice problems and learning problems. Our
constructions are asymptotically efficient and highly parallelizable
in a practical sense, i.e., they can be computed by simple, relatively
small low-depth arithmetic or boolean circuits (e.g., in NC$^{1}$ or 
even TC$^{0}$). In addition, they are the first low-depth
PRFs that have no known attack by efficient quantum algorithms.
Central to our results is a new ``derandomization'' technique for the
learning with errors (LWE) problem which, in effect, generates the
error terms deterministically.
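
As a hint of the shape of these constructions, the direct degree-$k$
PRF, as we recall it from the paper (so the exact indexing should be
checked there), has key $a \in \mathbb{Z}_q^n$ and matrices
$S_1, \ldots, S_k \in \mathbb{Z}_q^{n \times n}$, and is defined as

    $F_{a,S_1,\ldots,S_k}(x_1 \cdots x_k) = \lfloor a^{T} \cdot \prod_{i=1}^{k} S_i^{x_i} \rceil_p$,

where $\lfloor \cdot \rceil_p$ rounds each entry from $\mathbb{Z}_q$
down to $\mathbb{Z}_p$; this deterministic rounding replaces the
random LWE error terms.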

Joint work with Abhishek Banerjee and Chris Peikert.


Maciej Obremski Non-Malleable Codes and Two-Source Extractors 

We construct an efficient information-theoretically non-malleable
code in the split-state model for one-bit messages.  Non-malleable
codes were introduced recently by Dziembowski, Pietrzak and Wichs (ICS
2010), as a general tool for storing messages securely on hardware
that can be subject to tampering attacks.   Informally,  a code $(Enc
: \cM \rightarrow \cL \times \cR, Dec : \cL \times \cR \rightarrow
\cM)$ is non-malleable in the split-state model  if any
adversary, by manipulating independently $L$ and $R$ (where
$(L,R)$ is an encoding of some message $M$), cannot obtain an encoding
of a message $M'$ that is not equal to $M$ but is ``related'' to $M$ in
some way.  Until now it was unknown how to construct an
information-theoretically secure code with such a property, even for
$\cM = \bin$.  Our construction solves this problem.  Additionally, it
is leakage-resilient, and the amount of leakage that we can tolerate
can be an arbitrary  fraction $\xi < 1/4$ of the length of the
codeword.  Our code is based on the inner-product two-source
extractor, but in general it can be instantiated by any two-source
extractor that has large output and has the property of being flexible,
which is a new notion that we define.
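
As a toy illustration of the split-state syntax and of the
inner-product idea only (this is our own sketch, not the paper's
actual construction, whose non-malleability analysis is far more
involved), one can encode a bit as a pair of vectors that are stored,
and tampered with, independently:

    # Toy split-state encoding of one bit via an inner product over GF(P).
    # Illustrative only; the provably non-malleable code of the paper is
    # substantially more sophisticated.
    import secrets

    P, N = 2**31 - 1, 8  # prime modulus and vector length (toy parameters)

    def inner(L, R):
        return sum(l * r for l, r in zip(L, R)) % P

    def enc(bit):
        # sample L (nonzero entries) and R freely, then fix R's last
        # coordinate so that the inner product encodes the bit
        L = [secrets.randbelow(P - 1) + 1 for _ in range(N)]
        R = [secrets.randbelow(P) for _ in range(N)]
        target = 0 if bit == 0 else secrets.randbelow(P - 1) + 1
        partial = sum(l * r for l, r in zip(L[:-1], R[:-1])) % P
        R[-1] = (target - partial) * pow(L[-1], -1, P) % P
        return L, R  # the halves L and R live in separate states

    def dec(L, R):
        return int(inner(L, R) != 0)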

We also show that the non-malleable codes for one-bit messages have an
equivalent, perhaps simpler characterization, namely such codes can be
defined as follows:  if $M$ is chosen uniformly from $\bin$ then the
probability (in the experiment described above) that the output
message $M'$ is not equal to $M$ can be at most $1/2 + \epsilon$.

joint work with Stefan Dziembowski and Tomasz Kazana


Stefan Dziembowski Proofs of Space and a Greener Bitcoin

Proofs of work (PoW) have been suggested by Dwork and Naor (Crypto’92)
as protection to a shared resource. The basic idea is to ask the
service requestor to dedicate some non-trivial amount of computational
work to every request. The original applications included prevention
of spam and protection against denial-of-service attacks; more
recently, PoW have been used to prevent double spending in the Bitcoin
digital currency system.

In this work we put forward the concept of proofs of space (PoS),
where a service requestor must dedicate a significant amount of disk
space as opposed to computation. We give constructions of PoS schemes
in the random oracle model.

We propose PoS as an alternative to PoW for schemes such as Bitcoin.
Currently, to avoid double spending, the userbase of Bitcoin must
dedicate more computational power than a potential adversary could
spend in every (less than one hour) time frame. This is expensive and
thus hard to incentivise.

In contrast, PoS only require users to dedicate disk space that they
have no other use for at the moment. This space must be initialized once
with a sorted list of random hash-values (which can be locally
sampled), and participating in the proof only requires a lookup in
this sorted list.
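
The initialize-once-then-lookup flavor can be seen in the following
toy sketch (ours; the names, parameters and the closeness check are
illustrative assumptions, not the paper's actual protocol):

    import bisect, hashlib

    def H(x: bytes) -> int:
        return int.from_bytes(hashlib.sha256(x).digest(), 'big')

    def init(key: bytes, n: int):
        # one-time initialization: n locally sampled hash values, kept sorted
        return sorted(H(key + i.to_bytes(8, 'big')) for i in range(n))

    def respond(store, challenge: int):
        # participating in a proof is just one lookup: return the stored
        # value closest to the challenge (the verifier checks that it is
        # close enough, which is hard to achieve without storing the list)
        j = bisect.bisect_left(store, challenge)
        return min(store[max(0, j - 1):j + 1], key=lambda v: abs(v - challenge))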

joint work with Sebastian Faust and Krzysztof Pietrzak


François-Xavier Standaert A survey of physical assumptions in leakage-resilience

Starting with concrete examples of leakage functions and DPA attacks, I
will survey a number of assumptions that have been used to prove the
leakage-resilience of cryptographic primitives, and discuss their
practical relevance. In particular, I will focus on the
informativeness of the leakage function, its computational complexity
and the assumption of independent leakage. I will then argue that some
of these assumptions are difficult to fulfill by hardware engineers,
and introduce an alternative assumption of simulatable leakage for
block ciphers, that is empirically verifiable and allows proving the
leakage-resilience of efficient symmetric cryptographic constructions.



Olivier Pereira Leakage-resilient cryptography under empirically verifiable assumptions

Leakage-resilient cryptography aims at formally proving the security
of cryptographic implementations against large classes of side-channel
adversaries. One important challenge for such an approach to be
relevant is to adequately connect the formal models used in the proofs
with the practice of side-channel attacks. It raises the fundamental
problem of finding reasonable restrictions of the leakage functions
that can be empirically verified by evaluation laboratories. In this
paper, we introduce a new, realistic and empirically verifiable
assumption of simulatable leakage, under which security proofs in the
standard model can be obtained. We finally illustrate our claims by
analyzing the physical security of an efficient pseudorandom generator
(for which security could only be proven under a random oracle based
assumption so far). These positive results come at the cost of
(algorithm-level) specialization, as our new assumption is
specifically defined for block ciphers. Nevertheless, since block
ciphers are the main building block of many leakage-resilient
cryptographic primitives, our results also open the way towards more
realistic constructions and proofs for other pseudorandom objects.

joint work with François-Xavier Standaert and Yu Yu


Daniele Venturi On the connection between leakage tolerance and adaptive security

We revisit the context of leakage-tolerant interactive protocols as
defined by Bitansky, Canetti and Halevi (TCC 2012). Our contributions
can be summarized as follows:
    1. For the purpose of secure message transmission, any encryption
      protocol with message space $\cM$ and secret key space $\cSK$
      tolerating poly-logarithmic leakage on the secret state of the
      receiver must satisfy $|\cSK| \ge (1-\epsilon)|\cM|$, for every $0 <
      \epsilon \le 1$, and if $|\cSK| = |\cM|$, then the scheme must use a
      fresh key pair to encrypt each message.

    2. More generally, we show that any $n$-party protocol tolerates
      leakage of poly-logarithmically many bits (in the security
      parameter) from one party at the end of
      the protocol execution, if and only if the protocol has passive
      adaptive security against an adaptive corruption of one party at the
      end of the protocol execution. This shows that as soon as a little
      leakage is tolerated, one needs full adaptive security.

    3. In case more than one party can be corrupted, we get that leakage
      tolerance is equivalent to a weaker form of adaptivity, which we call
      semi-adaptivity. Roughly, a protocol has semi-adaptive
      security if there exist a simulator which can simulate the internal
      state of corrupted parties, however, such a state is not required to
      be indistinguishable from a real state, only that it would have lead
      to the simulated communication.
All our results can be based solely on the assumption that
collision-resistant function ensembles exist.

joint work with Jesper Buus Nielsen and Angela Zottarel


Joel Alwen Learning with Rounding, Revisited: New Reduction, Properties and Applications

The learning with rounding (LWR) problem, introduced by Banerjee,
Peikert and Rosen at EUROCRYPT ’12, is a variant of learning with
errors (LWE), where one replaces random errors with deterministic
rounding. The LWR problem was shown to be as hard as LWE for a setting
of parameters where the modulus and modulus-to-error ratio are
super-polynomial. In this work we resolve the main open problem and
give a new reduction that works for a larger range of parameters,
allowing for a polynomial modulus and modulus-to-error ratio. In
particular, a smaller modulus gives us greater efficiency, and a smaller
modulus-to-error ratio gives us greater security, which now follows
from the worst-case hardness of GapSVP with polynomial (rather than
super-polynomial) approximation factors.

As a tool in the reduction, we show that there is a “lossy mode” for
the LWR problem, in which LWR samples only reveal partial information
about the secret. This property gives us several interesting new
applications, including a proof that LWR remains secure with weakly
random secrets of sufficient min-entropy, and very simple constructions
of deterministic encryption, lossy trapdoor functions and reusable
extractors. Our approach is inspired by a technique of Goldwasser et
al. from ICS ’10, which implicitly showed the existence of a “lossy
mode” for LWE. By refining this technique, we also improve on the
parameters of that work to only requiring a polynomial (instead of
super-polynomial) modulus and modulus-to-error ratio.
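
To fix the notation in a toy example (ours; the dimension and moduli
are illustrative, not the paper's): where an LWE sample perturbs the
inner product <a,s> with a fresh random error, an LWR sample instead
rounds it deterministically from Z_q down to Z_p:

    import secrets

    n, q, p = 16, 2**13, 2**9  # toy dimension and moduli (q > p)

    def lwr_sample(s):
        # one LWR sample: (a, round((p/q) * <a,s>) mod p); the rounding
        # plays the role of the error and is deterministic in (a, s)
        a = [secrets.randbelow(q) for _ in range(n)]
        b = sum(ai * si for ai, si in zip(a, s)) % q
        return a, ((b * p + q // 2) // q) % p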

joint work with Stephan Krenn, Krzysztof Pietrzak, and Daniel Wichs


Antonio Faonio How to Authenticate From a Fully Compromised System

We propose an efficient identification scheme in the Bounded Retrieval
Model (BRM), based on standard cryptographic assumptions (RSA and
Factoring). We achieve this by first constructing an honest-verifier
(computational) zero-knowledge (HVZK) proof of storage (PoS) which,
roughly speaking, guarantees that one can efficiently verify the
integrity of remotely stored data without learning any information
about the data. We then provide a general methodology (i.e., a
compiler) that transforms any HVZK PoS into an identification scheme
in the BRM. Furthermore, we provide a prototype implementation of our
scheme and show that it is indeed efficient and deployable. This work
was submitted to CCS 2013.

Joint work with Giuseppe Ateniese, Seny Kamara and Jonathan Katz.


Daniel Masny Man-in-the-Middle Secure Authentication Schemes from Weak PRFs

The talk will be about constructing a 3-round symmetric-key
authentication protocol based upon weak-PRFs that is secure against
man-in-the-middle attacks. Almost the same construction can be used for
the more general class of randomized weak-PRFs, including functions based
upon the classical LPN problem as well as its variants, for example
Toeplitz-LPN and Ring-LPN. The construction is very simple and
efficient, making it very competitive with actively secure
schemes based upon similar assumptions.


Maciej Skórski Some problems in computational entropy

In the talk we will give a short overview of computational
generalizations of the notions of entropy. Focusing on the most
commonly used definitions, we will address two specific problems in
computational entropy:

  1. the so-called "chain rule" - an estimate on the amount of entropy
    left after some leakage. Intuitively, small leakage shouldn't affect
    the pseudorandomness much. Can we formalize this intuition for
    computational entropy? (A common template for such a rule is
    sketched after this list.)

  2. some surprisingly deep connections between computational entropy
    and geometry. We will see that the notion of computational entropy
    is much more geometric than it might appear. What does it have to do
    with derandomization?
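
As a reference point for problem 1, such chain rules usually take the
following form (our informal rendering of statements in the
literature, not a precise theorem from the talk): for leakage $Z$ of
length $\lambda$ bits,

    $H^{HILL}_{\eps',s'}(X \mid Z) \ge H^{HILL}_{\eps,s}(X) - \lambda$,

where the quality parameters $(\eps', s')$ of the conditional entropy
degrade with $\lambda$; how much they must degrade is exactly the kind
of question addressed here.
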

Konrad Durnoga On non-malleable extractors and computing discrete logarithms in bulk

We give an unconditional construction of a non-malleable extractor
improving the solution from the recent paper Privacy Amplification and
Non-Malleable Extractors via Character Sums by Dodis et al. There, the
authors provide the first explicit example of a non-malleable
extractor -- a cryptographic primitive that significantly strengthens
the notion of a classical randomness extractor. In order to make the
extractor robust, so that it runs in polynomial time and outputs a
linear number of bits, they rely on a certain conjecture on the least
prime in a residue class. In the talk we present a modification of
their construction that allows us to remove that dependency. As an
auxiliary result, which can be of independent interest, we show an
efficiently computable bijection between any order-M subgroup of the
multiplicative group of a finite field and the set of integers modulo M,
under the assumption that M is a smooth number. Also, we provide a
version of Shanks' baby-step giant-step method for solving multiple
instances of the discrete logarithm problem in the multiplicative
group of a prime field. It performs better than the generic algorithm
when run on a machine without constant-time access to each memory
cell, e.g., on a classical Turing machine.
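
For reference, a compact rendition of the classical single-instance
baby-step giant-step method (the talk's multi-instance variant and its
memory-access analysis are not reproduced here):

    from math import isqrt

    def bsgs(g, h, p):
        """Find x with g**x == h (mod p), p prime, or return None."""
        m = isqrt(p - 1) + 1
        baby = {}                 # baby steps: g^j -> j, for j < m
        gj = 1
        for j in range(m):
            baby.setdefault(gj, j)
            gj = gj * g % p
        c = pow(g, -m, p)         # giant-step factor g^(-m) mod p
        y = h % p
        for i in range(m):        # look for h * g^(-i*m) among baby steps
            if y in baby:
                return i * m + baby[y]
            y = y * c % p
        return None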

Joint work with Bartosz Źrałek.


Krzysztof Pietrzak How to fake auxiliary input and applications

We show that for any joint distribution (X,A) and any family F of
distinguishers, e.g. polynomial-size circuits, there exists an
efficient (deterministic) simulator h such that F cannot distinguish
(X,A) from (X,h(X)), i.e. for all f in F we have
|E[f(X,A)] - E[f(X,h(X))]| < eps.

We'll discuss several applications including leakage-resilience and
zero-knowledge.


Sebastian Faust  Efficient Leakage Resilient Symmetric Cryptography

Leakage resilient cryptography attempts to incorporate side-channel
leakage into the black-box security model and designs cryptographic
schemes that are provably secure within it. Informally, a scheme is
leakage-resilient if it remains secure even if an adversary learns a
bounded amount of arbitrary information about the scheme's internal state.
Unfortunately, most leakage-resilient schemes are unnecessarily
complicated in order to achieve strong provable security guarantees.
As advocated by Yu et al. [CCS'10], this is mostly an artefact of the
security proof, and in practice much simpler constructions may already
suffice to protect against realistic side-channel attacks. In this paper,
we show that leakage-resilience can indeed be obtained for simpler
constructions when we aim for relaxed security notions in which the
leakage functions and/or the inputs to the primitive are chosen
non-adaptively. For example, we show that a three-round Feistel
network instantiated with a leakage-resilient PRF yields a
leakage-resilient PRP if the inputs are chosen non-adaptively (this
complements the result of Dodis and Pietrzak [CRYPTO'10], who show
that if adaptive queries are allowed, a super-logarithmic number of
rounds is necessary).
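
A minimal sketch of the three-round Feistel construction discussed
above (HMAC-SHA256 stands in for the leakage-resilient PRF purely for
illustration; block and key sizes are toy assumptions):

    import hmac, hashlib

    def prf(key: bytes, msg: bytes) -> bytes:
        return hmac.new(key, msg, hashlib.sha256).digest()[:16]

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def feistel3(keys, block: bytes) -> bytes:
        # three Feistel rounds: (L, R) -> (R, L xor PRF_k(R))
        assert len(keys) == 3 and len(block) == 32
        L, R = block[:16], block[16:]
        for k in keys:
            L, R = R, xor(L, prf(k, R))
        return L + R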

We also show that a minor variation of the classical GGM construction
gives a leakage-resilient PRF if both the leakage function and the
inputs are chosen non-adaptively.

Joint work with Krzysztof Pietrzak and Joachim Schipper


Rafael Pass On the (Im)Possibility of Tamper-Resilient Cryptography: Using Fourier Analysis in Computer Viruses

We initiate a study of the security of cryptographic primitives in the
presence of efficient tampering attacks on the randomness of honest
parties. More precisely, we consider p-tampering attackers that may
efficiently tamper with each bit of the honest parties' random tape
with probability p but have to do so in an ``online'' fashion. Our
main result is a strong negative result: We show that any secure
encryption scheme, bit commitment scheme, or zero-knowledge protocol
can be ``broken'' with probability $p$ by a $p$-tampering attacker.
The core of this result is a new Fourier analytic technique for
biasing the output of bounded-value functions, which may be of
independent interest.

We also show that this result cannot be extended to primitives such as
signature schemes and identification protocols: assuming the existence
of one-way functions, such primitives can be made resilient to
(1/\poly(n))-tampering attacks where $n$ is the security parameter.

joint work with Per Austrin, Kai-Min Chung, Mohammad Mahmoody and
Karn Seth


Adi Akavia Distributed Public Key Schemes Secure against Continual Leakage

We study distributed public key schemes secure against continual
memory leakage. In the distributed scheme the secret key is shared
between two computing devices communicating over a public channel, and
the decryption operation is computed by a simple 2-party protocol
between the devices. Similarly, the secret key shares are periodically
refreshed by a simple 2-party protocol executed in discrete time
periods throughout the lifetime of the system. The leakage adversary
can choose pairs, one per device, of polynomial-time computable
length-shrinking (or entropy-shrinking) functions, and receive the value of
the respective function on the internal state of the respective device
(namely, on its secret share, internal randomness, and results of
intermediate computation).

We present distributed public key encryption (DPKE) and distributed
identity based encryption (DIBE) schemes that are secure against
continual memory leakage, under the Bilinear Decisional Diffie-Hellman
and 2-Linear assumptions. Our schemes have the following properties:
    1. The DPKE and DIBE tolerate leakage at all times, including during
      refresh, where the leakage rate is (1/2-o(1),1) of the respective
      devices during refresh, and ((1-o(1)),1) at all other times (post key
      generation).

    2. The DIBE tolerates leakage from both the master secret key and 
      the identity based secret keys.

    3. The DPKE scheme is CCA2-secure against continual memory leakage.

Joint work with Shafi Goldwasser and Carmit Hazay.