Workshop on Leakage, Tampering and Viruses
2-6 June 2013, Warsaw, Poland

ABSTRACTS

Yevgeniy Dodis
Key Derivation Without Entropy Waste

We revisit the classical question of converting an imperfect source X of min-entropy k into a usable m-bit cryptographic key for some underlying application P. If P has security delta (against some class of attackers) with a uniformly random m-bit key, we seek to design a key derivation function (KDF) h that allows us to use R = h(X) as the key for P and results in comparable security delta' close to delta. Seeded randomness extractors provide a generic way to solve this problem provided that k > m + 2*log(1/delta), and this lower bound on k (called the "RT-bound") is known to be tight in general. Unfortunately, in many situations the "waste" of 2*log(1/delta) bits of entropy is significant, motivating the question of designing KDFs with less waste for important special classes of sources X or applications P. I will discuss several positive and negative results in this regard. The most surprising of them is a positive result for all unpredictability applications P, yielding a provably secure KDF with entropy "waste" of only loglog(1/delta) - an exponential improvement over the RT-bound.

Abhishek Jain
What Information is Leaked Under Concurrent Composition?

Traditional security notions for cryptographic protocols only promise security if a single protocol is executed in a "closed environment." Today's world, however, is driven by networks -- the most important example being the Internet. In such an environment, several protocol instances may be executed concurrently, and an adversary may be able to perform coordinated attacks by corrupting parties across various protocol sessions. Over the last decade, a tremendous amount of effort has been made to obtain protocols that remain secure even under concurrent execution. Nevertheless, designing protocols that guarantee strong and meaningful security, without any trust assumptions, remains a challenging problem.
In this talk, I will describe a new approach (inspired by the regime of leakage-resilient cryptography) to precisely quantify the amount of information that an adversary can learn by performing concurrent attacks. In particular, I will show (1) how positive results in leakage-resilient cryptography can be used to lower bound the amount of information loss in the concurrent setting, and then (2) how the classic set-covering problem can be used to guarantee that in a concurrent setting, the standard security of most of the protocol sessions remains intact, without any trust assumptions.

Joint work with Vipul Goyal and Divya Gupta, to appear at CRYPTO'13.

Yu Yu
DPA attacks on small embedded devices and the applications of unpredictability pseudo-entropy

In the first half of this talk, we report on successful side-channel attacks against several (old but still deployed) implementations of the COMP128-1 algorithm. Such attacks are able to recover cryptographic keys with limited time and data by measuring the power consumption of the devices manipulating them, hence allowing card cloning and communication eavesdropping. This study allows us to put forward the long-term issues raised by the deployment of cryptographic implementations. It provides a motivation for improving the physical security of small embedded devices early in their development. We also use it to argue that public standards for cryptographic algorithms and transparent physical security evaluation methodologies are important tools for this purpose. This is joint work with Yuanyuan Zhou, Francois-Xavier Standaert, and Jean-Jacques Quisquater.
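To give a flavor of such power-analysis attacks, here is a minimal correlation-based DPA simulation. Everything in it is an illustrative assumption -- a Hamming-weight leakage model with Gaussian noise and a bare XOR target -- and it is not specific to COMP128-1 or to the attack described in the talk:

```python
import random

HW = [bin(x).count("1") for x in range(256)]  # Hamming-weight table for bytes

def collect_traces(key, n, sigma, rng):
    """Simulate n power traces: each trace leaks HW(plaintext XOR key)
    plus Gaussian noise (an assumed, simplified leakage model)."""
    pts, traces = [], []
    for _ in range(n):
        p = rng.randrange(256)
        pts.append(p)
        traces.append(HW[p ^ key] + rng.gauss(0.0, sigma))
    return pts, traces

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)
    return cov / var ** 0.5

def dpa_recover(pts, traces):
    """Correlation DPA: the key guess whose predicted leakage correlates
    best with the measured traces is taken as the key."""
    return max(range(256),
               key=lambda g: pearson([HW[p ^ g] for p in pts], traces))

rng = random.Random(42)
key = 0x3C
pts, traces = collect_traces(key, 3000, 1.0, rng)
recovered = dpa_recover(pts, traces)
```

With a few thousand traces the correct key guess dominates the correlation ranking; real attacks target a nonlinear function of the key (e.g. an S-box output) to sharpen this separation.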
In the other half of the talk, we show an application of unpredictability pseudo-entropy (a useful notion in leakage-resilient cryptography) to the problem of constructing pseudorandom generators from regular one-way functions. For any known-regular one-way function (on $n$-bit inputs) that is known to be $\eps$-hard to invert, we give a neat (and tighter) proof for the folklore construction of a pseudorandom generator of seed length $\Theta(n)$ that makes a single call to the underlying one-way function. For any unknown-regular one-way function with known $\eps$-hardness, we give a new construction with seed length $\Theta(n)$ and $O(n/\log(1/\eps))$ calls. Here the number of calls is also optimal, matching the lower bounds of Holenstein and Sinha [FOCS 2012].

Yevgeniy Vahlis
EyeDecrypt --- Hiding Information in Plain Sight

We introduce EyeDecrypt, a novel content-privacy technology that allows only legitimate users to visualize data being displayed on public-view rendering devices, such as electronic displays or printed surfaces. The data can be viewed on a closely-held personal device, such as a pair of smart glasses with a camera and heads-up display, or a smartphone. The decrypted data are displayed as an image overlay on the personal device --- a form of augmented reality. The technology consists of two main components: a visualizable encryption scheme and a dataglyphs-based visual encoding scheme for the ciphertexts generated by the encryption scheme. We describe all aspects of EyeDecrypt, from security definitions, constructions and formal analyses, to implementation details of a prototype developed on a smartphone.

Hoeteck Wee
Public Key Encryption Against Related Key Attacks

We present efficient public-key encryption schemes resilient against linear related-key attacks (RKA) under standard assumptions and in the standard model.
Specifically, we obtain encryption schemes based on the hardness of factoring, BDDH and LWE that remain secure even against an adversary that may query the decryption oracle on linear shifts of the actual secret key. Moreover, the ciphertext overhead is only an additive constant number of group elements.

Moni Naor
Zero-Knowledge and Secure Computation of Physical Properties

Is it possible to prove that two DNA fingerprints match, or that they do not match, without revealing any further information about the fingerprints? Is it possible to prove that two warheads have the same design without revealing the design itself? Zero-knowledge is a familiar and well-developed concept in the digital domain. As we know, under reasonable cryptographic assumptions, any statement that can be proved can also be proved with zero-knowledge. But zero-knowledge is not as well understood in the context of physical problems: proving that a set of objects has a particular physical property. In this talk I will describe recent work regarding zero-knowledge and secure computation of physical properties. In particular I will refer to the above-mentioned problems and the work of Glaser, Barak, and Goldston for arms treaty verification.

Joint work with Ben Fisch and Daniel Freund.

Tomasz Kazana
One-Time Programs with Limited Memory

We reinvestigate the notion of one-time programs introduced in the CRYPTO'08 paper by Goldwasser et al. A one-time program is a device containing a program C, with the property that the program C can be executed on at most one input. Goldwasser et al. show how to implement one-time programs on devices equipped with special hardware gadgets called one-time memory tokens. We provide an alternative construction that does not rely on the hardware gadgets.
Instead, it is based on the following assumptions: (1) the total amount of data that can leak from the device is bounded, and (2) the total memory on the device (available both to the honest user and to the attacker) is also restricted, which is essentially the model used recently by Dziembowski et al. (TCC 2011, CRYPTO 2011) to construct one-time computable pseudorandom functions and key-evolution schemes.

Joint work with Konrad Durnoga, Stefan Dziembowski and Michal Zajac.

Edoardo Persichetti
Code-based public-key encryption resistant to key leakage

Side-channel attacks are a major issue for implementations of secure cryptographic schemes. Among these, key-leakage attacks describe a scenario in which an adversary is allowed to learn arbitrary information about the private key, the only constraint being the number of bits learned. In this work, we study key-leakage resilience according to the model presented by Akavia, Goldwasser and Vaikuntanathan at TCC '09. As our main contribution, we present a code-based hash proof system; we obtain our construction by relaxing some of the requirements from the original definition of Cramer and Shoup. We then propose a leakage-resilient public-key encryption scheme that makes use of this hash proof system. To do so, we adapt a framework featured in a previous work by Alwen et al. regarding identity-based encryption (EUROCRYPT '10). Our construction features error-correcting codes as a technical tool and, as opposed to previous work, does not require the use of a randomness extractor.

Stefan Mangard
Bounding the Side-Channel Leakage of Security Tokens

The talk first provides an overview of the architecture of a typical security token, such as a smart card. Based on a discussion of various attacks on the components of the token, we elaborate on the question of to what degree it is possible to bound side-channel leakage in practice. We discuss this question for two scenarios. The first scenario is authentication with a fixed key.
The second scenario is communication based on a session key. We finally present the cryptographic part of the CIPURSE protocol, which has been developed with the goal of minimizing side-channel leakage and which is used as a standard by the industry consortium OSPT Alliance.

Vladimir Kolesnikov
MAC Precomputation with Applications to Secure Memory

We present ShMAC (Shallow MAC), a fixed-input-length message authentication code that performs most of the computation prior to the availability of the message. Specifically, ShMAC's message-dependent computation is much faster and smaller in hardware than the evaluation of a pseudorandom permutation (PRP), and can be implemented by a small shallow circuit, while its precomputation consists of one PRP evaluation. A main building block for ShMAC is the notion of strong differential uniformity (SDU), which we introduce and which may be of independent interest. We show an efficient SDU construction built from previously considered differentially uniform functions. Our motivating application is a system architecture where a hardware-secured processor uses memory controlled by an adversary.

Joint work with Juan A. Garay and Rae McLellan.

Leo Reyzin
Computational Fuzzy Extractors

Fuzzy extractors derive strong keys from noisy sources. Their security is defined information-theoretically, which limits the length of the derived key, sometimes making it too short to be useful. We ask whether it is possible to obtain longer keys by considering computational security, and show the following.

Negative Result: Noise tolerance in fuzzy extractors is usually achieved using an information-reconciliation component called a "secure sketch." The security of this component, which directly affects the length of the resulting key, is subject to lower bounds from coding theory.
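For intuition, the classical information-theoretic secure sketch is the code-offset construction: publish the XOR of the noisy reading with a random codeword, and later use the code's error correction to undo the noise. A toy Python sketch over the Hamming metric, with a 3-fold repetition code (an illustrative choice only; it tolerates little noise and gives up most of the source's entropy):

```python
import random

R = 3  # repetition factor: each data bit is stored R times

def encode(bits):
    """Trivial repetition-code encoder."""
    return [b for b in bits for _ in range(R)]

def decode(bits):
    """Majority-vote decoder; corrects up to 1 flipped bit per R-bit block."""
    return [int(sum(bits[i * R:(i + 1) * R]) > R // 2)
            for i in range(len(bits) // R)]

def sketch(w, rng):
    """Code-offset sketch: publish s = w XOR c for a random codeword c."""
    c = encode([rng.randrange(2) for _ in range(len(w) // R)])
    return [wi ^ ci for wi, ci in zip(w, c)]

def recover(w_noisy, s):
    """Given a noisy re-reading w' and the sketch s: w' XOR s is a noisy
    codeword, so correct it with the code, then XOR s back on."""
    c = encode(decode([wi ^ si for wi, si in zip(w_noisy, s)]))
    return [ci ^ si for ci, si in zip(c, s)]

rng = random.Random(1)
w = [rng.randrange(2) for _ in range(30)]   # the original noisy-source reading
s = sketch(w, rng)                          # public helper data
w_noisy = list(w)
w_noisy[4] ^= 1; w_noisy[20] ^= 1           # noise in two separate blocks
```

The coding-theory lower bounds discussed in the abstract bite exactly here: how much entropy `s` gives away is governed by the quality of the underlying code.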
We show that, even when defined computationally, secure sketches are still subject to lower bounds from coding theory. Specifically, we consider two computational relaxations of the information-theoretic security requirement of secure sketches, using conditional HILL entropy and unpredictability entropy. For both cases we show that computational secure sketches cannot outperform the best information-theoretic secure sketches in the case of high-entropy Hamming-metric sources.

Positive Result: We show that the negative result can be overcome by analyzing computational fuzzy extractors directly. Namely, we show how to build a computational fuzzy extractor whose output key length equals the entropy of the source (this is impossible in the information-theoretic setting). Our construction is based on the hardness of the Learning with Errors (LWE) problem, and is secure when the noisy source is uniform or symbol-fixing (that is, each dimension is either uniform or fixed). As part of the security proof, we show a result of independent interest, namely that the decision version of LWE is secure when a small number of dimensions has no error.

Joint work with Benjamin Fuller and Xianrui Meng.

Elette Boyle
Secure Computation Against Adaptive Auxiliary Information

We study the problem of secure multiparty computation (MPC) in a setting where a cheating polynomial-time adversary can corrupt an arbitrary subset of parties and, in addition, learn arbitrary auxiliary information on the entire states of all honest parties (including their inputs and random coins), in an adaptive manner, throughout the protocol execution.
We formalize a definition of multiparty computation secure against adaptive auxiliary information (AI-MPC) that intuitively guarantees that such an adversary learns no more than the function output and the adaptive auxiliary information. In particular, if the auxiliary information contains only partial, "noisy," or computationally uninvertible information on secret inputs, then only such information should be revealed. Our definition is a natural generalization of the standard security notion for MPC, where the adversary is restricted to (static) auxiliary information on the inputs of the honest parties prior to the protocol execution.

We construct a universally composable AI-MPC protocol that realizes any (efficiently computable) functionality against malicious adversaries in the common reference string (CRS) model, based on the linear assumption over bilinear groups and the n-th residuosity assumption. Our protocol tolerates an arbitrary number of corruptions, and applies to both the two-party setting and the multiparty setting. Our result has interesting applications to the regime of leakage-resilient cryptography; indeed, our result is already used as an essential tool for constructing leakage-resilient MPC protocols in the leak-free preprocessing model [Boyle et al., STOC'12].

Joint work with Sanjam Garg, Abhishek Jain, Yael Tauman Kalai, and Amit Sahai.

Alon Rosen
Pseudorandom Functions and Lattices

We give direct constructions of pseudorandom function (PRF) families based on conjectured hard lattice problems and learning problems. Our constructions are asymptotically efficient and highly parallelizable in a practical sense, i.e., they can be computed by simple, relatively small low-depth arithmetic or boolean circuits (e.g., in NC$^{1}$ or even TC$^{0}$).
In addition, they are the first low-depth PRFs that have no known attack by efficient quantum algorithms. Central to our results is a new "derandomization" technique for the learning with errors (LWE) problem which, in effect, generates the error terms deterministically.

Joint work with Abhishek Banerjee and Chris Peikert.

Maciej Obremski
Non-Malleable Codes and Two-Source Extractors

We construct an efficient information-theoretically non-malleable code in the split-state model for one-bit messages. Non-malleable codes were introduced recently by Dziembowski, Pietrzak and Wichs (ICS 2010) as a general tool for storing messages securely on hardware that can be subject to tampering attacks. Informally, a code $(Enc: \cM \rightarrow \cL \times \cR, Dec: \cL \times \cR \rightarrow \cM)$ is non-malleable in the split-state model if any adversary, by manipulating $L$ and $R$ independently (where $(L,R)$ is an encoding of some message $M$), cannot obtain an encoding of a message $M'$ that is not equal to $M$ but is "related" to $M$ in some way. Until now it was unknown how to construct an information-theoretically secure code with such a property, even for $\cM = \bin$. Our construction solves this problem. Additionally, it is leakage-resilient, and the amount of leakage that we can tolerate can be an arbitrary fraction $\xi < 1/4$ of the length of the codeword.
Our code is based on the inner-product two-source extractor, but in general it can be instantiated with any two-source extractor that has large output and has the property of being "flexible," which is a new notion that we define. We also show that non-malleable codes for one-bit messages have an equivalent, perhaps simpler, characterization, namely such codes can be defined as follows: if $M$ is chosen uniformly from $\bin$, then the probability (in the experiment described above) that the output message $M'$ is not equal to $M$ can be at most $1/2 + \epsilon$.

Joint work with Stefan Dziembowski and Tomasz Kazana.

Stefan Dziembowski
Proofs of Space and a Greener Bitcoin

Proofs of work (PoW) have been suggested by Dwork and Naor (CRYPTO '92) as a protection for a shared resource. The basic idea is to ask the service requestor to dedicate some non-trivial amount of computational work to every request. The original applications included prevention of spam and protection against denial-of-service attacks; more recently, PoW have been used to prevent double spending in the Bitcoin digital currency system.

In this work we put forward the concept of proofs of space (PoS), where a service requestor must dedicate a significant amount of disk space, as opposed to computation. We give constructions of PoS schemes in the random oracle model. We propose PoS as an alternative to PoW for schemes such as Bitcoin. Currently, to avoid double spending, the user base of Bitcoin must dedicate more computational power than a potential adversary could spend in every (less than one hour) time frame. This is expensive and thus hard to incentivise. In contrast, PoS only requires users to dedicate disk space that they don't have use for at the moment.
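As a cartoon of how a lookup-based PoS might look -- an illustrative simplification, not the construction from the talk, with all parameters made up -- the prover fills its space once with sorted hash values and answers each challenge with a single binary search:

```python
import bisect
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

N = 1024  # number of stored hash values (the dedicated "disk space", tiny here)

def init_space(seed: bytes):
    """One-time initialization: sample N hash values and sort them."""
    return sorted(H(seed + i.to_bytes(8, "big")) for i in range(N))

def prove(space, challenge: bytes):
    """Proving = one lookup: return the stored value nearest to H(challenge)."""
    target = int.from_bytes(H(challenge), "big")
    i = bisect.bisect_left(space, H(challenge))
    candidates = space[max(0, i - 1):i + 1]
    return min(candidates,
               key=lambda v: abs(int.from_bytes(v, "big") - target))

def verify(challenge: bytes, answer: bytes) -> bool:
    """Accept if the answer lies unusually close to H(challenge): a prover
    storing far fewer than N values is unlikely to hold such a value."""
    target = int.from_bytes(H(challenge), "big")
    bound = (64 * 2**256) // N  # generous nearest-neighbor distance bound
    return abs(int.from_bytes(answer, "big") - target) < bound

space = init_space(b"my-seed")
ch = b"challenge-001"
answer = prove(space, ch)
```

The point this toy conveys is the efficiency of proving (one lookup), not the security argument of the actual random-oracle-model constructions.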
This space must be initialized once with a sorted list of random hash values (which can be locally sampled), and participating in the proof only requires a lookup in this sorted list.

Joint work with Sebastian Faust and Krzysztof Pietrzak.

François-Xavier Standaert
A survey of physical assumptions in leakage-resilience

Starting with concrete examples of leakage functions and DPA attacks, I will survey a number of assumptions that have been used to prove the leakage-resilience of cryptographic primitives, and discuss their practical relevance. In particular, I will focus on the informativeness of the leakage function, its computational complexity, and the assumption of independent leakage. I will then argue that some of these assumptions are difficult to fulfill by hardware engineers, and introduce an alternative assumption of simulatable leakage for block ciphers, which is empirically verifiable and allows proving the leakage-resilience of efficient symmetric cryptographic constructions.

Olivier Pereira
Leakage-resilient cryptography under empirically verifiable assumptions

Leakage-resilient cryptography aims at formally proving the security of cryptographic implementations against large classes of side-channel adversaries. One important challenge for such an approach to be relevant is to adequately connect the formal models used in the proofs with the practice of side-channel attacks. This raises the fundamental problem of finding reasonable restrictions of the leakage functions that can be empirically verified by evaluation laboratories. In this work, we introduce a new, realistic and empirically verifiable assumption of simulatable leakage, under which security proofs in the standard model can be obtained. We finally illustrate our claims by analyzing the physical security of an efficient pseudorandom generator (for which security could only be proven under a random-oracle-based assumption so far).
These positive results come at the cost of (algorithm-level) specialization, as our new assumption is specifically defined for block ciphers. Nevertheless, since block ciphers are the main building block of many leakage-resilient cryptographic primitives, our results also open the way towards more realistic constructions and proofs for other pseudorandom objects.

Joint work with François-Xavier Standaert and Yu Yu.

Daniele Venturi
On the connection between leakage tolerance and adaptive security

We revisit the context of leakage-tolerant interactive protocols as defined by Bitansky, Canetti and Halevi (TCC 2012). Our contributions can be summarized as follows:

For the purpose of secure message transmission, any encryption protocol with message space $\cM$ and secret-key space $\cSK$ tolerating poly-logarithmic leakage on the secret state of the receiver must satisfy $|\cSK| \ge (1-\epsilon)|\cM|$, for every $0 < \epsilon \le 1$, and if $|\cSK| = |\cM|$, then the scheme must use a fresh key pair to encrypt each message.

More generally, we show that any $n$-party protocol tolerates leakage of $\approx \poly(\log\spar)$ bits from one party at the end of the protocol execution if and only if the protocol has passive adaptive security against an adaptive corruption of one party at the end of the protocol execution. This shows that as soon as a little leakage is tolerated, one needs full adaptive security.

In case more than one party can be corrupted, we get that leakage tolerance is equivalent to a weaker form of adaptivity, which we call semi-adaptivity.
Roughly, a protocol has semi-adaptive security if there exists a simulator which can simulate the internal state of corrupted parties; however, such a state is not required to be indistinguishable from a real state, only that it would have led to the simulated communication. All our results can be based on the sole assumption that collision-resistant function ensembles exist.

Joint work with Jesper Buus Nielsen and Angela Zottarel.

Joel Alwen
Learning with Rounding, Revisited: New Reduction, Properties and Applications

The learning with rounding (LWR) problem, introduced by Banerjee, Peikert and Rosen at EUROCRYPT '12, is a variant of learning with errors (LWE) where one replaces random errors with deterministic rounding. The LWR problem was shown to be as hard as LWE for a setting of parameters where the modulus and modulus-to-error ratio are super-polynomial. In this work we resolve the main open problem and give a new reduction that works for a larger range of parameters, allowing for a polynomial modulus and modulus-to-error ratio. In particular, a smaller modulus gives us greater efficiency, and a smaller modulus-to-error ratio gives us greater security, which now follows from the worst-case hardness of GapSVP with polynomial (rather than super-polynomial) approximation factors.

As a tool in the reduction, we show that there is a "lossy mode" for the LWR problem, in which LWR samples only reveal partial information about the secret. This property gives us several interesting new applications, including a proof that LWR remains secure with weakly random secrets of sufficient min-entropy, and very simple constructions of deterministic encryption, lossy trapdoor functions and reusable extractors. Our approach is inspired by a technique of Goldwasser et al. from ICS '10, which implicitly showed the existence of a "lossy mode" for LWE.
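The deterministic rounding that distinguishes LWR from LWE can be sketched in a few lines. The parameters below are toy values chosen purely for illustration (and the rounding here floors rather than rounds to nearest, for brevity); they offer no security:

```python
import random

q, p, n = 257, 8, 4  # toy parameters: large modulus q, rounding modulus p, dimension n

def round_down(x):
    """LWR's deterministic 'error': map Z_q to Z_p by scaling and flooring."""
    return (p * x) // q % p

def lwr_sample(s, rng):
    """One LWR sample (a, b): a uniform in Z_q^n, b = round(<a, s> mod q).
    Unlike LWE, no random error is added -- the rounding loses the low bits."""
    a = [rng.randrange(q) for _ in range(n)]
    return a, round_down(sum(ai * si for ai, si in zip(a, s)) % q)

rng = random.Random(0)
s = [rng.randrange(q) for _ in range(n)]          # the secret
samples = [lwr_sample(s, rng) for _ in range(3)]  # what the adversary sees
```

The modulus-to-error ratio discussed in the abstract corresponds here to q/p: the reduction in the talk makes this ratio (and q itself) polynomial.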
By refining this technique, we also improve on the parameters of that work to require only a polynomial (instead of super-polynomial) modulus and modulus-to-error ratio.

Joint work with Stephan Krenn, Krzysztof Pietrzak, and Daniel Wichs.

Antonio Faonio
How to Authenticate From a Fully Compromised System

We propose an efficient identification scheme in the Bounded Retrieval Model (BRM) based on standard cryptographic assumptions (RSA and factoring). We achieve this by first constructing an honest-verifier (computational) zero-knowledge (HVZK) proof of storage (PoS) which, roughly speaking, guarantees that one can efficiently verify the integrity of remotely stored data without learning any information about the data. We then provide a general methodology (i.e., a compiler) that transforms any HVZK PoS into an identification scheme in the BRM. Furthermore, we provide a prototype implementation of our scheme and show that it is indeed efficient and deployable. This work was submitted to CCS 2013.

Joint work with Giuseppe Ateniese, Seny Kamara and Jonathan Katz.

Daniel Masny
Man-in-the-Middle Secure Authentication Schemes from Weak PRFs

The talk will be about constructing a 3-round symmetric-key authentication protocol, based upon weak PRFs, that is secure against man-in-the-middle attacks. Almost the same construction can be used for the more general class of randomized weak PRFs, including functions based upon the classical LPN problem as well as its variants, for example Toeplitz-LPN and Ring-LPN. The construction is very simple and efficient, making it very competitive with actively secure schemes based upon similar assumptions.

Maciej Skórski
Some problems in computational entropy

In the talk we will give a short overview of computational generalizations of the notions of entropy. Focusing on the most commonly used definitions, we will address two special problems in computational entropy. The first is the so-called "chain rule": an estimate of the amount of entropy left after some leakage.
Intuitively, small leakage shouldn't affect the pseudorandomness much; can we formalize this intuition for computational entropy? The second is a set of surprisingly deep connections between computational entropy and geometry. We will see that the notion of computational entropy is much more geometric than it might look. What does it have to do with derandomization?

Konrad Durnoga
On non-malleable extractors and computing discrete logarithms in bulk

We give an unconditional construction of a non-malleable extractor, improving the solution from the recent paper "Privacy Amplification and Non-Malleable Extractors via Character Sums" by Dodis et al. There, the authors provide the first explicit example of a non-malleable extractor -- a cryptographic primitive that significantly strengthens the notion of a classical randomness extractor. In order to make the extractor robust, so that it runs in polynomial time and outputs a linear number of bits, they rely on a certain conjecture on the least prime in a residue class. In the talk we present a modification of their construction that allows us to remove that dependency. As an auxiliary result, which can be of independent interest, we show an efficiently computable bijection between any order-M subgroup of the multiplicative group of a finite field and the set of integers modulo M, under the assumption that M is a smooth number. Also, we provide a version of Shanks' baby-step giant-step method for solving multiple instances of the discrete logarithm problem in the multiplicative group of a prime field. It performs better than the generic algorithm when run on a machine without constant-time access to each memory cell, e.g., on a classical Turing machine.

Joint work with Bartosz Źrałek.

Krzysztof Pietrzak
How to fake auxiliary input and applications

We show that for any joint distribution (X,A) and any family F of distinguishers, e.g. polynomial-size circuits, there exists an efficient (deterministic) simulator h such that F cannot distinguish (X,A) from (X,h(X)), i.e. for all f in F we have |E[f(X,A)] - E[f(X,h(X))]|
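As an aside on Durnoga's abstract above: for reference, here is the standard single-instance baby-step giant-step method that the talk's multi-instance, memory-access-aware variant builds on (that variant itself is not reproduced here):

```python
import math

def bsgs(g, h, p):
    """Solve g^x = h (mod p) for x by Shanks' baby-step giant-step.
    Assumes p is prime and g generates the group (order p - 1)."""
    m = math.isqrt(p - 1) + 1
    # Baby steps: table mapping g^j -> j for j in [0, m)
    table = {}
    e = 1
    for j in range(m):
        table.setdefault(e, j)
        e = e * g % p
    # Giant steps: walk h, h*g^-m, h*g^-2m, ... until we hit the table
    factor = pow(g, -m, p)  # modular inverse via 3-argument pow (Python 3.8+)
    gamma = h
    for i in range(m):
        if gamma in table:
            return i * m + table[gamma]
        gamma = gamma * factor % p
    return None  # no solution (h not in the subgroup generated by g)

p, g = 101, 2  # 2 is a primitive root mod 101
x = bsgs(g, pow(g, 77, p), p)
```

The O(sqrt(p)) table lookups are exactly the memory accesses whose cost the talk's Turing-machine analysis revisits.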