Workshop on Leakage, Tampering and Viruses
26 June 2013, Warsaw, Poland
ABSTRACTS
Yevgeniy Dodis Key Derivation Without Entropy Waste
We revisit the classical question of converting an imperfect source X of min-entropy k into a usable m-bit cryptographic key for some underlying application P. If P has security delta (against some class of attackers) with a uniformly random m-bit key, we seek to design a key derivation function (KDF) h that allows us to use R = h(X) as the key for P and results in comparable security delta' close to delta. Seeded randomness extractors provide a generic way to solve this problem provided that k > m + 2 log(1/delta), and this lower bound on k (called the "RT-bound") is known to be tight in general. Unfortunately, in many situations the "waste" of 2 log(1/delta) bits of entropy is significant, motivating the question of designing KDFs with less waste for important special classes of sources X or applications P. I will discuss several positive and negative results in this regard.
The most surprising of them will be a positive result for all unpredictability applications P, yielding a provably secure KDF with entropy "waste" of only log log(1/delta), an exponential improvement over the RT-bound.
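As a back-of-the-envelope illustration of the entropy accounting above, the following sketch (with hypothetical parameter values) compares the generic RT-bound to the improved overhead claimed for unpredictability applications:

```python
import math

def rt_bound_entropy(m_bits, delta):
    """Min-entropy needed by a generic seeded extractor (RT-bound):
    k >= m + 2*log2(1/delta)."""
    return m_bits + 2 * math.log2(1 / delta)

def unpredictability_entropy(m_bits, delta):
    """Entropy needed per the talk's result for unpredictability
    applications: the waste shrinks to ~log2(log2(1/delta))."""
    return m_bits + math.log2(math.log2(1 / delta))

# hypothetical example: a 128-bit key with security delta = 2^-64
m, delta = 128, 2 ** -64
print(rt_bound_entropy(m, delta))          # 256.0 (128 bits of waste)
print(unpredictability_entropy(m, delta))  # 134.0 (only 6 bits of waste)
```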
Abhishek Jain What Information is Leaked Under Concurrent Composition?
Traditional security notions for cryptographic protocols only promise security if a single protocol is executed in a "closed environment." Today's world, however, is driven by networks, the most important example being the Internet. In such an environment, several protocol instances may be executed concurrently, and an adversary may be able to perform coordinated attacks by corrupting parties across various protocol sessions. Over the last decade, a tremendous amount of effort has been made to obtain protocols that remain secure even under concurrent execution. Nevertheless, designing protocols that guarantee strong and meaningful security, without any trust assumptions, remains a challenging problem. In this talk, I will describe a new approach (inspired by the regime of leakage-resilient cryptography) to precisely quantify the amount of information that an adversary can learn by performing concurrent attacks. In particular, I will show (1) how positive results in leakage-resilient cryptography can be used to lower-bound the amount of information loss in the concurrent setting, and then (2) how the classic set-covering problem can be used to guarantee that, in a concurrent setting, standard security of most of the protocol sessions remains intact, without any trust assumptions. Joint work with Vipul Goyal and Divya Gupta, to appear at CRYPTO'13.
Yu Yu DPA attacks on small embedded devices and the applications of unpredictability pseudoentropy
In the first half of this talk, we report on successful side-channel attacks against several (old but still deployed) implementations of the COMP128-1 algorithm. Such attacks are able to recover cryptographic keys with limited time and data, by measuring the power consumption of the devices manipulating them, hence allowing card cloning and communication eavesdropping. This study allows us to put forward the long-term issues raised by the deployment of cryptographic implementations. It provides a motivation for improving the physical security of small embedded devices early in their development. We also use it to argue that public standards for cryptographic algorithms and transparent physical security evaluation methodologies are important tools for this purpose. This is joint work with Yuanyuan Zhou, François-Xavier Standaert and Jean-Jacques Quisquater. In the other half of the talk, we show an application of unpredictability pseudoentropy (a useful notion in leakage-resilient cryptography) to the problem of constructing pseudorandom generators from regular one-way functions. For any known-regular one-way function (on $n$-bit inputs) that is known to be $\eps$-hard to invert, we give a neat (and tighter) proof for the folklore construction of a pseudorandom generator of seed length $\Theta(n)$ making a single call to the underlying one-way function. For any unknown-regular one-way function with known $\eps$-hardness, we give a new construction with seed length $\Theta(n)$ and $O(n/\log(1/\eps))$ calls. Here the number of calls is optimal, matching the lower bounds of Holenstein and Sinha [FOCS 2012].
Yevgeniy Vahlis EyeDecrypt: Hiding Information in Plain Sight
We introduce EyeDecrypt, a novel content-privacy technology that allows only legitimate users to visualize data being displayed on public-view rendering devices, such as electronic displays or printed surfaces. The data can be viewed on a closely held personal device, such as a pair of smart glasses with a camera and heads-up display, or a smartphone. The decrypted data are displayed as an image overlay on the personal device, a form of augmented reality. The technology consists of two main components: a visualizable encryption scheme and a dataglyph-based visual encoding scheme for the ciphertexts generated by the encryption scheme. We describe all aspects of EyeDecrypt, from security definitions, constructions and formal analyses, to implementation details of a prototype.
Hoeteck Wee Public Key Encryption Against Related Key Attacks
We present efficient public-key encryption schemes resilient against linear related-key attacks (RKA) under standard assumptions and in the standard model. Specifically, we obtain encryption schemes based on the hardness of factoring, BDDH and LWE that remain secure even against an adversary that may query the decryption oracle on linear shifts of the actual secret key. Moreover, the ciphertext overhead is only an additive constant number of group elements.
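To illustrate the attack model (not the paper's schemes), here is a toy sketch of a linear related-key decryption oracle; the additive "scheme" is deliberately naive and serves only to show what an RKA query on a shifted key looks like, and why such queries are dangerous:

```python
import secrets

N = 2 ** 64

def enc(key, msg):          # toy additive "scheme", not RKA-secure
    return (msg + key) % N

def dec(key, ct):
    return (ct - key) % N

def rka_dec_oracle(key, delta, ct):
    """Linear related-key attack: the adversary gets decryptions
    under the shifted key (key + delta), for deltas of its choice."""
    return dec((key + delta) % N, ct)

key = secrets.randbelow(N)
ct = enc(key, 42)
# A single RKA query with shift delta reveals msg - delta, so the
# toy scheme offers no security at all against linear key shifts.
assert rka_dec_oracle(key, 7, ct) == (42 - 7) % N
```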
Moni Naor Zero-Knowledge and Secure Computation of Physical Properties
Is it possible to prove that two DNA fingerprints match, or that they do not match, without revealing any further information about the fingerprints? Is it possible to prove that two warheads have the same design without revealing the design itself? Zero-knowledge is a familiar and well-developed concept in the digital domain. As we know, under reasonable cryptographic assumptions, any statement that can be proved can also be proved with zero-knowledge. But zero-knowledge is not as well understood in the context of physical problems, such as proving that a set of physical objects has a particular property. In this talk I will describe recent work regarding zero-knowledge and secure computation of physical properties. In particular I will refer to the above-mentioned problems and the work of Glaser, Barak, and Goldston for arms treaty verification. Joint work with Ben Fisch and Daniel Freund.
Tomasz Kazana One-Time Programs with Limited Memory
We reinvestigate the notion of one-time programs introduced in the CRYPTO'08 paper by Goldwasser et al. A one-time program is a device containing a program C, with the property that C can be executed on at most one input. Goldwasser et al. show how to implement one-time programs on devices equipped with special hardware gadgets called one-time memory tokens. We provide an alternative construction that does not rely on such hardware gadgets. Instead, it is based on the following assumptions: (1) the total amount of data that can leak from the device is bounded, and (2) the total memory on the device (available both to the honest user and to the attacker) is also restricted, which is essentially the model used recently by Dziembowski et al. (TCC 2011, CRYPTO 2011) to construct one-time computable pseudorandom functions and key-evolution schemes.
joint work with Konrad Durnoga, Stefan Dziembowski and Michal Zajac
Edoardo Persichetti Code-based public-key encryption resistant to key leakage
Side-channel attacks are a major issue for implementations of secure cryptographic schemes. Among these, key-leakage attacks describe a scenario in which an adversary is allowed to learn arbitrary information about the private key, the only constraint being the number of bits learned. In this work, we study key-leakage resilience according to the model presented by Akavia, Goldwasser and Vaikuntanathan at TCC '09. As our main contribution, we present a code-based hash proof system; we obtain our construction by relaxing some of the requirements of the original definition of Cramer and Shoup. We then propose a leakage-resilient public-key encryption scheme that makes use of this hash proof system. To do so, we adapt a framework featured in a previous work by Alwen et al. regarding identity-based encryption (EUROCRYPT '10). Our construction uses error-correcting codes as a technical tool and, as opposed to previous work, does not require the use of a randomness extractor.
Stefan Mangard Bounding the Side-Channel Leakage of Security Tokens
The talk first provides an overview of the architecture of a typical security token, such as a smart card. Based on a discussion of various attacks on the components of the token, we elaborate on the degree to which it is possible to bound side-channel leakage in practice. We discuss this question for two scenarios. The first scenario is authentication with a fixed key. The second scenario is communication based on a session key. We finally present the cryptographic part of the CIPURSE protocol, which has been developed with the goal of minimizing side-channel leakage and which is used as a standard by the industry consortium OSPT Alliance.
Vladimir Kolesnikov MAC Precomputation with Applications to Secure Memory
We present ShMAC (Shallow MAC), a fixed-input-length message authentication code that performs most of its computation prior to the availability of the message. Specifically, ShMAC's message-dependent computation is much faster and smaller in hardware than the evaluation of a pseudorandom permutation (PRP), and can be implemented by a small shallow circuit, while its precomputation consists of one PRP evaluation. A main building block for ShMAC is the notion of strong differential uniformity (SDU), which we introduce and which may be of independent interest. We show an efficient SDU construction built from previously considered differentially uniform functions. Our motivating application is a system architecture in which a hardware-secured processor uses memory controlled by an adversary.
joint work with Juan A. Garay and Rae McLellan
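The precompute-then-mix structure can be sketched as follows; SHA-256 stands in for the PRP, and the XOR mix stands in for ShMAC's actual SDU function, so this illustrates only the workflow, not the real construction:

```python
import hashlib
import secrets

def precompute(key: bytes, nonce: bytes) -> bytes:
    # The one "PRP" evaluation, done before the message is available
    # (SHA-256 is a stand-in here, not the actual ShMAC PRP).
    return hashlib.sha256(key + nonce).digest()

def shallow_tag(pad: bytes, msg: bytes) -> bytes:
    # Message-dependent part: a constant-depth XOR mix, an
    # illustrative stand-in for ShMAC's strongly differentially
    # uniform function (a real MAC needs more than plain XOR).
    assert len(msg) == len(pad)
    return bytes(p ^ m for p, m in zip(pad, msg))

key, nonce = secrets.token_bytes(16), secrets.token_bytes(16)
pad = precompute(key, nonce)               # slow part, done in advance
msg = b"32-byte fixed-length message!!!!"  # fixed input length: 32 bytes
tag = shallow_tag(pad, msg)                # fast part, once msg arrives
```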
Leo Reyzin Computational Fuzzy Extractors
Fuzzy extractors derive strong keys from noisy sources. Their security is defined information-theoretically, which limits the length of the derived key, sometimes making it too short to be useful. We ask whether it is possible to obtain longer keys by considering computational security, and show the following.
- Negative Result: Noise tolerance in fuzzy extractors is usually achieved using an information-reconciliation component called a "secure sketch." The security of this component, which directly affects the length of the resulting key, is subject to lower bounds from coding theory. We show that, even when defined computationally, secure sketches are still subject to lower bounds from coding theory. Specifically, we consider two computational relaxations of the information-theoretic security requirement of secure sketches, using conditional HILL entropy and unpredictability entropy. For both cases we show that computational secure sketches cannot outperform the best information-theoretic secure sketches in the case of high-entropy Hamming-metric sources.
- Positive Result: We show that the negative result can be overcome by analyzing computational fuzzy extractors directly. Namely, we show how to build a computational fuzzy extractor whose output key length equals the entropy of the source (this is impossible in the information-theoretic setting). Our construction is based on the hardness of the Learning with Errors (LWE) problem, and is secure when the noisy source is uniform or symbol-fixing (that is, each dimension is either uniform or fixed). As part of the security proof, we show a result of independent interest, namely that the decision version of LWE is secure when a small number of dimensions has no error.
joint work with Benjamin Fuller and Xianrui Meng
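For readers unfamiliar with secure sketches, the classical (information-theoretic) code-offset construction that the coding-theory lower bounds apply to can be sketched with a simple repetition code:

```python
import secrets

def encode(bits, rep=5):                    # repetition code
    return [b for b in bits for _ in range(rep)]

def decode(bits, rep=5):                    # majority vote per block
    return [int(sum(bits[i:i + rep]) > rep // 2)
            for i in range(0, len(bits), rep)]

def sketch(w, rep=5):
    """Code-offset secure sketch: SS(w) = w XOR (random codeword)."""
    c = encode([secrets.randbelow(2) for _ in range(len(w) // rep)], rep)
    return [wi ^ ci for wi, ci in zip(w, c)]

def recover(w_noisy, s, rep=5):
    """Rec(w', s): decode (w' XOR s) to the nearest codeword c,
    then recompute the original reading as c XOR s."""
    c_noisy = [wi ^ si for wi, si in zip(w_noisy, s)]
    c = encode(decode(c_noisy, rep), rep)
    return [ci ^ si for ci, si in zip(c, s)]

w = [secrets.randbelow(2) for _ in range(40)]        # noisy source reading
s = sketch(w)                                        # public helper string
w_noisy = list(w)
w_noisy[3] ^= 1; w_noisy[17] ^= 1                    # two bit flips of noise
assert recover(w_noisy, s) == w                      # exact reading restored
```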
Elette Boyle Secure Computation Against Adaptive Auxiliary Information
We study the problem of secure multiparty computation (MPC) in a setting where a cheating polynomial-time adversary can corrupt an arbitrary subset of parties and, in addition, learn arbitrary auxiliary information on the entire states of all honest parties (including their inputs and random coins), in an adaptive manner, throughout the protocol execution. We formalize a definition of multiparty computation secure against adaptive auxiliary information (AI-MPC) that intuitively guarantees that such an adversary learns no more than the function output and the adaptive auxiliary information.
In particular, if the auxiliary information contains only partial, ``noisy,'' or computationally uninvertible information on secret inputs, then only such information should be revealed. Our definition is a natural generalization of the standard security notion for MPC, where the adversary is restricted to (static) auxiliary information on the inputs of the honest parties prior to the protocol execution.
We construct a universally composable AI-MPC protocol that realizes any (efficiently computable) functionality against malicious adversaries in the common reference string (CRS) model, based on the linear assumption over bilinear groups and the nth-residuosity assumption. Our protocol tolerates an arbitrary number of corruptions and applies to both the two-party setting and the multiparty setting. Our result has interesting applications to the regime of leakage-resilient cryptography; indeed, it is already used as an essential tool for constructing leakage-resilient MPC protocols in the leak-free preprocessing model [Boyle et al., STOC'12]. Joint work with Sanjam Garg, Abhishek Jain, Yael Tauman Kalai and Amit Sahai.
Alon Rosen Pseudorandom Functions and Lattices
We give direct constructions of pseudorandom function (PRF) families based on conjectured hard lattice problems and learning problems. Our constructions are asymptotically efficient and highly parallelizable in a practical sense, i.e., they can be computed by simple, relatively small low-depth arithmetic or boolean circuits (e.g., in NC$^{1}$ or even TC$^{0}$). In addition, they are the first low-depth PRFs that have no known attack by efficient quantum algorithms. Central to our results is a new ``derandomization'' technique for the learning with errors (LWE) problem which, in effect, generates the error terms deterministically.
Joint work with Abhishek Banerjee and Chris Peikert.
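A toy rendition of the derandomization idea, with deterministic rounding in place of sampled error terms; the parameters are tiny and the subset-product structure only gestures at the actual construction, so this is an illustration of the shape of such a PRF, not the paper's scheme:

```python
import secrets

q, p, n, L = 7681, 16, 4, 8   # modulus q, rounding modulus p, dim n, L input bits

def rand_mat():
    return [[secrets.randbelow(q) for _ in range(n)] for _ in range(n)]

def vec_mat(v, M):            # row vector times matrix, mod q
    return [sum(v[i] * M[i][j] for i in range(n)) % q for j in range(n)]

def keygen():
    a = [secrets.randbelow(q) for _ in range(n)]       # secret vector
    return a, [rand_mat() for _ in range(L)]           # secret matrices

def prf(key, x_bits):
    a, mats = key
    v = a
    for bit, M in zip(x_bits, mats):
        if bit:               # subset-product indexed by the input bits
            v = vec_mat(v, M)
    # deterministic rounding from Z_q down to Z_p replaces sampled
    # LWE error terms -- the "derandomization" idea in miniature
    return [(vi * p) // q for vi in v]
```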
Maciej Obremski Non-Malleable Codes and Two-Source Extractors
We construct an efficient information-theoretically non-malleable code in the split-state model for one-bit messages. Non-malleable codes were introduced recently by Dziembowski, Pietrzak and Wichs (ICS 2010) as a general tool for storing messages securely on hardware that can be subject to tampering attacks. Informally, a code $(Enc: \mathcal{M} \rightarrow \mathcal{L} \times \mathcal{R}, Dec: \mathcal{L} \times \mathcal{R} \rightarrow \mathcal{M})$ is non-malleable in the split-state model if any adversary, by manipulating $L$ and $R$ independently (where $(L,R)$ is an encoding of some message $M$), cannot obtain an encoding of a message $M'$ that is not equal to $M$ but is ``related'' to $M$ in some way. Until now it was unknown how to construct an information-theoretically secure code with such a property, even for $\mathcal{M} = \{0,1\}$. Our construction solves this problem. Additionally, it is leakage-resilient, and the amount of leakage that we can tolerate can be an arbitrary fraction $\xi < 1/4$ of the length of the codeword. Our code is based on the inner-product two-source extractor, but in general it can be instantiated by any two-source extractor that has large output and has the property of being flexible, which is a new notion that we define. We also show that non-malleable codes for one-bit messages have an equivalent, perhaps simpler characterization: if $M$ is chosen uniformly from $\{0,1\}$, then the probability (in the experiment described above) that the output message $M'$ is not equal to $M$ can be at most $1/2 + \epsilon$.
joint work with Stefan Dziembowski and Tomasz Kazana
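A drastically simplified toy illustrating the split-state syntax for one-bit messages, using an inner-product decoder; this is not the paper's construction and carries none of its non-malleability guarantees, it only shows how a message is stored as two independently held halves $L$ and $R$:

```python
import secrets

P, N = 101, 6      # toy prime field and split-state vector length

def inner(L, R):
    return sum(l * r for l, r in zip(L, R)) % P

def enc(bit):
    """Toy split-state encoding: resample (L, R) until the inner
    product mod P is zero (bit 0) or nonzero (bit 1)."""
    while True:
        L = [secrets.randbelow(P) for _ in range(N)]
        R = [secrets.randbelow(P) for _ in range(N)]
        if (inner(L, R) == 0) == (bit == 0):
            return L, R

def dec(L, R):
    # the decoder only sees the two halves through their inner product
    return 0 if inner(L, R) == 0 else 1

for b in (0, 1):
    assert dec(*enc(b)) == b
```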
Stefan Dziembowski Proofs of Space and a Greener Bitcoin
Proofs of work (PoW) were suggested by Dwork and Naor (Crypto'92) as a means of protecting a shared resource. The basic idea is to ask the service requestor to dedicate some non-trivial amount of computational work to every request. The original applications included prevention of spam and protection against denial-of-service attacks; more recently, PoW have been used to prevent double spending in the Bitcoin digital currency system.
In this work we put forward the concept of proofs of space (PoS), where a service requestor must dedicate a significant amount of disk space as opposed to computation. We give constructions of PoS schemes in the random oracle model.
We propose PoS as an alternative to PoW for schemes such as Bitcoin. Currently, to avoid double spending, the user base of Bitcoin must dedicate more computational power than a potential adversary could spend within every time frame (of less than one hour). This is expensive and thus hard to incentivise.
In contrast, PoS only require users to dedicate disk space that they have no use for at the moment. This space must be initialized once with a sorted list of random hash values (which can be locally sampled), and participating in the proof only requires a lookup in this sorted list.
joint work with Sebastian Faust and Krzysztof Pietrzak
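The initialization-then-lookup workflow can be sketched as follows; this is a toy in the spirit of the abstract, and a real PoS verifier would additionally check that the returned hash is sufficiently close to the challenge, which this sketch omits:

```python
import bisect
import hashlib
import secrets

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def init_space(key: bytes, n: int):
    """One-time initialization: fill disk with a sorted list of
    (hash, preimage) pairs derived from a local key."""
    return sorted((h(key + i.to_bytes(4, "big")), i) for i in range(n))

def prove(table, challenge: bytes):
    """Answering a challenge is just a lookup in the sorted list:
    return the stored entry nearest (from above) to the challenge."""
    keys = [t[0] for t in table]
    idx = min(bisect.bisect_left(keys, challenge), len(table) - 1)
    return table[idx]

def verify(key: bytes, challenge: bytes, proof, n: int) -> bool:
    digest, i = proof
    return 0 <= i < n and h(key + i.to_bytes(4, "big")) == digest

key, n = secrets.token_bytes(16), 1024
table = init_space(key, n)          # expensive, done once
chal = secrets.token_bytes(32)
assert verify(key, chal, prove(table, chal), n)   # cheap per challenge
```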
François-Xavier Standaert A survey of physical assumptions in leakage resilience
Starting with concrete examples of leakage functions and DPA attacks, I will survey a number of assumptions that have been used to prove the leakage resilience of cryptographic primitives, and discuss their practical relevance. In particular, I will focus on the informativeness of the leakage function, its computational complexity and the assumption of independent leakage. I will then argue that some of these assumptions are difficult to fulfill by hardware engineers, and introduce an alternative assumption of simulatable leakage for block ciphers that is empirically verifiable and allows proving the leakage resilience of efficient symmetric cryptographic constructions.
Olivier Pereira Leakage-resilient cryptography under empirically verifiable assumptions
Leakage-resilient cryptography aims at formally proving the security of cryptographic implementations against large classes of side-channel adversaries. One important challenge for such an approach to be relevant is to adequately connect the formal models used in the proofs with the practice of side-channel attacks. This raises the fundamental problem of finding reasonable restrictions of the leakage functions that can be empirically verified by evaluation laboratories. In this work, we introduce a new, realistic and empirically verifiable assumption of simulatable leakage, under which security proofs in the standard model can be obtained. We then illustrate our claims by analyzing the physical security of an efficient pseudorandom generator (for which security could previously only be proven under a random-oracle-based assumption). These positive results come at the cost of (algorithm-level) specialization, as our new assumption is specifically defined for block ciphers. Nevertheless, since block ciphers are the main building block of many leakage-resilient cryptographic primitives, our results also open the way towards more realistic constructions and proofs for other pseudorandom objects.
joint work with FrançoisXavier Standaert and Yu Yu
Daniele Venturi On the connection between leakage tolerance and adaptive security
We revisit the context of leakage-tolerant interactive protocols as defined by Bitansky, Canetti and Halevi (TCC 2012). Our contributions can be summarized as follows:
- For the purpose of secure message transmission, any encryption protocol with message space $\mathcal{M}$ and secret-key space $\mathcal{SK}$ tolerating polylogarithmic leakage on the secret state of the receiver must satisfy $|\mathcal{SK}| \ge (1-\epsilon)|\mathcal{M}|$, for every $0 < \epsilon \le 1$; and if $|\mathcal{SK}| = |\mathcal{M}|$, then the scheme must use a fresh key pair to encrypt each message.
- More generally, we show that any $n$-party protocol tolerates leakage of polylogarithmically many (in the security parameter) bits from one party at the end of the protocol execution if and only if the protocol has passive adaptive security against an adaptive corruption of one party at the end of the protocol execution. This shows that as soon as a little leakage is tolerated, one needs full adaptive security.
- In case more than one party can be corrupted, we get that leakage tolerance is equivalent to a weaker form of adaptivity, which we call semi-adaptivity. Roughly, a protocol has semi-adaptive security if there exists a simulator which can simulate the internal state of corrupted parties; however, such a state is not required to be indistinguishable from a real state, only to have led to the simulated communication.
All our results can be based solely on the assumption that collision-resistant function ensembles exist.
joint work with Jesper Buus Nielsen and Angela Zottarel
Joel Alwen Learning with Rounding, Revisited: New Reduction, Properties and Applications
The learning with rounding (LWR) problem, introduced by Banerjee, Peikert and Rosen at EUROCRYPT '12, is a variant of learning with errors (LWE) where one replaces random errors with deterministic rounding. The LWR problem was shown to be as hard as LWE for a setting of parameters where the modulus and modulus-to-error ratio are superpolynomial. In this work we resolve the main open problem and give a new reduction that works for a larger range of parameters, allowing for a polynomial modulus and modulus-to-error ratio. In particular, a smaller modulus gives us greater efficiency, and a smaller modulus-to-error ratio gives us greater security, which now follows from the worst-case hardness of GapSVP with polynomial (rather than superpolynomial) approximation factors.
As a tool in the reduction, we show that there is a “lossy mode” for the LWR problem, in which LWR samples only reveal partial information about the secret. This property gives us several interesting new applications, including a proof that LWR remains secure with weakly random secrets of sufficient min-entropy, and very simple constructions of deterministic encryption, lossy trapdoor functions and reusable extractors. Our approach is inspired by a technique of Goldwasser et al. from ICS '10, which implicitly showed the existence of a “lossy mode” for LWE. By refining this technique, we also improve on the parameters of that work to only require a polynomial (instead of superpolynomial) modulus and modulus-to-error ratio.
joint work with Stephan Krenn, Krzysztof Pietrzak, and Daniel Wichs
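The rounding operation that replaces LWE's random errors can be sketched in a few lines (toy parameters, hypothetical values of $q$, $p$ and $n$):

```python
import secrets

q, p, n = 2 ** 13, 2 ** 4, 16   # large modulus q, small rounding modulus p

def round_qp(x):
    """Deterministic rounding from Z_q down to Z_p: this replaces
    the sampled noise of an LWE instance."""
    return (x * p) // q

def lwr_sample(s):
    a = [secrets.randbelow(q) for _ in range(n)]
    b = round_qp(sum(ai * si for ai, si in zip(a, s)) % q)
    return a, b

s = [secrets.randbelow(q) for _ in range(n)]   # secret vector
a, b = lwr_sample(s)
# unlike LWE, re-evaluating on the same a yields the same b
assert b == round_qp(sum(ai * si for ai, si in zip(a, s)) % q)
```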
Antonio Faonio How to Authenticate From a Fully Compromised System
We propose an efficient identification scheme in the Bounded Retrieval Model (BRM) based on standard cryptographic assumptions (RSA and factoring). We achieve this by first constructing an honest-verifier (computational) zero-knowledge (HVZK) proof of storage (PoS) which, roughly speaking, guarantees that one can efficiently verify the integrity of remotely stored data without learning any information about the data. We then provide a general methodology (i.e., a compiler) that transforms any HVZK PoS into an identification scheme in the BRM. Furthermore, we provide a prototype implementation of our scheme and show that it is indeed efficient and deployable. This work was submitted to CCS 2013.
Joint work with Giuseppe Ateniese, Seny Kamara and Jonathan Katz.
Daniel Masny Man-in-the-Middle Secure Authentication Schemes from Weak PRFs
The talk will be about constructing a 3-round symmetric-key authentication protocol based upon weak PRFs that is secure against man-in-the-middle attacks. Almost the same construction can be used for the more general class of randomized weak PRFs, including functions based upon the classical LPN problem as well as its variants, for example Toeplitz-LPN and Ring-LPN. The construction is very simple and efficient, making it very competitive with actively secure schemes based upon similar assumptions.
Maciej Skórski Some problems in computational entropy
In the talk we will give a short overview of computational generalizations of the notions of entropy. Focusing on the most commonly used definitions, we will address two specific problems in this area:
- the so-called "chain rule", an estimate of the amount of entropy left after some leakage. Intuitively, small leakage shouldn't affect the pseudorandomness much. Can we formalize this intuition for computational entropy?
- some surprisingly deep connections between computational entropy and geometry. We will see that the notion of computational entropy is much more geometrical than it might appear. What does it have to do with derandomization?
Konrad Durnoga On non-malleable extractors and computing discrete logarithms in bulk
We give an unconditional construction of a non-malleable extractor, improving the solution from the recent paper "Privacy Amplification and Non-Malleable Extractors via Character Sums" by Dodis et al. There, the authors provide the first explicit example of a non-malleable extractor, a cryptographic primitive that significantly strengthens the notion of a classical randomness extractor. In order to make the extractor robust, so that it runs in polynomial time and outputs a linear number of bits, they rely on a certain conjecture on the least prime in a residue class. In the talk we present a modification of their construction that allows us to remove that dependency. As an auxiliary result, which can be of independent interest, we show an efficiently computable bijection between any order-$M$ subgroup of the multiplicative group of a finite field and a set of integers modulo $M$, under the assumption that $M$ is a smooth number. Also, we provide a version of Shanks' baby-step giant-step method for solving multiple instances of the discrete logarithm problem in the multiplicative group of a prime field. It performs better than the generic algorithm when run on a machine without constant-time access to each memory cell, e.g., on a classical Turing machine.
Joint work with Bartosz Źrałek.
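For reference, the classical single-instance baby-step giant-step method mentioned above looks like this; the talk's variant batches many instances and avoids the constant-time random access that the lookup table below relies on:

```python
import math

def bsgs(g, h, p):
    """Baby-step giant-step: find x with g^x = h (mod p), for prime p.
    Time and memory are O(sqrt(p))."""
    m = math.isqrt(p - 1) + 1
    baby = {pow(g, j, p): j for j in range(m)}     # baby steps g^j
    g_inv_m = pow(g, -m, p)                        # g^(-m) mod p
    gamma = h
    for i in range(m):                             # giant steps h*g^(-im)
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = (gamma * g_inv_m) % p
    return None                                    # no discrete log exists

p, g = 1019, 2
target = pow(g, 777, p)
x = bsgs(g, target, p)
assert pow(g, x, p) == target
```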
Krzysztof Pietrzak How to fake auxiliary input and applications
We show that for any joint distribution $(X,A)$ and any family $F$ of distinguishers, e.g. polynomial-size circuits, there exists an efficient (deterministic) simulator $h$ such that $F$ cannot distinguish $(X,A)$ from $(X,h(X))$, i.e. for all $f \in F$ we have $|E[f(X,A)] - E[f(X,h(X))]| < \eps$.
We'll discuss several applications, including leakage resilience and zero-knowledge.
Sebastian Faust Efficient Leakage Resilient Symmetric Cryptography
Leakage-resilient cryptography attempts to incorporate side-channel leakage into the black-box security model and to design cryptographic schemes that are provably secure within it. Informally, a scheme is leakage-resilient if it remains secure even if an adversary learns a bounded amount of arbitrary information about the scheme's internal state. Unfortunately, most leakage-resilient schemes are unnecessarily complicated in order to achieve strong provable security guarantees. As advocated by Yu et al. [CCS'10], this is mostly an artefact of the security proof, and in practice much simpler constructions may already suffice to protect against realistic side-channel attacks. In this work, we show that leakage resilience can indeed be obtained for simpler constructions when we aim for relaxed security notions where the leakage functions and/or the inputs to the primitive are chosen non-adaptively. For example, we show that a three-round Feistel network instantiated with a leakage-resilient PRF yields a leakage-resilient PRP if the inputs are chosen non-adaptively. (This complements the result of Dodis and Pietrzak [CRYPTO'10], who show that if adaptive queries are allowed, a superlogarithmic number of rounds is necessary.)
We also show that a minor variation of the classical GGM construction gives a leakage-resilient PRF if both the leakage function and the inputs are chosen non-adaptively.
Joint work with Krzysztof Pietrzak and Joachim Schipper
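The three-round Feistel structure mentioned above can be sketched as follows, with a hash-based PRF standing in for the leakage-resilient round function (the round function and keys here are illustrative, not the paper's instantiation):

```python
import hashlib

def prf(key: bytes, x: bytes) -> bytes:
    # stand-in round PRF; the paper would use a leakage-resilient PRF
    return hashlib.sha256(key + x).digest()[:16]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def feistel_enc(keys, block: bytes) -> bytes:
    L, R = block[:16], block[16:]
    for k in keys:                    # three Feistel rounds
        L, R = R, xor(L, prf(k, R))
    return L + R

def feistel_dec(keys, block: bytes) -> bytes:
    L, R = block[:16], block[16:]
    for k in reversed(keys):          # undo the rounds in reverse order
        L, R = xor(R, prf(k, L)), L
    return L + R

keys = [b"k1" * 8, b"k2" * 8, b"k3" * 8]   # hypothetical round keys
pt = bytes(range(32))
assert feistel_dec(keys, feistel_enc(keys, pt)) == pt   # PRP: invertible
```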
Rafael Pass On the (Im)Possibility of TamperResilient Cryptography: Using Fourier Analysis in Computer Viruses
We initiate a study of the security of cryptographic primitives in the presence of efficient tampering attacks on the randomness of honest parties. More precisely, we consider $p$-tampering attackers that may efficiently tamper with each bit of the honest parties' random tape with probability $p$, but have to do so in an ``online'' fashion. Our main result is a strong negative result: we show that any secure encryption scheme, bit commitment scheme, or zero-knowledge protocol can be ``broken'' with probability $p$ by a $p$-tampering attacker. The core of this result is a new Fourier-analytic technique for biasing the output of bounded-value functions, which may be of independent interest.
We also show that this result cannot be extended to primitives such as signature schemes and identification protocols: assuming the existence of one-way functions, such primitives can be made resilient to $1/\mathrm{poly}(n)$-tampering attacks, where $n$ is the security parameter.
joint work with Per Austrin, KaiMin Chung, Mohammad Mahmoody and Karn Seth
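A brute-force simulation of online $p$-tampering on a toy bounded-value function (the parity, in $\pm 1$ notation); the greedy tamperer below is a simplification of the paper's Fourier-analytic attack, but it already exhibits a noticeable bias:

```python
import random

rng = random.Random(0)   # fixed seed so the experiment is reproducible

def f(bits):             # bounded-value function: parity, in {-1, +1}
    return 1 if sum(bits) % 2 == 0 else -1

def best_bit(prefix, n):
    """Greedy online tamperer: choose the bit value maximizing the
    average of f over uniform completions (brute force, tiny n only)."""
    def avg(pref):
        rest = n - len(pref)
        return sum(f(pref + [(c >> i) & 1 for i in range(rest)])
                   for c in range(2 ** rest)) / 2 ** rest
    return 0 if avg(prefix + [0]) >= avg(prefix + [1]) else 1

def sample(p, n=6):
    bits = []
    for _ in range(n):
        if rng.random() < p:                 # tamper this bit w.p. p
            bits.append(best_bit(bits, n))
        else:                                # otherwise honest coin flip
            bits.append(rng.randrange(2))
    return f(bits)

trials = 2000
bias = sum(sample(0.5) for _ in range(trials)) / trials
# untampered parity has expectation 0; 0.5-tampering pushes it up
```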
Adi Akavia Distributed Public Key Schemes Secure against Continual Leakage
We study distributed public-key schemes secure against continual memory leakage. In a distributed scheme the secret key is shared among two computing devices communicating over a public channel, and the decryption operation is computed by a simple 2-party protocol between the devices. Similarly, the secret-key shares are periodically refreshed by a simple 2-party protocol executed in discrete time periods throughout the lifetime of the system. The leakage adversary can choose pairs of polynomial-time computable length-shrinking (or entropy-shrinking) functions, one per device, and receive the value of the respective function on the internal state of the respective device (namely, on its secret share, internal randomness, and results of intermediate computation). We present distributed public-key encryption (DPKE) and distributed identity-based encryption (DIBE) schemes that are secure against continual memory leakage, under the Bilinear Decisional Diffie-Hellman and 2-linear assumptions. Our schemes have the following properties:
- The DPKE and DIBE schemes tolerate leakage at all times, including during refresh, where the leakage rate is $(1/2 - o(1), 1)$ of the respective devices during refresh, and $(1 - o(1), 1)$ at all other times (post key generation).
- The DIBE scheme tolerates leakage from both the master secret key and the identity-based secret keys.
- The DPKE scheme is CCA2-secure against continual memory leakage.
Joint work with Shafi Goldwasser and Carmit Hazay.
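The share-refresh principle can be illustrated with plain additive secret sharing; the actual schemes refresh structured key shares under bilinear assumptions, so this toy shows only why periodic refreshing makes previously leaked shares useless:

```python
import secrets

Q = 2 ** 127 - 1   # toy modulus for additive sharing

def share(secret):
    """Split the secret between device A and device B additively."""
    r = secrets.randbelow(Q)
    return r, (secret - r) % Q

def refresh(share_a, share_b):
    """Proactive refresh: re-randomize both shares with a single
    random offset. The underlying secret is unchanged, but any
    previously leaked share now carries no information on its own."""
    r = secrets.randbelow(Q)
    return (share_a + r) % Q, (share_b - r) % Q

sk = secrets.randbelow(Q)
a, b = share(sk)
for _ in range(3):                 # discrete refresh periods
    a, b = refresh(a, b)
assert (a + b) % Q == sk           # the shared secret is preserved
```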

