Lecture Notes in Computer Science 2887
Edited by G. Goos, J. Hartmanis, and J. van Leeuwen

Berlin Heidelberg New York Hong Kong London Milan Paris Tokyo
Thomas Johansson (Ed.)
Fast Software Encryption
10th International Workshop, FSE 2003
Lund, Sweden, February 24-26, 2003
Revised Papers
Series Editors
Gerhard Goos, Karlsruhe University, Germany
Juris Hartmanis, Cornell University, NY, USA
Jan van Leeuwen, Utrecht University, The Netherlands

Volume Editor
Thomas Johansson
Lund University, Department of Information Technology
Box 118, SE-221 00 Lund, Sweden
E-mail: [email protected]

Cataloging-in-Publication Data applied for
A catalog record for this book is available from the Library of Congress.
Bibliographic information published by Die Deutsche Bibliothek
Die Deutsche Bibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data is available on the Internet at .
CR Subject Classification (1998): E.3, F.2.1, E.4, G.4
ISSN 0302-9743
ISBN 3-540-20449-0 Springer-Verlag Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law.

Springer-Verlag Berlin Heidelberg New York, a member of BertelsmannSpringer Science+Business Media GmbH
http://www.springeronline.com

© International Association for Cryptologic Research 2003
Printed in Germany

Typesetting: Camera-ready by author, data conversion by PTP Berlin GmbH
Printed on acid-free paper
SPIN: 10966228 06/3142 543210
Preface
Fast Software Encryption is now a 10-year-old workshop on symmetric cryptography, including the design and cryptanalysis of block and stream ciphers, as well as hash functions. The first FSE workshop was held in Cambridge in 1993, followed by Leuven in 1994, Cambridge in 1996, Haifa in 1997, Paris in 1998, Rome in 1999, New York in 2000, Yokohama in 2001, and Leuven in 2002. This Fast Software Encryption workshop, FSE 2003, was held February 24–26, 2003 in Lund, Sweden. The workshop was sponsored by IACR (the International Association for Cryptologic Research) and organized by the General Chair, Ben Smeets, in cooperation with the Department of Information Technology, Lund University.

This year a total of 71 papers were submitted to FSE 2003. After a two-month reviewing process, 27 papers were accepted for presentation at the workshop. In addition, we were fortunate to have in the program an invited talk by James L. Massey.

The selection of papers was difficult and challenging work. Each submission was refereed by at least three reviewers. I would like to thank the program committee members, who all did an excellent job. In addition, I gratefully acknowledge the help of a number of colleagues who provided reviews for the program committee. They are: Kazumaro Aoki, Alex Biryukov, Christophe De Cannière, Nicolas Courtois, Jean-Charles Faugère, Rob Johnson, Pascal Junod, Joseph Lano, Marine Minier, Elisabeth Oswald, Håvard Raddum, and Markku-Juhani O. Saarinen.

The local arrangements for the workshop were managed by a committee consisting of Patrik Ekdahl, Lena Månsson, and Laila Lembke. I would like to thank them all for their hard work. Finally, we are grateful for the financial support for the workshop provided by Business Security, Ericsson Mobile Platforms, and RSA Security.
August 2003
Thomas Johansson
FSE 2003
February 24–26, 2003, Lund, Sweden

Sponsored by the International Association for Cryptologic Research
in cooperation with the Department of Information Technology, Lund University, Sweden

Program Chair: Thomas Johansson (Lund University, Sweden)
General Chair: Ben Smeets (Ericsson, Sweden)
Program Committee

Ross Anderson (Cambridge University, UK)
Anne Canteaut (Inria, France)
Joan Daemen (Protonworld, Belgium)
Cunsheng Ding (Hong Kong University of Science and Technology)
Hans Dobbertin (University of Bochum, Germany)
Henri Gilbert (France Telecom, France)
Jovan Golic (Gemplus, Italy)
Lars Knudsen (Technical University of Denmark)
Helger Lipmaa (Helsinki University of Technology, Finland)
Mitsuru Matsui (Mitsubishi Electric, Japan)
Willi Meier (Fachhochschule Aargau, Switzerland)
Kaisa Nyberg (Nokia, Finland)
Bart Preneel (K.U. Leuven, Belgium)
Vincent Rijmen (Cryptomathic, Belgium)
Matt Robshaw (Royal Holloway, University of London, UK)
Serge Vaudenay (EPFL, Switzerland)
David Wagner (U.C. Berkeley, USA)
Table of Contents
Block Cipher Cryptanalysis

Cryptanalysis of IDEA-X/2 .......... 1
Håvard Raddum (University of Bergen)

Differential-Linear Cryptanalysis of Serpent .......... 9
Eli Biham, Orr Dunkelman, and Nathan Keller (Technion)

Rectangle Attacks on 49-Round SHACAL-1 .......... 22
Eli Biham, Orr Dunkelman, and Nathan Keller (Technion)

Cryptanalysis of Block Ciphers Based on SHA-1 and MD5 .......... 36
Markku-Juhani O. Saarinen (Helsinki University of Technology)

Analysis of Involutional Ciphers: Khazad and Anubis .......... 45
Alex Biryukov (Katholieke Universiteit Leuven)
Boolean Functions and S-Boxes

On Plateaued Functions and Their Constructions .......... 54
Claude Carlet and Emmanuel Prouff (INRIA)

Linear Redundancy in S-Boxes .......... 74
Joanne Fuller and William Millan (Queensland University of Technology)
Stream Cipher Cryptanalysis

Loosening the KNOT .......... 87
Antoine Joux and Frédéric Muller (DCSSI Crypto Lab)

On the Resynchronization Attack .......... 100
Jovan Dj. Golić (Telecom Italia Lab) and Guglielmo Morgari (Telsy Elettronica e Telecomunicazioni)

Cryptanalysis of Sober-t32 .......... 111
Steve Babbage (Vodafone Group Research & Development), Christophe De Cannière, Joseph Lano, Bart Preneel, and Joos Vandewalle (Katholieke Universiteit Leuven)
MACs

OMAC: One-Key CBC MAC .......... 129
Tetsu Iwata and Kaoru Kurosawa (Ibaraki University)
A Concrete Security Analysis for 3GPP-MAC .......... 154
Dowon Hong, Ju-Sung Kang (ETRI), Bart Preneel (Katholieke Universiteit Leuven), and Heuisu Ryu (ETRI)

New Attacks against Standardized MACs .......... 170
Antoine Joux, Guillaume Poupard (DCSSI), and Jacques Stern (Ecole normale supérieure)

Analysis of RMAC .......... 182
Lars R. Knudsen (Technical University of Denmark) and Tadayoshi Kohno (UCSD)
Side Channel Attacks

A Generic Protection against High-Order Differential Power Analysis .......... 192
Mehdi-Laurent Akkar and Louis Goubin (Schlumberger Smart Cards)

A New Class of Collision Attacks and Its Application to DES .......... 206
Kai Schramm, Thomas Wollinger, and Christof Paar (Ruhr-Universität Bochum)
Block Cipher Theory

Further Observations on the Structure of the AES Algorithm .......... 223
Beomsik Song and Jennifer Seberry (University of Wollongong)

Optimal Key Ranking Procedures in a Statistical Cryptanalysis .......... 235
Pascal Junod and Serge Vaudenay (Swiss Federal Institute of Technology, Lausanne)

Improving the Upper Bound on the Maximum Differential and the Maximum Linear Hull Probability for SPN Structures and AES .......... 247
Sangwoo Park (National Security Research Institute), Soo Hak Sung (Pai Chai University), Sangjin Lee, and Jongin Lim (CIST)

Linear Approximations of Addition Modulo 2^n .......... 261
Johan Wallén (Helsinki University of Technology)

Block Ciphers and Systems of Quadratic Equations .......... 274
Alex Biryukov and Christophe De Cannière (Katholieke Universiteit Leuven)
New Designs

Turing: A Fast Stream Cipher .......... 290
Gregory G. Rose and Philip Hawkes (Qualcomm Australia)
Rabbit: A New High-Performance Stream Cipher .......... 307
Martin Boesgaard, Mette Vesterager, Thomas Pedersen, Jesper Christiansen, and Ove Scavenius (CRYPTICO)

Helix: Fast Encryption and Authentication in a Single Cryptographic Primitive .......... 330
Niels Ferguson (MacFergus), Doug Whiting (HiFn), Bruce Schneier (Counterpane Internet Security), John Kelsey, Stefan Lucks (Universität Mannheim), and Tadayoshi Kohno (UCSD)

PARSHA-256 – A New Parallelizable Hash Function and a Multithreaded Implementation .......... 347
Pinakpani Pal and Palash Sarkar (Indian Statistical Institute)
Modes of Operation

Practical Symmetric On-Line Encryption .......... 362
Pierre-Alain Fouque, Gwenaëlle Martinet, and Guillaume Poupard (DCSSI Crypto Lab)

The Security of "One-Block-to-Many" Modes of Operation .......... 376
Henri Gilbert (France Télécom)
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
Cryptanalysis of IDEA-X/2

Håvard Raddum
Dept. of Informatics, The University of Bergen, Norway

Abstract. IDEA is a 64-bit block cipher with a 128-bit key designed by J. Massey and X. Lai. At FSE 2002 a slightly modified version called IDEA-X was attacked using multiplicative differentials. In this paper we present a less modified version of IDEA we call IDEA-X/2, and an attack on this cipher. This attack also works on IDEA-X, and improves on the attack presented at FSE 2002.

Keywords: Cryptography, block ciphers, differential cryptanalysis, IDEA.
1 Introduction
The block cipher PES (Proposed Encryption Standard) was introduced at Eurocrypt in 1990 [1]. When differential cryptanalysis [2] became known in 1991, the algorithm was changed and renamed IPES (Improved PES). The cipher later became known as IDEA (International Data Encryption Algorithm), and is today used in many cryptographic components. IDEA has been extensively cryptanalysed, but remains unbroken. We briefly mention some of this work. In 1993, 2.5 rounds of IDEA were attacked with differential cryptanalysis [3]. At CRYPTO the same year, large classes of weak keys due to the simple key schedule were presented [4]. At EUROCRYPT 1997, 3- and 3.5-round versions of IDEA were broken using a differential-linear attack and a truncated differential attack [5]. Larger classes of weak keys were demonstrated at EUROCRYPT 1998 [6]. At FSE 1999 impossible differentials were used to attack 4.5 rounds of IDEA [7], and at SAC 2002 attacks on IDEA for up to four rounds were improved [8]. At FSE 2002 multiplicative differentials were used to attack a slightly modified version of IDEA called IDEA-X [9]. We show in this paper that there exists a better attack on IDEA-X, and that this attack also works on a less modified version of IDEA we have chosen to call IDEA-X/2 (read as "idea x half").

The paper is organised as follows. In Section 2 we give a brief description of IDEA and its variants, in Section 3 we build the differential characteristic used to attack IDEA-X/2, in Section 4 we show how to find the subkeys used in the output transformation, and we conclude in Section 5.
2 Description of IDEA
T. Johansson (Ed.): FSE 2003, LNCS 2887, pp. 1–8, 2003.
© International Association for Cryptologic Research 2003

IDEA operates on blocks of 64 bits, using a 128-bit key. The cipher consists of several applications of three group operations ⊕, ⊞ and ⊙. Each operation joins
together two words of 16 bits. The operation ⊕ is bitwise XOR, ⊞ is addition modulo 2^16, and ⊙ is multiplication modulo 2^16 + 1, where the all-zero word is treated as the element 2^16. IDEA has eight rounds, followed by an output transformation. One round of IDEA and the output transformation is shown in the figure below.
[Fig. 1. Structure of IDEA: one round (key mixing with subkeys Z_1^(1), ..., Z_4^(1), followed by the MA-structure with Z_5^(1), Z_6^(1)), seven additional rounds, and the output transformation with subkeys Z_1^(9), ..., Z_4^(9).]
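The three group operations can be sketched in Python as follows (a minimal sketch; the function names are ours, and ⊙ follows the stated convention that the all-zero word represents the element 2^16):

```python
MASK = 0xFFFF
MOD = (1 << 16) + 1  # 65537

def xor(a, b):
    """The operation ⊕: bitwise XOR of two 16-bit words."""
    return a ^ b

def add(a, b):
    """The operation ⊞: addition modulo 2^16."""
    return (a + b) & MASK

def mul(a, b):
    """The operation ⊙: multiplication modulo 2^16 + 1,
    where the all-zero word is treated as the element 2^16."""
    a = a or (1 << 16)
    b = b or (1 << 16)
    return ((a * b) % MOD) & MASK  # 2^16 maps back to the all-zero word
```

For example, `mul(0x0002, 0x8001)` returns 1, since 2 · 32769 = 65538 ≡ 1 (mod 65537).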
The security of IDEA lies in the fact that no two of the three group operations are compatible, in the sense that the distributive law does not hold. The designers have also made sure that any two contiguous group operations in IDEA are never the same.

Z_i^(r) is subkey i used in round r, where the output transformation counts as the ninth round. Each subkey is a 16-bit word, and a total of 52 subkeys are needed. They are generated as follows. The user selects a 128-bit master key,
viewed as eight 16-bit words. The first 8 subkeys are taken as these 8 words, from left to right. Then the master key is cyclically rotated 25 positions to the left, and the resulting eight 16-bit words are taken as the next subkeys, and so on. The order the subkeys are taken in is Z_1^(1), Z_2^(1), ..., Z_6^(1), Z_1^(2), ..., Z_6^(2), ..., Z_4^(9).
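The schedule just described can be sketched directly (the function name is ours; the master key is taken as a 128-bit integer, most significant word first):

```python
def idea_subkeys(master_key):
    """Derive the 52 IDEA subkeys from a 128-bit master key (sketch)."""
    subkeys = []
    k = master_key
    while len(subkeys) < 52:
        # split the current 128-bit value into eight 16-bit words, left to right
        for i in range(8):
            if len(subkeys) < 52:
                subkeys.append((k >> (112 - 16 * i)) & 0xFFFF)
        # cyclically rotate the 128-bit value 25 positions to the left
        k = ((k << 25) | (k >> 103)) & ((1 << 128) - 1)
    return subkeys
```

The loop runs seven times: six full batches of eight words plus the final four words for the output transformation (6 · 8 + 4 = 52).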
2.1 IDEA-X and IDEA-X/2
In [9], a variant called IDEA-X is attacked. In IDEA-X, each ⊞ except for the two in the output transformation is changed to an ⊕. The authors then show that for 2^112 of the keys there exists a multiplicative differential characteristic over eight rounds that holds with probability 2^-32.

In this paper we consider IDEA-X/2, where we only change half of the ⊞'s in one round to ⊕'s. In IDEA-X/2 only the ⊞'s where Z_2^(r) and Z_3^(r) are inserted are changed to ⊕'s; the MA-structure is left unchanged.
3 Building a Differential Characteristic

3.1 The Groups Z_{2^16} and GF(2^16 + 1)*
The basis of our analysis comes from the fact that both Z_{2^16} and GF(2^16 + 1)* are cyclic groups of the same order, and therefore isomorphic (see [10]). Here we establish this isomorphism as follows. Let g_0 be a primitive element of GF(2^16 + 1)*, and define g_i = g_{i-1}^2 for i = 1, ..., 15. Then each element a in GF(2^16 + 1)* can be written uniquely as

a = g_15^{x_15} g_14^{x_14} ··· g_0^{x_0},
where each x_i ∈ {0, 1}. For simpler notation we will write this as a = g^x, where x is the 16-bit word x_15 x_14 ... x_0. Let φ be the map from GF(2^16 + 1)* to Z_{2^16} defined by φ(a) = x, where a = g^x. We show that φ is an isomorphism. The identity element of GF(2^16 + 1)* is 1, and the identity element of Z_{2^16} is 0. Since 1 = g^0 we have φ(1) = 0. Clearly, φ is one-to-one. Let a = g^x and b = g^y be two elements of GF(2^16 + 1)*. Then

a ⊙ b = g_15^{x_15} g_15^{y_15} ··· g_0^{x_0} g_0^{y_0}.
If at least one of x_i, y_i is 0 then g_i^{x_i} g_i^{y_i} = g_i^{x_i + y_i}, with x_i + y_i ∈ {0, 1}. If x_i = y_i = 1 we get g_i^1 g_i^1 = g_{i+1}^1 g_i^0, that is, we get a "carry". Note that g_15 = −1, so if x_15 = y_15 = 1 we have g_15^1 g_15^1 = g_15^0, which means the carry is shifted out of the computation. From this we see that a ⊙ b = g^{x ⊞ y}, showing that φ(a ⊙ b) = φ(a) ⊞ φ(b), and that φ respects the group operations. This shows that φ is an isomorphism.
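Since a = g^x means x is the discrete logarithm of a to the base g, the map φ can be tabulated directly. A quick sketch (using g_0 = 3, the primitive element chosen in the next section) that verifies the homomorphism property on a few elements:

```python
P = (1 << 16) + 1  # 65537

# phi is the discrete-log table to the base g = 3: phi[3^x mod P] = x
phi = {}
a = 1
for x in range(P - 1):
    phi[a] = x
    a = (a * 3) % P

# phi(a ⊙ b) = phi(a) ⊞ phi(b): multiplication mod 2^16 + 1
# becomes addition mod 2^16 under phi
for a in (2, 1234, 65535, 65536):
    for b in (3, 777, 65536):
        assert phi[(a * b) % P] == (phi[a] + phi[b]) % (1 << 16)
```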
[Fig. 2. Isomorphic diagrams: a ⊙ b computed directly, or as φ^{-1}(φ(a) ⊞ φ(b)).]
3.2 Differential Properties of φ
In a cryptographic setting, we may regard φ as a 16-bit S-box. The above analysis shows that a ⊙ b = φ^{-1}(φ(a) ⊞ φ(b)). In other words, the two diagrams of Fig. 2 may be used interchangeably. We have computed the S-box φ explicitly using g_0 = 3 as a primitive element, and checked its differential properties. In the first key-mixing layer in each round, Z_1^(r) and Z_4^(r) are mixed with two of the words using ⊙. Using the isomorphic diagram, we may first send the keys and the two words through φ, and then combine using ⊞. In the analysis of the differential properties we should therefore let the output differences of φ be with respect to ⊟, subtraction modulo 2^16. We found that if we let the input differences to φ be differences with respect to ⊕, then the following differential holds with probability 1/2:

δ_⊕ = FFFD_x --φ--> δ_⊟ = 2^15.

The difference δ_⊟ is preserved through the key-addition. Through φ^{-1} we get the reversed differential with probability 1/2: δ_⊟ --φ^{-1}--> δ_⊕. These may be combined into the differential δ_⊕ --⊙ Z_j^(r)--> δ_⊕ that, on average over all keys Z_j^(r), holds with probability 1/4 (j ∈ {1, 4}). For each key Z_j^(r), we have checked the exact probability of this differential. The keys 1 and −1 are known to be weak under ⊙; for these the differential holds with probability 1 and 0.5, respectively. The smallest probability that occurs (for the keys 3 and −3 with g_0 = 3) is greater than 0.166.., and the probability lies in the range 0.23–0.27 for 2^16 − 22 of the possible values of Z_j^(r).
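The first differential can be checked empirically (a sketch; g_0 = 3 as above, and 16-bit words are mapped to group elements with the all-zero word standing for 2^16):

```python
P = (1 << 16) + 1  # 65537

# discrete-log table to the base 3, i.e. the S-box phi
phi = {}
a = 1
for x in range(P - 1):
    phi[a] = x
    a = (a * 3) % P

def elem(w):
    """Map a 16-bit word to its group element (word 0 stands for 2^16)."""
    return w or (1 << 16)

# count words w for which the XOR input difference FFFD maps to the
# subtractive output difference 2^15 through phi
hits = sum(
    (phi[elem(w ^ 0xFFFD)] - phi[elem(w)]) % (1 << 16) == (1 << 15)
    for w in range(1 << 16)
)
print(hits)  # 2^15 of the 2^16 words, i.e. probability 1/2
```

Note that 2^15 is its own negative modulo 2^16, so the direction of the subtraction does not matter here.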
3.3 Differential Characteristic of IDEA-X/2
Let the 64-bit cipher block be denoted by (w1 , w2 , w3 , w4 ), where each wi is a 16-bit word referred to as word i.
All differences in the characteristic are with respect to ⊕, and we denote δ = FFFD_x. Let a pair of texts at the beginning of one round have difference (δ, δ, δ, δ). Words 2 and 3 will have difference δ after XOR with Z_2^(r) and Z_3^(r). Each of the words 1 and 4 will have difference δ after multiplication with Z_1^(r) and Z_4^(r) with probability 1/4. Thus the difference after the key-mixing layer in the beginning of the round is (δ, δ, δ, δ) with probability 2^-4.

Since the differences in words 1 and 3 are the same and the differences in words 2 and 4 are the same, the two input differences to the MA-structure are both 0. Then the output differences of the MA-structure will be 0, so the difference of the blocks after the XOR with the outputs from the MA-structure will be (δ, δ, δ, δ). Since words 2 and 3 have equal differences, the difference of the blocks after the swap at the end of the round will also be (δ, δ, δ, δ). This one-round characteristic may be concatenated with itself 8 times to form the 8-round differential characteristic

(δ, δ, δ, δ) --8 rounds--> (δ, δ, δ, δ)

that holds with probability (2^-4)^8 = 2^-32.

The probability of this characteristic may be increased by a factor four as follows. In the first round Z_1^(1) and Z_4^(1) are inserted using ⊙. We look at the alternative diagram for this operation, containing the S-boxes φ. Then we see that the first application of φ is done to words 1 and 4 of the plaintext block, before any key-material has been inserted. This means we can select the plaintext pairs such that words 1 and 4 will have difference δ_⊟ = 2^15 after the first φ, before φ(Z_1^(1)) and φ(Z_4^(1)) are inserted, with probability 1. Then the probability of the characteristic of the first round will be 2^-2 instead of 2^-4, and the overall probability of the 8-round characteristic will be 2^-30.
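The deterministic part of the one-round characteristic, the propagation through the MA-structure and the swap, can be sketched as follows (the round conventions follow the standard IDEA description; the variable names are ours):

```python
MASK = 0xFFFF
DELTA = 0xFFFD

def mul(a, b):
    """IDEA multiplication mod 2^16 + 1; the all-zero word represents 2^16."""
    a = a or (1 << 16)
    b = b or (1 << 16)
    return ((a * b) % ((1 << 16) + 1)) & MASK

def ma_and_swap(y1, y2, y3, y4, z5, z6):
    """MA-structure and middle-word swap of an IDEA round (sketch)."""
    u = mul(y1 ^ y3, z5)                    # first MA input: words 1 and 3
    v = mul(((y2 ^ y4) + u) & MASK, z6)     # second MA input: words 2 and 4
    w = (u + v) & MASK
    return y1 ^ v, y3 ^ v, y2 ^ w, y4 ^ w   # middle words swapped

# if all four words after the key mixing differ by DELTA, both MA inputs
# collide, so the output difference is again (DELTA,)*4 with probability 1
import random
y = [random.randrange(1 << 16) for _ in range(4)]
yp = [w ^ DELTA for w in y]
z5, z6 = random.randrange(1 << 16), random.randrange(1 << 16)
out = ma_and_swap(*y, z5, z6)
outp = ma_and_swap(*yp, z5, z6)
assert all(a ^ b == DELTA for a, b in zip(out, outp))
```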
4 Key Recovery
We select 2^32 pairs of plaintext with difference (δ, δ, δ, δ), and ask for the corresponding ciphertexts. A pair of plaintexts that has followed the characteristic is called a right pair, and a pair that has not followed the characteristic is called a wrong pair. We expect to have 4 right pairs among the 2^32 pairs.
4.1 Filtering out Wrong Pairs
Let c_i and c_i' be the i'th words of the ciphertexts in one pair. We compute what values (if any) Z_2^(9) and Z_3^(9) may have to make this pair a right pair. If this pair is a right pair we have (c_2 ⊟ Z_2^(9)) ⊕ (c_2' ⊟ Z_2^(9)) = δ. Two cases arise.

Case 1: The second least significant bits of (c_2 ⊟ Z_2^(9)) and (c_2' ⊟ Z_2^(9)) are both 0. Since (c_2 ⊟ Z_2^(9)) and (c_2' ⊟ Z_2^(9)) are otherwise bitwise complementary to each other, we have (c_2 ⊟ Z_2^(9)) ⊞ (c_2' ⊟ Z_2^(9)) = 2^16 − 3. This yields 2Z_2^(9) = 3 ⊞ c_2 ⊞ c_2',
which is possible only if exactly one of c_2 and c_2' is odd. In that case we get Z_2^(9) = (3 ⊞ c_2 ⊞ c_2') >> 1 or Z_2^(9) = ((3 ⊞ c_2 ⊞ c_2') >> 1) ⊞ 2^15.

Case 2: The second least significant bits of (c_2 ⊟ Z_2^(9)) and (c_2' ⊟ Z_2^(9)) are both 1. In this case we have (c_2 ⊟ Z_2^(9)) ⊞ (c_2' ⊟ Z_2^(9)) = 1. This gives 2Z_2^(9) = 2^16 − 1 ⊞ c_2 ⊞ c_2', again only possible when exactly one of c_2 and c_2' is odd. In that case we get Z_2^(9) = (2^16 − 1 ⊞ c_2 ⊞ c_2') >> 1 or Z_2^(9) = ((2^16 − 1 ⊞ c_2 ⊞ c_2') >> 1) ⊞ 2^15.

When exactly one of c_2 and c_2' is odd, we don't know if we are in case 1 or 2, so four values of Z_2^(9) will be suggested. The reasoning above also applies to c_3 and c_3', so when exactly one of c_3 and c_3' is odd, we will get four values of Z_3^(9) suggested. The probability that, in a random pair, exactly one of c_2 and c_2' is odd, and exactly one of c_3 and c_3' is odd, is 1/4. When we filter on this condition about 2^30 of the pairs will remain.

Next we focus on the words c_1 and c_1' in a pair. For the multiplication with Z_1^(9) we use the alternative diagram containing the S-boxes φ and φ^{-1}. We have examined how the 2^16 pairs with input difference δ behave through φ. It turns out that 2^15 pairs get output difference 2^15 (with respect to ⊟), and that there are 2^15 other possible output differences, each with a unique pair producing it. Now we go backwards through the last φ^{-1} and look at the difference φ(c_1) ⊟ φ(c_1'). If this difference is not one of the possible output differences of φ receiving input difference δ, we can throw away this pair as a wrong pair. When φ receives input difference δ there are 2^15 + 1 possible output differences, so a random pair survives this test with probability 1/2. The same reasoning applies for c_4 and c_4', so the probability of both words 1 and 4 surviving this test is 1/4. After performing this test we expect to be left with 2^28 pairs, each one with the possibility of being a right pair.
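The two cases can be folded into a small helper that, given the second words of the two ciphertexts in a pair, returns the candidate values of Z_2^(9) (a sketch; the function name is ours):

```python
MASK = 0xFFFF

def candidate_z2(c2, c2p):
    """Candidate Z2^(9) values for one ciphertext pair (cases 1 and 2)."""
    if (c2 ^ c2p) & 1 == 0:      # need exactly one of c2, c2' odd
        return []
    cands = []
    for s in (3, (1 << 16) - 1):          # case 1 and case 2
        t = (s + c2 + c2p) & MASK         # t = 2 * Z2 mod 2^16 (always even here)
        half = t >> 1
        cands += [half, half ^ 0x8000]    # the two halves of t modulo 2^16
    return cands
```

As a sanity check: take Z_2^(9) = 0x1234 and a right-pair value y_2 = 0x0500 (second least significant bit 0, so case 1 applies) with y_2' = y_2 ⊕ δ; adding the subkey to both gives a pair for which `candidate_z2` suggests 0x1234 among its four values.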
4.2 Finding the Subkey (Z_1^(9), Z_2^(9), Z_3^(9), Z_4^(9))
Each of the remaining pairs has at least one subkey that would make it a possible right pair. For each pair, these subkeys are suggested as the right subkeys. The correct subkey is suggested for each right pair, and all wrong keys are suggested more or less at random. We proceed to count how many keys each pair suggests.

Each pair suggests 4 values of Z_2^(9) and 4 values of Z_3^(9). These values can be combined in 16 different ways to produce a possible (Z_2^(9), Z_3^(9))-value for the subkey. By examining the key schedule, we find that Z_2^(9) and Z_3^(9) completely determine Z_4^(1). Letting p_4 and p_4' be the fourth words of the plaintexts in one pair, we check for each of the 16 values of Z_4^(1) if (p_4 ⊙ Z_4^(1)) ⊕ (p_4' ⊙ Z_4^(1)) = δ. If this doesn't hold, and the pair we are examining is a right pair, then the value of Z_4^(1) (and hence (Z_2^(9), Z_3^(9))) must be wrong and can be discarded. Because of the special way we have chosen p_4 and p_4' (we have φ(p_4) ⊟ φ(p_4') = 2^15 with probability 1), the probability of passing this test is 1/2, so we expect that 8 of the initial 16 possible (Z_2^(9), Z_3^(9))-values remain.
The number of (Z_1^(9), Z_4^(9))-values suggested for one pair depends on whether φ(c_1) ⊟ φ(c_1') or φ(c_4) ⊟ φ(c_4') is 2^15. Whenever φ(c_1) ⊟ φ(c_1') = 2^15, this pair will suggest 2^15 values of Z_1^(9). When φ(c_1) ⊟ φ(c_1') ≠ 2^15 we will get exactly one value of Z_1^(9) suggested, likewise for Z_4^(9). We expect to have four right pairs, each with difference δ in words 1 and 4 just before φ in the output transformation. The probability of getting difference 2^15 after φ is 1/2 for each word, so we expect that one of the right pairs will suggest 2^15 values for both Z_1^(9) and Z_4^(9), a total of 2^30 values for (Z_1^(9), Z_4^(9)). The probability that a random pair after filtering has φ(c_1) ⊟ φ(c_1') = φ(c_4) ⊟ φ(c_4') = 2^15 is 2^-30, so we don't expect any other pairs to have this property, since we are left with only 2^28 pairs.

The probability that a random pair after filtering has φ(c_1) ⊟ φ(c_1') = 2^15 is 2^-15, so we expect to find 2^13 pairs with this property. These pairs will suggest 2^15 values for Z_1^(9) and one value for Z_4^(9) each. The same goes for the fourth word; we expect 2^13 pairs suggesting one value for Z_1^(9) and 2^15 values for Z_4^(9). All other pairs will suggest exactly one value for (Z_1^(9), Z_4^(9)).

Each of the values suggested from one pair for (Z_1^(9), Z_4^(9)) must be coupled with the eight values for (Z_2^(9), Z_3^(9)), so the total number of subkeys suggested is expected to be

8(1 · 2^30 + 2^13 · 2^15 + 2^13 · 2^15 + (2^28 − 2^14) · 1) ≈ 2^34.

The correct subkey is expected to be suggested 4 times, and the other keys are expected to be distributed more or less at random over the other 2^64 possible values. It is highly unlikely that a wrong key should be suggested four times, so we take the most suggested key as the correct subkey.
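The count above can be checked directly; the exact value is about 2^33.81, which rounds to the 2^34 quoted:

```python
from math import log2

# 8 surviving (Z2, Z3)-values, coupled with the (Z1, Z4)-value counts above
total = 8 * (1 * 2**30 + 2**13 * 2**15 + 2**13 * 2**15 + (2**28 - 2**14) * 1)
print(f"2^{log2(total):.2f}")  # about 2^33.81
```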
4.3 Finding the Rest of the Key
By keeping track of which pairs suggest which keys, the right pairs will be revealed. The remaining 64 bits of the master key may be found by further analysis using the right pairs. Since we know the differences in these pairs at any stage of the encryption, we may start at the plaintext or ciphertext side and let these pairs suggest values for the (partially) unknown subkeys. We will not go into details here, but this strategy should work faster than searching exhaustively for the remaining 64 bits.
5 Conclusion
We have shown how to use the isomorphism between the groups Z_{2^16} and GF(2^16 + 1)* as a basis for a differential attack on IDEA-X/2 that works without any conditions on the subkeys. This attack also works on IDEA-X, and gives an improvement over the attack found in [9]. This shows that the security of IDEA
depends on the fact that ⊞ and not ⊕ is used when inserting the subkeys Z_2^(r) and Z_3^(r).

A 4-round characteristic has been implemented, to check that theory and practice are consistent when the round keys are not independent, but generated by the key schedule. The implementation also incorporated the first round trick, bringing the probability of the differential to 2^-14. One thousand keys were generated at random, and for each key 2^20 pairs of plaintext were encrypted, and the number of right pairs recorded. The expected number of right pairs is 64; the actual number of right pairs produced by the keys ranged from 33 to 131. Thus the analysis (assuming independent round keys) seems to be consistent with the key schedule of IDEA.
References

1. X. Lai and J. Massey. A Proposal for a New Block Encryption Standard. Advances in Cryptology - EUROCRYPT '90, LNCS 473, pp. 389-404, Springer-Verlag, 1991.
2. E. Biham and A. Shamir. Differential Cryptanalysis of the Data Encryption Standard. Springer-Verlag, 1993.
3. W. Meier. On the security of the IDEA block cipher. Advances in Cryptology - EUROCRYPT '93, LNCS 765, pp. 371-385, Springer-Verlag, 1994.
4. J. Daemen, R. Govaerts and J. Vandewalle. Weak Keys for IDEA. Advances in Cryptology - CRYPTO '93, LNCS 773, pp. 224-231, Springer-Verlag, 1994.
5. J. Borst, L. Knudsen and V. Rijmen. Two Attacks on Reduced IDEA. Advances in Cryptology - EUROCRYPT '97, LNCS 1233, pp. 1-13, Springer-Verlag, 1997.
6. P. Hawkes. Differential-Linear Weak Key Classes of IDEA. Advances in Cryptology - EUROCRYPT '98, LNCS 1403, pp. 112-126, Springer-Verlag, 1998.
7. E. Biham, A. Biryukov and A. Shamir. Miss in the Middle Attacks on IDEA and Khufu. Fast Software Encryption '99, LNCS 1636, pp. 124-138, Springer-Verlag, 1999.
8. H. Demirci. Cryptanalysis of IDEA using Exact Distributions. Selected Areas in Cryptography 2002, preproceedings.
9. N. Borisov, M. Chew, R. Johnson and D. Wagner. Multiplicative Differentials. Fast Software Encryption 2002, LNCS 2365, pp. 17-33, Springer-Verlag, 2002.
10. D. R. Stinson. Cryptography: Theory and Practice. CRC Press, 1995, p. 179.
Differential-Linear Cryptanalysis of Serpent

Eli Biham¹, Orr Dunkelman¹, and Nathan Keller²

¹ Computer Science Department, Technion, Haifa 32000, Israel
{biham,orrd}@cs.technion.ac.il
² Mathematics Department, Technion, Haifa 32000, Israel
[email protected]

Abstract. Serpent is a 128-bit SP-Network block cipher consisting of 32 rounds with variable key length (up to 256 bits long). It was selected as one of the 5 AES finalists. The best known attack so far is a linear attack on an 11-round reduced variant. In this paper we apply the enhanced differential-linear cryptanalysis to Serpent. The resulting attack is the best known attack on 11-round Serpent. It requires 2^125.3 chosen plaintexts and has time complexity of 2^139.2. We also present the first known attack on 10-round 128-bit key Serpent. These attacks demonstrate the strength of the enhanced differential-linear cryptanalysis technique.
1 Introduction
Serpent [1] is one of the 5 AES [13] finalists. It has a 128-bit block size and accepts key sizes of any length between 0 and 256 bits. Serpent is an SP-Network with 32 rounds and 4-bit to 4-bit S-boxes. Since its introduction in 1997, Serpent has withstood a great deal of cryptanalytic efforts.

In [8] a modified variant of Serpent, in which the linear transformation of the round function was modified into a permutation, was analyzed. The change weakens Serpent, as it allows one active S-box to activate only one S-box in the consecutive round. In Serpent, this is impossible, as one active S-box leads to at least two active S-boxes in the following round. The analysis of the modified variant presents an attack against up to 35 rounds of the cipher.

In [9] a 256-bit key variant of 9-round Serpent¹ is attacked using the amplified boomerang attack. The attack uses two short differentials: one for rounds 1–4 and one for rounds 5–7. These two differentials are combined to construct a 7-round amplified boomerang distinguisher, which is then used to mount a key recovery attack on 9-round Serpent. The attack requires 2^110 chosen plaintexts and its time complexity is 2^252 9-round Serpent encryptions.

In [4] the rectangle attack is applied to attack 256-bit key 10-round Serpent. The attack is based on an 8-round distinguisher. The distinguisher treats those 8 rounds as composed of two sub-ciphers: rounds 1–4 and rounds 5–8. In each
* The work described in this paper has been supported by the European Commission through the IST Programme under Contract IST-1999-12324.
¹ We use n-round Serpent when we mean a reduced version of Serpent with n rounds.
T. Johansson (Ed.): FSE 2003, LNCS 2887, pp. 9–21, 2003.
© International Association for Cryptologic Research 2003
sub-cipher the attack exploits many differentials. These 4-round differentials are combined to create an 8-round rectangle distinguisher. The attack requires 2^126.8 chosen plaintexts and 2^217 memory accesses², which are equivalent to 2^208.8 10-round Serpent encryptions³. The 10-round rectangle attack was improved in [6]; the improved attack requires 2^126.3 chosen plaintexts, with time complexity of 2^173.8 memory accesses (2^165 10-round Serpent encryptions). Thus, using the rectangle attack, it is also possible to attack 192-bit key 10-round Serpent. A similar boomerang attack, which requires almost the entire code book, is also presented in [6].
We organize this paper as follows: In Section 2 we give the basic description of Serpent. In Section 3 we briefly describe the differential-linear technique. In Section 4 we present the differential-linear attack on 11-round Serpent and on 10-round Serpent. We summarize our results and compare them with previous results on Serpent in Section 5. In the appendices we describe the differential characteristic and the linear approximation which are used in the attacks.
2 A Description of Serpent
In [1] Anderson, Biham and Knudsen presented the block cipher Serpent. It has a block size of 128 bits and accepts key sizes of 0–256 bits. Serpent is an SP-network block cipher with 32 rounds. Each round is composed of key mixing, a layer of S-boxes and a linear transformation. There is an equivalent bitsliced description which makes the cipher more efficient and easier to describe.

² In [4] a different number is quoted, but this mistake was noted in [6], and the true time complexity of the algorithm was computed there.
³ The conversion was done according to the best performance figures, presented in [12], assuming one memory access is equivalent to 3 cycles.
Differential-Linear Cryptanalysis of Serpent
In our description we adopt the notations of [1] in the bitsliced version. The intermediate value of round i is denoted by B̂_i (a 128-bit value). The rounds are numbered from 0 to 31. Each B̂_i is composed of four 32-bit words X_0, X_1, X_2, X_3. Serpent has 32 rounds, and a set of eight 4-bit to 4-bit S-boxes. Each round function R_i (i ∈ {0, ..., 31}) uses a single S-box 32 times in parallel. For example, R_0 uses S_0, 32 copies of which are applied in parallel. Thus, the first copy of S_0 takes the least significant bits from X_0, X_1, X_2, X_3 and returns the output to the same bits. This can be implemented as a boolean expression of the 4 words. The set of eight S-boxes is used four times. S_0 is used in round 0, S_1 is used in round 1, etc. After using S_7 in round 7 we use S_0 again in round 8, then S_1 in round 9, and so on. In the last round (round 31) the linear transformation is omitted and another key is XORed. The cipher may be formally described by the following equations:

B̂_0 := P
B̂_{i+1} := R_i(B̂_i)    i = 0, ..., 31
C := B̂_32
where

R_i(X) = LT(Ŝ_i(X ⊕ K̂_i))        i = 0, ..., 30
R_i(X) = Ŝ_i(X ⊕ K̂_i) ⊕ K̂_32    i = 31
where Ŝ_i is the application of the S-box S_{i mod 8} thirty-two times in parallel, and LT is the linear transformation. Given the four 32-bit words X_0, X_1, X_2, X_3 := Ŝ_i(B̂_i ⊕ K̂_i), they are linearly mixed by the following linear transformation:

X_0 := X_0 <<< 13
X_2 := X_2 <<< 3
X_1 := X_1 ⊕ X_0 ⊕ X_2
X_3 := X_3 ⊕ X_2 ⊕ (X_0 << 3)
X_1 := X_1 <<< 1
X_3 := X_3 <<< 7
X_0 := X_0 ⊕ X_1 ⊕ X_3
X_2 := X_2 ⊕ X_3 ⊕ (X_1 << 7)
X_0 := X_0 <<< 5
X_2 := X_2 <<< 22

– If |M| = mn for some m > 0, then XCBC computes exactly the same as the CBC MAC, except for XORing an n-bit key K_2 before encrypting the last block.
– Otherwise, 10^i padding (i = n−1−|M| mod n) is appended to M and XCBC computes exactly the same as the CBC MAC for the padded message, except for XORing another n-bit key K_3 before encrypting the last block.

However, a drawback of XCBC is that it requires three keys, (k + 2n) bits in total. Finally, Kurosawa and Iwata proposed Two-key CBC MAC (TMAC) [9]. TMAC takes two keys, (k + n) bits in total: a block cipher key K_1 and an n-bit key K_2. TMAC is obtained from XCBC by replacing (K_2, K_3) with (K_2 · u, K_2), where u is some non-zero constant and "·" denotes multiplication in GF(2^n).

1.2 Our Contribution
In this paper, we present One-key CBC MAC (OMAC) and prove its security for arbitrary length messages. OMAC takes only one key, K, of a block cipher E. The key length, k bits, is the minimum possible, since the underlying block cipher must have a k-bit key K anyway. See Table 1 for a comparison with XCBC and TMAC (see Appendix A for a detailed comparison).

Table 1. Comparison of key length.

             XCBC [3]        TMAC [9]      OMAC (this paper)
key length   (k + 2n) bits   (k + n) bits  k bits
OMAC is a generic name for OMAC1 and OMAC2. OMAC1 is obtained from XCBC by replacing (K_2, K_3) with (L · u, L · u^2) for some non-zero constant u in GF(2^n), where L is given by L = E_K(0^n). OMAC2 is similarly obtained by using (L · u, L · u^{-1}). We can compute L · u, L · u^{-1} and L · u^2 = (L · u) · u efficiently by one shift and one conditional XOR from L, L and L · u, respectively. OMAC1 (resp. OMAC2) is described as follows (see Fig. 2).
OMAC: One-Key CBC MAC
Tetsu Iwata and Kaoru Kurosawa

Fig. 2. Illustration of OMAC1. Note that L = E_K(0^n). OMAC2 is obtained by replacing L · u^2 with L · u^{-1} in the right figure.
– If |M| = mn for some m > 0, then OMAC computes exactly the same as the CBC MAC, except for XORing L · u before encrypting the last block.
– Otherwise, 10^i padding (i = n−1−|M| mod n) is appended to M and OMAC computes exactly the same as the CBC MAC for the padded message, except for XORing L · u^2 (resp. L · u^{-1}) before encrypting the last block.

Note that in TMAC, K_2 is a part of the key, while in OMAC, L is not a part of the key and is generated from K. This saving of the key length makes the security proof of OMAC substantially harder than that of TMAC, as shown below. In Fig. 2, suppose that M[1] = 0^n. Then the output of the first E_K is L. The same L always appears again at the last block. In general, such reuse of L would get one into trouble in the security proof. (In OCB mode [13] and PMAC [5], L = E_K(0^n) is also used as a key of a universal hash function. However, there L appears as an output of some internal block cipher only with negligible probability.) Nevertheless we prove that OMAC is as secure as XCBC, where the security analysis is in the concrete-security paradigm [1]. Furthermore, OMAC has all the other nice properties which XCBC (and TMAC) have. That is, the domain of OMAC is {0,1}*, and it requires one key scheduling of the underlying block cipher E and max{1, ⌈|M|/n⌉} block cipher invocations.

1.3 Other Related Work
Jaulmes, Joux and Valette proposed RMAC [8], which is an extension of EMAC. RMAC encrypts the CBC MAC value with K_2 ⊕ R, where R is an n-bit random string which is a part of the tag. That is,

RMAC_{K_1,K_2}(M) = (E_{K_2 ⊕ R}(CBC_{K_1}(M)), R).

They showed that the security of RMAC is beyond the birthday paradox limit. (XCBC, TMAC and OMAC are secure up to the birthday paradox limit.)
2 Preliminaries

2.1 Notation

We use similar notation as in [13, 5]. For a set A, x ←R A means that x is chosen from A uniformly at random. If a, b ∈ {0,1}* are equal-length strings
then a ⊕ b is their bitwise XOR. If a, b ∈ {0,1}* are strings then a ◦ b denotes their concatenation. For simplicity, we sometimes write ab for a ◦ b if there is no confusion. For an n-bit string a = a_{n−1}···a_1a_0 ∈ {0,1}^n, let a<<1 = a_{n−2}···a_1a_0 0 denote the n-bit string which is a left shift of a by 1 bit, while a>>1 = 0 a_{n−1}···a_2a_1 denotes the n-bit string which is a right shift of a by 1 bit. If a ∈ {0,1}* is a string then |a| denotes its length in bits. For any bit string a ∈ {0,1}* such that |a| ≤ n, we let

pad_n(a) = a10^{n−|a|−1} if |a| < n, and pad_n(a) = a if |a| = n.   (1)

Define ||a||_n = max{1, ⌈|a|/n⌉}, where the empty string counts as one block. In pseudocode, we write "Partition M into M[1]···M[m]" as shorthand for "Let m = ||M||_n, and let M[1], ..., M[m] be bit strings such that M[1]···M[m] = M and |M[i]| = n for 1 ≤ i < m."

2.2 CBC MAC
The block cipher E is a function E : K_E × {0,1}^n → {0,1}^n, where each E(K, ·) = E_K(·) is a permutation on {0,1}^n, K_E is the set of possible keys and n is the block length. The CBC MAC [6, 7] is the simplest and most well-known algorithm to make a MAC from a block cipher E. Let M = M[1] ◦ M[2] ◦ ··· ◦ M[m] be a message string, where |M[1]| = |M[2]| = ··· = |M[m]| = n. Then CBC_K(M), the CBC MAC of M under key K, is defined as Y[m], where Y[i] = E_K(M[i] ⊕ Y[i−1]) for i = 1, ..., m and Y[0] = 0^n. Bellare, Kilian and Rogaway proved the security of the CBC MAC for fixed message length mn bits [1].
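The recurrence Y[i] = E_K(M[i] ⊕ Y[i−1]) can be sketched in Python. Here `enc` is a stand-in for E_K (a toy byte-wise permutation used purely for illustration, not a real block cipher), and n = 128:

```python
# CBC MAC over full n-bit blocks: Y[0] = 0^n, Y[i] = E_K(M[i] xor Y[i-1]),
# and the tag is Y[m]. `enc` may be any permutation of 16-byte blocks.
N_BYTES = 16  # n = 128 bits

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_mac(enc, message: bytes) -> bytes:
    assert message and len(message) % N_BYTES == 0  # full blocks only here
    y = bytes(N_BYTES)                              # Y[0] = 0^n
    for i in range(0, len(message), N_BYTES):
        y = enc(xor_bytes(message[i:i + N_BYTES], y))
    return y

def toy_enc(block: bytes) -> bytes:
    # placeholder permutation (byte-wise XOR with a constant); illustration only
    return bytes(b ^ 0x5C for b in block)
```

In a real instantiation `enc` would be AES under the key K; with the toy permutation above, cbc_mac(toy_enc, 0^n) is simply toy_enc(0^n).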
2.3 The Field with 2^n Points
We interchangeably think of a point a in GF(2^n) in any of the following ways: (1) as an abstract point in a field; (2) as an n-bit string a_{n−1}···a_1a_0 ∈ {0,1}^n; (3) as a formal polynomial a(u) = a_{n−1}u^{n−1} + ··· + a_1u + a_0 with binary coefficients. To add two points in GF(2^n), take their bitwise XOR. We denote this operation by a ⊕ b. To multiply two points, fix some irreducible polynomial f(u) having binary coefficients and degree n. To be concrete, choose the lexicographically first polynomial among the irreducible degree-n polynomials having a minimum number of coefficients. We list some indicated polynomials (see [10, Chapter 10] for other polynomials):

f(u) = u^64 + u^4 + u^3 + u + 1        for n = 64,
f(u) = u^128 + u^7 + u^2 + u + 1       for n = 128, and
f(u) = u^256 + u^10 + u^5 + u^2 + 1    for n = 256.
To multiply two points a ∈ GF(2^n) and b ∈ GF(2^n), regard a and b as polynomials a(u) = a_{n−1}u^{n−1} + ··· + a_1u + a_0 and b(u) = b_{n−1}u^{n−1} + ··· + b_1u + b_0, form their product c(u) where one adds and multiplies coefficients in GF(2), and take the remainder when dividing c(u) by f(u). Note that it is particularly easy to multiply a point a ∈ {0,1}^n by u. For example, if n = 128, then

a · u = a<<1 if a_{127} = 0, and a · u = (a<<1) ⊕ 0^{120}10000111 otherwise.   (2)

Similarly,

a · u^{-1} = a>>1 if a_0 = 0, and a · u^{-1} = (a>>1) ⊕ 10^{120}1000011 otherwise.   (3)
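A short sketch of (2) and (3) for n = 128, treating a as a 128-bit integer whose bit i is the coefficient a_i; the reduction constants 0^120 10000111 and 1 0^120 1000011 are the integers 0x87 and (1 << 127) | 0x43:

```python
# Multiplication by u and by u^{-1} in GF(2^128), f(u) = u^128 + u^7 + u^2 + u + 1.
MASK = (1 << 128) - 1
R_U = 0x87                    # 0^120 10000111 : u^7 + u^2 + u + 1
R_UINV = (1 << 127) | 0x43    # 1 0^120 1000011 : u^127 + u^6 + u + 1

def times_u(a: int) -> int:
    # equation (2): one left shift and one conditional XOR, keyed on bit a_127
    return ((a << 1) & MASK) ^ (R_U if a >> 127 else 0)

def times_u_inv(a: int) -> int:
    # equation (3): one right shift and one conditional XOR, keyed on bit a_0
    return (a >> 1) ^ (R_UINV if a & 1 else 0)
```

Since the two maps are inverse bijections, times_u_inv(times_u(a)) == a for every a, which gives a quick self-check.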
3 Basic Construction
In this section, we show a basic construction of the OMAC-family. The OMAC-family is defined by a block cipher E : K_E × {0,1}^n → {0,1}^n, an n-bit constant Cst, a universal hash function H : {0,1}^n × X → {0,1}^n, and two distinct constants Cst1, Cst2 ∈ X, where X is the finite domain of H. H, Cst1 and Cst2 must satisfy the following conditions, while Cst is arbitrary. We write H_L(·) for H(L, ·).

1. For any y ∈ {0,1}^n, the number of L ∈ {0,1}^n such that H_L(Cst1) = y is at most ε_1 · 2^n for some sufficiently small ε_1.
2. For any y ∈ {0,1}^n, the number of L ∈ {0,1}^n such that H_L(Cst2) = y is at most ε_2 · 2^n for some sufficiently small ε_2.
3. For any y ∈ {0,1}^n, the number of L ∈ {0,1}^n such that H_L(Cst1) ⊕ H_L(Cst2) = y is at most ε_3 · 2^n for some sufficiently small ε_3.
4. For any y ∈ {0,1}^n, the number of L ∈ {0,1}^n such that H_L(Cst1) ⊕ L = y is at most ε_4 · 2^n for some sufficiently small ε_4.
5. For any y ∈ {0,1}^n, the number of L ∈ {0,1}^n such that H_L(Cst2) ⊕ L = y is at most ε_5 · 2^n for some sufficiently small ε_5.
6. For any y ∈ {0,1}^n, the number of L ∈ {0,1}^n such that H_L(Cst1) ⊕ H_L(Cst2) ⊕ L = y is at most ε_6 · 2^n for some sufficiently small ε_6.

Remark 1. Properties 1 and 2 say that H_L(Cst1) and H_L(Cst2) are almost uniformly distributed. Property 3 is satisfied by AXU (almost XOR universal) hash functions [12]. Properties 4, 5 and 6 are new requirements introduced here.

The algorithm of the OMAC-family is described in Fig. 3 and illustrated in Fig. 4, where pad_n(·) is defined in (1). The key space K of the OMAC-family is K = K_E. It takes a key K ∈ K_E and a message M ∈ {0,1}*, and returns a string in {0,1}^n.
Algorithm OMAC-family_K(M)
    L ← E_K(Cst)
    Y[0] ← 0^n
    Partition M into M[1]···M[m]
    for i ← 1 to m − 1 do
        X[i] ← M[i] ⊕ Y[i−1]
        Y[i] ← E_K(X[i])
    X[m] ← pad_n(M[m]) ⊕ Y[m−1]
    if |M[m]| = n then X[m] ← X[m] ⊕ H_L(Cst1)
    else X[m] ← X[m] ⊕ H_L(Cst2)
    T ← E_K(X[m])
    return T

Fig. 3. Definition of OMAC-family.
Fig. 4. Illustration of OMAC-family.

4 Proposed Specification
In this section, we present two specifications of the OMAC-family: OMAC1 and OMAC2. We use OMAC as a generic name for OMAC1 and OMAC2. In OMAC1 we let Cst = 0^n, H_L(x) = L · x, Cst1 = u and Cst2 = u^2, where "·" denotes multiplication over GF(2^n). Equivalently, L = E_K(0^n), H_L(Cst1) = L · u and H_L(Cst2) = L · u^2. OMAC2 is the same as OMAC1 except that Cst2 = u^{-1} instead of Cst2 = u^2. Equivalently, L = E_K(0^n), H_L(Cst1) = L · u and H_L(Cst2) = L · u^{-1}. Note that L · u, L · u^{-1} and L · u^2 = (L · u) · u can be computed efficiently by one shift and one conditional XOR from L, L and L · u, respectively, as shown in (2) and (3). It is easy to see that the conditions in Sec. 3 are satisfied with ε_1 = ··· = ε_6 = 2^{-n} in OMAC1 and OMAC2. OMAC1 and OMAC2 are described in Fig. 5 and illustrated in Fig. 2.
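Putting the pieces together, a minimal OMAC1 sketch for n = 128. Here `enc` stands in for E_K (a toy permutation rather than a real block cipher), and `times_u` implements multiplication by u from (2):

```python
# OMAC1 sketch: CBC MAC with the last block masked by L*u (full final block)
# or L*u^2 (padded final block), where L = E_K(0^n).
N = 16                      # block size in bytes (n = 128 bits)
MASK = (1 << 128) - 1

def times_u(a: int) -> int:
    # multiply by u in GF(2^128); reduction constant 0x87, as in (2)
    return ((a << 1) & MASK) ^ (0x87 if a >> 127 else 0)

def pad(block: bytes) -> bytes:
    # pad_n from (1): append a 1 bit (0x80) then zeros, for short blocks
    return block + b"\x80" + b"\x00" * (N - len(block) - 1)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def omac1(enc, message: bytes) -> bytes:
    L = int.from_bytes(enc(bytes(N)), "big")       # L = E_K(0^n)
    lu, lu2 = times_u(L), times_u(times_u(L))      # L*u and L*u^2
    # split into blocks; the empty message still counts as one block
    blocks = [message[i:i + N] for i in range(0, len(message), N)] or [b""]
    y = bytes(N)
    for b in blocks[:-1]:
        y = enc(xor(b, y))
    last = blocks[-1]
    if len(last) == N:
        mask = lu.to_bytes(N, "big")
    else:
        last, mask = pad(last), lu2.to_bytes(N, "big")
    return enc(xor(xor(last, y), mask))

def toy_enc(block: bytes) -> bytes:
    # placeholder permutation for illustration only (NOT a secure cipher)
    return bytes(((b * 7 + 3) % 256) for b in block)
```

OMAC2 would differ only in replacing lu2 by L · u^{-1}, computed with the right-shift rule of (3).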
5 Security of OMAC-Family

5.1 Security Definitions
Let Perm(n) denote the set of all permutations on {0,1}^n. We say that P is a random permutation if P is randomly chosen from Perm(n). The security of a block cipher E can be quantified as Adv^prp_E(t, q), the maximum advantage that an adversary A can obtain when trying to distinguish E_K(·) (with a randomly chosen key K) from a random permutation P(·), when allowed computation time t and q queries to an oracle (which is either E_K(·) or P(·)). This advantage is defined as follows:

Adv^prp_E(A) = |Pr(K ←R K_E : A^{E_K(·)} = 1) − Pr(P ←R Perm(n) : A^{P(·)} = 1)|
Adv^prp_E(t, q) = max_A {Adv^prp_E(A)}

Algorithm OMAC1_K(M)
    L ← E_K(0^n)
    Y[0] ← 0^n
    Partition M into M[1]···M[m]
    for i ← 1 to m − 1 do
        X[i] ← M[i] ⊕ Y[i−1]
        Y[i] ← E_K(X[i])
    X[m] ← pad_n(M[m]) ⊕ Y[m−1]
    if |M[m]| = n then X[m] ← X[m] ⊕ L · u
    else X[m] ← X[m] ⊕ L · u^2
    T ← E_K(X[m])
    return T

Algorithm OMAC2_K(M)
    L ← E_K(0^n)
    Y[0] ← 0^n
    Partition M into M[1]···M[m]
    for i ← 1 to m − 1 do
        X[i] ← M[i] ⊕ Y[i−1]
        Y[i] ← E_K(X[i])
    X[m] ← pad_n(M[m]) ⊕ Y[m−1]
    if |M[m]| = n then X[m] ← X[m] ⊕ L · u
    else X[m] ← X[m] ⊕ L · u^{-1}
    T ← E_K(X[m])
    return T

Fig. 5. Description of OMAC1 and OMAC2.
We say that a block cipher E is secure if Adv^prp_E(t, q) is sufficiently small. Similarly, a MAC algorithm is a map F : K_F × {0,1}* → {0,1}^n, where K_F is a set of keys and we write F_K(·) for F(K, ·). We say that an adversary A^{F_K(·)} forges if A outputs (M, F_K(M)) where A never queried M to its oracle F_K(·). Then we define the advantage as

Adv^mac_F(A) = Pr(K ←R K_F : A^{F_K(·)} forges)
Adv^mac_F(t, q, µ) = max_A {Adv^mac_F(A)}
where the maximum is over all adversaries who run in time at most t, make at most q queries, and each query is at most µ bits. We say that a MAC algorithm is secure if Adv^mac_F(t, q, µ) is sufficiently small.

Let Rand(∗, n) denote the set of all functions from {0,1}* to {0,1}^n. This set is given a probability measure by asserting that a random element R of Rand(∗, n) associates to each string M ∈ {0,1}* a random string R(M) ∈ {0,1}^n. Then we define the advantage as

Adv^viprf_F(A) = |Pr(K ←R K_F : A^{F_K(·)} = 1) − Pr(R ←R Rand(∗, n) : A^{R(·)} = 1)|
Adv^viprf_F(t, q, µ) = max_A {Adv^viprf_F(A)}
where the maximum is over all adversaries who run in time at most t, make at most q queries, and each query is at most µ bits. We say that a MAC algorithm is pseudorandom if Adv^viprf_F(t, q, µ) is sufficiently small (viprf stands for Variable-length Input PseudoRandom Function).

Without loss of generality, adversaries are assumed to never ask a query outside the domain of the oracle, and to never repeat a query.

5.2 Theorem Statements
We first prove that the OMAC-family is pseudorandom if the underlying block cipher is a random permutation P (an information-theoretic result). This proof is much harder than the previous works because of the reuse of L, as explained in Sec. 1.2.

Lemma 1 (Main Lemma for OMAC-Family). Suppose that H, Cst1 and Cst2 satisfy the conditions in Sec. 3 for some sufficiently small ε_1, ..., ε_6, and let Cst be an arbitrary n-bit constant. Suppose that a random permutation P ∈ Perm(n) is used in the OMAC-family as the underlying block cipher. Let A be an adversary which asks at most q queries, and each query is at most nm bits (m is the maximum number of blocks in each query). Assume m ≤ 2^n/4. Then

|Pr(P ←R Perm(n) : A^{OMAC-family_P(·)} = 1) − Pr(R ←R Rand(∗, n) : A^{R(·)} = 1)| ≤ ((7m^2 + 2)/2) · (q^2/2^n) + (3m^2/2) · q^2 · ε,   (4)

where ε = max{ε_1, ..., ε_6}. A proof is given in the next section.

The following results hold for both OMAC1 and OMAC2. First, we obtain the following lemma by substituting ε = 2^{-n} in Lemma 1.

Lemma 2 (Main Lemma for OMAC). Suppose that a random permutation P ∈ Perm(n) is used in OMAC as the underlying block cipher. Let A be an adversary which asks at most q queries, and each query is at most nm bits. Assume m ≤ 2^n/4. Then

|Pr(P ←R Perm(n) : A^{OMAC_P(·)} = 1) − Pr(R ←R Rand(∗, n) : A^{R(·)} = 1)| ≤ (5m^2 + 1)q^2 / 2^n.

We next show that OMAC is pseudorandom if the underlying block cipher E is secure. It is standard to pass to this complexity-theoretic result from Lemma 2. (For example, see [1, Section 3.2] for the proof technique; there it is shown how a complexity-theoretic advantage of the CBC MAC is obtained from its information-theoretic advantage.)

Corollary 1. Let E : K_E × {0,1}^n → {0,1}^n be the underlying block cipher used in OMAC. Then

Adv^viprf_OMAC(t, q, nm) ≤ (5m^2 + 1)q^2 / 2^n + Adv^prp_E(t′, q′),

where t′ = t + O(mq) and q′ = mq + 1.
Fig. 6. Illustrations of Q1, Q2, Q3, Q4, Q5 and Q6. Note that L = P(Cst).
Finally we show that OMAC is secure as a MAC algorithm, from Corollary 1 in the usual way. (For example, see [1, Proposition 2.7] for the proof technique; there it is shown that pseudorandom functions are secure MACs.)

Theorem 1. Let E : K_E × {0,1}^n → {0,1}^n be the underlying block cipher used in OMAC. Then

Adv^mac_OMAC(t, q, nm) ≤ ((5m^2 + 1)q^2 + 1) / 2^n + Adv^prp_E(t′, q′),

where t′ = t + O(mq) and q′ = mq + 1.
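As a rough numerical illustration of the information-theoretic term in Theorem 1 (the parameter values below are hypothetical, chosen only to show the birthday-type growth in q):

```python
# Evaluate ((5m^2 + 1)q^2 + 1) / 2^n from Theorem 1 exactly.
from fractions import Fraction

def omac_mac_bound(n: int, m: int, q: int) -> Fraction:
    return Fraction((5 * m * m + 1) * q * q + 1, 2 ** n)

# e.g. n = 128, messages up to 2^10 blocks, 2^32 MACed messages: still tiny,
# but the bound grows quadratically in q.
print(float(omac_mac_bound(128, 2 ** 10, 2 ** 32)))
```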
5.3 Proof of Main Lemma for OMAC-Family
Let H, Cst1 and Cst2 satisfy the conditions in Sec. 3 for some sufficiently small ε_1, ..., ε_6, and let Cst be an arbitrary n-bit constant. For a random permutation P ∈ Perm(n) and a random n-bit string Rnd ∈ {0,1}^n, define

Q1(x) = P(x) ⊕ Rnd,
Q2(x) = P(x ⊕ Rnd) ⊕ Rnd,
Q3(x) = P(x ⊕ Rnd ⊕ H_L(Cst1)),
Q4(x) = P(x ⊕ Rnd ⊕ H_L(Cst2)),
Q5(x) = P(x ⊕ H_L(Cst1)) and
Q6(x) = P(x ⊕ H_L(Cst2)),   (5)

where L = P(Cst). See Fig. 6 for illustrations. We first show that Q1(·), ..., Q6(·) are indistinguishable from a tuple of six independent random permutations P1(·), ..., P6(·).

Lemma 3. Let A be an adversary which asks at most q queries in total. Then

|Pr(P ←R Perm(n); Rnd ←R {0,1}^n : A^{Q1(·),...,Q6(·)} = 1) − Pr(P1, ..., P6 ←R Perm(n) : A^{P1(·),...,P6(·)} = 1)| ≤ (3q^2/2) · (1/2^n + ε),

where ε = max{ε_1, ..., ε_6}. A proof is given in Appendix B.
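The masking structure of (5) can be sketched concretely. Here P is a random permutation on a toy 8-bit domain (so the example runs instantly), and H_L is replaced by a hypothetical stand-in (bit rotations of L); the point is only the shape of the six oracles, not the concrete universal hash function:

```python
# Toy instantiation of the oracles Q1..Q6 from (5) on n = 8 bits.
import random

n, CST = 8, 0
rng = random.Random(1)
perm = list(range(2 ** n)); rng.shuffle(perm)
P = perm.__getitem__                       # random permutation P
RND = rng.randrange(2 ** n)                # random n-bit string Rnd
L = P(CST)                                 # L = P(Cst)

def rot1(x):
    return ((x << 1) | (x >> (n - 1))) & (2 ** n - 1)

H1, H2 = rot1(L), rot1(rot1(L))            # stand-ins for H_L(Cst1), H_L(Cst2)

Q = [lambda x: P(x) ^ RND,                 # Q1
     lambda x: P(x ^ RND) ^ RND,           # Q2
     lambda x: P(x ^ RND ^ H1),            # Q3
     lambda x: P(x ^ RND ^ H2),            # Q4
     lambda x: P(x ^ H1),                  # Q5
     lambda x: P(x ^ H2)]                  # Q6

# each Q_i is itself a permutation of {0,1}^n
for q in Q:
    assert sorted(q(x) for x in range(2 ** n)) == list(range(2 ** n))
```

Each Q_i composes P with XOR masks, so each is individually a uniform random permutation; Lemma 3 says the tuple is also close to six independent ones.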
Algorithm MOMAC_{P1,P2,P3,P4,P5,P6}(M)
    Partition M into M[1]···M[m]
    if m ≥ 2 then
        X[1] ← M[1]
        Y[1] ← P1(X[1])
        for i ← 2 to m − 1 do
            X[i] ← M[i] ⊕ Y[i−1]
            Y[i] ← P2(X[i])
        X[m] ← pad_n(M[m]) ⊕ Y[m−1]
        if |M[m]| = n then T ← P3(X[m]) else T ← P4(X[m])
    if m = 1 then
        X[m] ← pad_n(M[m])
        if |M[m]| = n then T ← P5(X[m]) else T ← P6(X[m])
    return T

Fig. 7. Definition of MOMAC.

Fig. 8. Illustration of MOMAC for |M| > n.

Fig. 9. Illustration of MOMAC for |M| ≤ n.
Next we define MOMAC (Modified OMAC). It uses six independent random permutations P1, P2, P3, P4, P5, P6 ∈ Perm(n). The algorithm MOMAC_{P1,...,P6}(·) is described in Fig. 7 and illustrated in Fig. 8 and Fig. 9. We prove that MOMAC is pseudorandom.

Lemma 4. Let A be an adversary which asks at most q queries, and each query is at most nm bits. Assume m ≤ 2^n/4. Then

|Pr(P1, ..., P6 ←R Perm(n) : A^{MOMAC_{P1,...,P6}(·)} = 1) − Pr(R ←R Rand(∗, n) : A^{R(·)} = 1)| ≤ (2m^2 + 1)q^2 / 2^n.

A proof is given in Appendix C.
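The case split of Fig. 7 can be sketched as follows; the six P_i are toy random permutations on an 8-bit block (standing in for independent random permutations on {0,1}^n), and messages are given as bit strings:

```python
# MOMAC sketch (Fig. 7): the first block goes through P1, internal blocks
# through P2, and the last block through P3/P4 (m >= 2) or P5/P6 (m = 1),
# depending on whether it needed 10^i padding.
import random

n = 8
rng = random.Random(7)

def rand_perm():
    p = list(range(2 ** n)); rng.shuffle(p)
    return p.__getitem__

P1, P2, P3, P4, P5, P6 = (rand_perm() for _ in range(6))

def momac(bits: str) -> int:
    blocks = [bits[i:i + n] for i in range(0, len(bits), n)] or [""]
    last = blocks[-1]
    full = len(last) == n
    # pad_n: append a 1 bit then zeros when the last block is short
    x_m = int(last, 2) if full else int((last + "1").ljust(n, "0"), 2)
    if len(blocks) >= 2:
        y = P1(int(blocks[0], 2))
        for b in blocks[1:-1]:
            y = P2(int(b, 2) ^ y)
        return (P3 if full else P4)(x_m ^ y)
    return (P5 if full else P6)(x_m)
```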
Algorithm B_A^{O1,...,O6}
1: When A asks its r-th query M(r):
2:     T(r) ← MOMAC_{O1,...,O6}(M(r))
3:     return T(r)
4: When A halts and outputs b:
5:     output b

Fig. 10. Algorithm B_A. Note that for 1 ≤ i ≤ 6, O_i is either P_i or Q_i.
The next lemma shows that OMAC-family_P(·) and MOMAC_{P1,...,P6}(·) are indistinguishable.

Lemma 5. Let A be an adversary which asks at most q queries, and each query is at most nm bits. Assume m ≤ 2^n/4. Then

|Pr(P ←R Perm(n) : A^{OMAC-family_P(·)} = 1) − Pr(P1, ..., P6 ←R Perm(n) : A^{MOMAC_{P1,...,P6}(·)} = 1)| ≤ (3m^2q^2/2) · (1/2^n + ε).

Proof. Suppose that there exists an adversary A such that

|Pr(P ←R Perm(n) : A^{OMAC-family_P(·)} = 1) − Pr(P1, ..., P6 ←R Perm(n) : A^{MOMAC_{P1,...,P6}(·)} = 1)| > (3m^2q^2/2) · (1/2^n + ε).

By using A, we show a construction of an adversary B_A such that:

– B_A asks at most mq queries, and
– |Pr(P ←R Perm(n) : B_A^{Q1(·),...,Q6(·)} = 1) − Pr(P1, ..., P6 ←R Perm(n) : B_A^{P1(·),...,P6(·)} = 1)| > (3m^2q^2/2) · (1/2^n + ε),

which contradicts Lemma 3. Let O1(·), ..., O6(·) be B_A's oracles. The construction of B_A is given in Fig. 10. When A asks M(r), then B_A computes T(r) = MOMAC_{O1,...,O6}(M(r)) as if the underlying random permutations are O1, ..., O6, and returns T(r). When A halts and outputs b, then B_A outputs b. Now we see that:

– B_A asks at most mq queries to its oracles, since A asks at most q queries, and each query is at most nm bits.
– Pr(P1, ..., P6 ←R Perm(n) : B_A^{P1(·),...,P6(·)} = 1) = Pr(P1, ..., P6 ←R Perm(n) : A^{MOMAC_{P1,...,P6}(·)} = 1), since B_A gives A a perfect simulation of MOMAC_{P1,...,P6}(·) if O_i(·) = P_i(·) for 1 ≤ i ≤ 6.
Fig. 11. Computation of B_A when O_i = Q_i for 1 ≤ i ≤ 6, and |M| > n.

Fig. 12. Computation of B_A when O_i = Q_i for 1 ≤ i ≤ 6, and |M| ≤ n.

– Pr(P ←R Perm(n) : B_A^{Q1(·),...,Q6(·)} = 1) = Pr(P ←R Perm(n) : A^{OMAC-family_P(·)} = 1), since B_A gives A a perfect simulation of OMAC-family_P(·) if O_i(·) = Q_i(·) for 1 ≤ i ≤ 6. See Fig. 11 and Fig. 12. Note that Rnd is canceled in Fig. 11.

This concludes the proof of the lemma.
We finally give a proof of the Main Lemma for OMAC-family.

Proof (of Lemma 1). By the triangle inequality, the left hand side of (4) is at most

|Pr(P1, ..., P6 ←R Perm(n) : A^{MOMAC_{P1,...,P6}(·)} = 1) − Pr(R ←R Rand(∗, n) : A^{R(·)} = 1)|   (6)

+ |Pr(P ←R Perm(n) : A^{OMAC-family_P(·)} = 1) − Pr(P1, ..., P6 ←R Perm(n) : A^{MOMAC_{P1,...,P6}(·)} = 1)|.   (7)

Lemma 4 gives us an upper bound on (6) and Lemma 5 gives us an upper bound on (7). Therefore the bound follows since

(2m^2 + 1)q^2/2^n + (3m^2q^2/2) · (1/2^n + ε) = ((7m^2 + 2)/2) · (q^2/2^n) + (3m^2/2) · q^2 · ε.

This concludes the proof of the lemma.
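The final arithmetic step can be checked with exact rational arithmetic (a quick sanity check, not part of the proof):

```python
# Verify (2m^2+1)q^2/2^n + (3m^2 q^2/2)(1/2^n + e)
#      = ((7m^2+2)/2)(q^2/2^n) + (3m^2/2) q^2 e,
# and that with e = 2^{-n} both sides collapse to (5m^2+1)q^2/2^n.
from fractions import Fraction

def lhs(m, q, n, e):
    return (Fraction((2 * m * m + 1) * q * q, 2 ** n)
            + Fraction(3 * m * m * q * q, 2) * (Fraction(1, 2 ** n) + e))

def rhs(m, q, n, e):
    return (Fraction(7 * m * m + 2, 2) * Fraction(q * q, 2 ** n)
            + Fraction(3 * m * m, 2) * q * q * e)

for m, q, n in [(1, 1, 4), (3, 5, 10), (16, 7, 64)]:
    e = Fraction(1, 2 ** n)          # the OMAC case e = 2^{-n}
    assert lhs(m, q, n, e) == rhs(m, q, n, e)
    assert lhs(m, q, n, e) == Fraction((5 * m * m + 1) * q * q, 2 ** n)
```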
Acknowledgement The authors would like to thank Phillip Rogaway of UC Davis for useful comments.
References

1. M. Bellare, J. Kilian, and P. Rogaway. The security of the cipher block chaining message authentication code. JCSS, vol. 61, no. 3, 2000. Earlier version in Advances in Cryptology — CRYPTO '94, LNCS 839, pp. 341–358, Springer-Verlag, 1994.
2. A. Berendschot, B. den Boer, J. P. Boly, A. Bosselaers, J. Brandt, D. Chaum, I. Damgård, M. Dichtl, W. Fumy, M. van der Ham, C. J. A. Jansen, P. Landrock, B. Preneel, G. Roelofsen, P. de Rooij, and J. Vandewalle. Final Report of RACE Integrity Primitives. LNCS 1007, Springer-Verlag, 1995.
3. J. Black and P. Rogaway. CBC MACs for arbitrary-length messages: The three key constructions. Advances in Cryptology — CRYPTO 2000, LNCS 1880, pp. 197–215, Springer-Verlag, 2000.
4. J. Black and P. Rogaway. Comments to NIST concerning AES modes of operations: A suggestion for handling arbitrary-length messages with the CBC MAC. Second Modes of Operation Workshop. Available at http://www.cs.ucdavis.edu/~rogaway/.
5. J. Black and P. Rogaway. A block-cipher mode of operation for parallelizable message authentication. Advances in Cryptology — EUROCRYPT 2002, LNCS 2332, pp. 384–397, Springer-Verlag, 2002.
6. FIPS 113. Computer data authentication. Federal Information Processing Standards Publication 113, U.S. Department of Commerce / National Bureau of Standards, National Technical Information Service, Springfield, Virginia, 1994.
7. ISO/IEC 9797-1. Information technology — security techniques — data integrity mechanism using a cryptographic check function employing a block cipher algorithm. International Organization for Standards, Geneva, Switzerland, 1999. Second edition.
8. É. Jaulmes, A. Joux, and F. Valette. On the security of randomized CBC-MAC beyond the birthday paradox limit: A new construction. Fast Software Encryption, FSE 2002, LNCS 2365, pp. 237–251, Springer-Verlag, 2002. Full version available at Cryptology ePrint Archive, Report 2001/074, http://eprint.iacr.org/.
9. K. Kurosawa and T. Iwata. TMAC: Two-Key CBC MAC. Topics in Cryptology — CT-RSA 2003, LNCS 2612, pp. 33–49, Springer-Verlag, 2003. See also Cryptology ePrint Archive, Report 2002/092, http://eprint.iacr.org/.
10. R. Lidl and H. Niederreiter. Introduction to finite fields and their applications, revised edition. Cambridge University Press, 1994.
11. E. Petrank and C. Rackoff. CBC MAC for real-time data sources. J. Cryptology, vol. 13, no. 3, pp. 315–338, Springer-Verlag, 2000.
12. P. Rogaway. Bucket hashing and its application to fast message authentication. Advances in Cryptology — CRYPTO '95, LNCS 963, pp. 29–42, Springer-Verlag, 1995.
13. P. Rogaway, M. Bellare, J. Black, and T. Krovetz. OCB: a block-cipher mode of operation for efficient authenticated encryption. Proceedings of ACM Conference on Computer and Communications Security, ACM CCS 2001, ACM, 2001.
14. S. Vaudenay. Decorrelation over infinite domains: The encrypted CBC-MAC case. Communications in Information and Systems (CIS), vol. 1, pp. 75–85, 2001. Earlier version in Selected Areas in Cryptography, SAC 2000, LNCS 2012, pp. 57–71, Springer-Verlag, 2001.
A Discussions

A.1 Design Rationale
Our choice for OMAC1 is Cst = 0^n, H_L(x) = L · x, Cst1 = u and Cst2 = u^2, where "·" denotes multiplication over GF(2^n). Similarly, our choice for OMAC2 is Cst = 0^n, H_L(x) = L · x, Cst1 = u and Cst2 = u^{-1}. Below, we list the reasons for this choice.

– One might try to use Cst1 = 1 instead of Cst1 = u. In this case, the fourth condition in Sec. 3 is not satisfied, and in fact, the scheme can be easily attacked. Similarly, if one uses Cst2 = 1 instead of Cst2 = u^2 or Cst2 = u^{-1}, the fifth condition in Sec. 3 is not satisfied, and the scheme can be easily attacked. Therefore, we cannot use "1" as a constant.
– For OMAC1, we adopted u and u^2 as Cst1 and Cst2, since L · u and L · u^2 = (L · u) · u can be computed efficiently by one left shift and one conditional XOR from L and L · u, respectively, as shown in (2). Note that this choice requires only a left shift, which eases the implementation of OMAC1, especially in hardware.
– For OMAC2, we adopted u^{-1} instead of u^2 as Cst2. It requires one right shift to compute L · u^{-1} instead of one left shift to compute (L · u) · u. This allows computing both L · u and L · u^{-1} from L simultaneously if both a left shift and a right shift are available (for example, if the underlying block cipher uses both shifts).

A.2 On Standard Key Separation Technique
For XCBC, assume that we want to use a single key K of E, where E is the AES. Then the following key separation technique is suggested in [4]. Let K be a k-bit AES key. Then let K_1 = the first k bits of AES_K(C_1a) ◦ AES_K(C_1b), K_2 = AES_K(C_2), and K_3 = AES_K(C_3) for some distinct constants C_1a, C_1b, C_2 and C_3. We call this XCBC+kst (key separation technique). XCBC+kst uses one k-bit key. However, it requires one additional key scheduling of AES and 3 or 4 additional AES invocations during the pre-processing time. A similar discussion applies to TMAC. For example, we can let K_1 = the first k bits of AES_K(C_1a) ◦ AES_K(C_1b), and K_2 = AES_K(C_2) for some distinct constants C_1a, C_1b and C_2. We call this TMAC+kst. We note that OMAC does not need such a key separation technique, since its key length is k bits in its own form (without using any key separation technique). This saves storage space and pre-processing time compared to XCBC+kst and TMAC+kst.
Table 2. Efficiency comparison of CBC MAC and its variants.

Name      | Domain        | K len.   | #K sche. | #E invo.          | #E pre.
CBC MAC   | ({0,1}^n)^m   | k        | 1        | |M|/n             | 0
EMAC      | ({0,1}^n)^+   | 2k       | 2        | 1 + |M|/n         | 0
RMAC      | {0,1}^*       | 2k       | 1 + #M   | 1 + ⌈(|M|+1)/n⌉   | 0
XCBC      | {0,1}^*       | k + 2n   | 1        | ⌈|M|/n⌉           | 0
TMAC      | {0,1}^*       | k + n    | 1        | ⌈|M|/n⌉           | 0
XCBC+kst  | {0,1}^*       | k        | 2        | ⌈|M|/n⌉           | 3 or 4
TMAC+kst  | {0,1}^*       | k        | 2        | ⌈|M|/n⌉           | 2 or 3
OMAC      | {0,1}^*       | k        | 1        | ⌈|M|/n⌉           | 1

A.3 Comparison
Let E : {0,1}^k × {0,1}^n → {0,1}^n be a block cipher, and M ∈ {0,1}* be a message. We show an efficiency comparison of the CBC MAC and its variants in Table 2, where:

– ({0,1}^n)^+ denotes the set of bit strings whose lengths are positive multiples of n.
– "K len." denotes the key length.
– "#K sche." denotes the number of block cipher key schedulings. RMAC requires one block cipher key scheduling each time a tag is generated.
– #M denotes the number of messages which the sender has MACed.
– "#E invo." denotes the number of block cipher invocations to generate a tag for a message M, assuming |M| > 0.
– "#E pre." denotes the number of block cipher invocations during the pre-processing time. These block cipher invocations can be done without the message. For XCBC+kst and TMAC+kst, the block cipher is assumed to be the AES.

Next, let E : {0,1}^k × {0,1}^n → {0,1}^n be the underlying block cipher used in XCBC, TMAC and OMAC. In Table 3, we show a security comparison of XCBC, TMAC and OMAC. We see that there is no significant difference among them: they are equally secure up to the birthday paradox limit.
B Proof of Lemma 3
If A is a finite multiset then #A denotes the number of elements in A. Let {a, b, c, ...} be a finite multiset of bit strings; that is, a ∈ {0,1}*, b ∈ {0,1}*, c ∈ {0,1}*, and so on. We say "{a, b, c, ...} are distinct" if no element occurs twice or more. Equivalently, {a, b, c, ...} are distinct if any two elements in {a, b, c, ...} are distinct. Before proving Lemma 3, we need the following lemma.

Lemma 6. Let q_1, q_2, q_3, q_4, q_5, q_6 be six non-negative integers. For 1 ≤ i ≤ 6, let x_i^(1), ..., x_i^(q_i) be fixed n-bit strings such that {x_i^(1), ..., x_i^(q_i)} are distinct. Similarly, for 1 ≤ i ≤ 6, let y_i^(1), ..., y_i^(q_i) be fixed n-bit strings such that
Table 3. Security comparison of XCBC, TMAC and OMAC.

XCBC [3, Corollary 2]:
Adv^mac_XCBC(t, q, nm) ≤ ((4m^2 + 1)q^2 + 1)/2^n + 3 · Adv^prp_E(t′, q′), where t′ = t + O(mq) and q′ = mq.

TMAC [9, Theorem 5.1]:
Adv^mac_TMAC(t, q, nm) ≤ ((3m^2 + 1)q^2 + 1)/2^n + Adv^prp_E(t′, q′), where t′ = t + O(mq) and q′ = mq.

OMAC [Theorem 1]:
Adv^mac_OMAC(t, q, nm) ≤ ((5m^2 + 1)q^2 + 1)/2^n + Adv^prp_E(t′, q′), where t′ = t + O(mq) and q′ = mq + 1.
(1)
(q )
(1)
(q )
– {y1 , . . . , y1 1 } ∪ {y2 , . . . , y2 2 } are distinct, and (1) (q ) (1) (q ) (1) (q ) (1) (q ) – {y3 , . . . , y3 3 }∪{y4 , . . . , y4 4 }∪{y5 , . . . , y5 5 }∪{y6 , . . . , y6 6 } are distinct. Let P ∈ Perm(n) and Rnd ∈ {0, 1}n . Then the number of (P, Rnd) which satisfies (i) (i) Q1 (x1 ) = y1 for 1 ≤ ∀ i ≤ q1 , (i) (i) Q2 (x2 ) = y2 for 1 ≤ ∀ i ≤ q2 , (i) (i) Q3 (x3 ) = y3 for 1 ≤ ∀ i ≤ q3 , (8) (i) (i) ∀ Q (x ) = y for 1 ≤ i ≤ q , 4 4 4 4 (i) (i) ∀ Q5 (x5 ) = y5 for 1 ≤ i ≤ q5 and (i) (i) Q6 (x6 ) = y6 for 1 ≤ ∀ i ≤ q6 is at least (2n − (q + q 2 /2) · (1 + · 2n )) · (2n − q)!, where q = q1 + · · · + q6 and = max{1 , . . . , 6 }. (1)
(q )
Proof. At the top level, we consider two cases: Cst ∈ {x1^(1), . . . , x1^(q1)} and Cst ∉ {x1^(1), . . . , x1^(q1)}.

Case 1: Cst ∈ {x1^(1), . . . , x1^(q1)}. Let c be the unique integer such that 1 ≤ c ≤ q1 and Cst = x1^(c). Let l be an n-bit variable. First, observe that:

#{l | 1 ≤ ∃i ≤ q1, 1 ≤ ∃j ≤ q2, x1^(i) = x2^(j) ⊕ y1^(c) ⊕ l} ≤ q1q2,
#{l | 1 ≤ ∃i ≤ q1, 1 ≤ ∃j ≤ q3, x1^(i) = x3^(j) ⊕ y1^(c) ⊕ l ⊕ Hl(Cst1)} ≤ q1q3 · ε4 · 2^n,
#{l | 1 ≤ ∃i ≤ q1, 1 ≤ ∃j ≤ q4, x1^(i) = x4^(j) ⊕ y1^(c) ⊕ l ⊕ Hl(Cst2)} ≤ q1q4 · ε5 · 2^n,
#{l | 1 ≤ ∃i ≤ q1, 1 ≤ ∃j ≤ q5, x1^(i) = x5^(j) ⊕ Hl(Cst1)} ≤ q1q5 · ε1 · 2^n,
#{l | 1 ≤ ∃i ≤ q1, 1 ≤ ∃j ≤ q6, x1^(i) = x6^(j) ⊕ Hl(Cst2)} ≤ q1q6 · ε2 · 2^n,
#{l | 1 ≤ ∃i ≤ q2, 1 ≤ ∃j ≤ q3, x2^(i) = x3^(j) ⊕ Hl(Cst1)} ≤ q2q3 · ε1 · 2^n,
#{l | 1 ≤ ∃i ≤ q2, 1 ≤ ∃j ≤ q4, x2^(i) = x4^(j) ⊕ Hl(Cst2)} ≤ q2q4 · ε2 · 2^n,
#{l | 1 ≤ ∃i ≤ q2, 1 ≤ ∃j ≤ q5, x2^(i) ⊕ y1^(c) ⊕ l = x5^(j) ⊕ Hl(Cst1)} ≤ q2q5 · ε4 · 2^n,
#{l | 1 ≤ ∃i ≤ q2, 1 ≤ ∃j ≤ q6, x2^(i) ⊕ y1^(c) ⊕ l = x6^(j) ⊕ Hl(Cst2)} ≤ q2q6 · ε5 · 2^n,

OMAC: One-Key CBC MAC

#{l | 1 ≤ ∃i ≤ q3, 1 ≤ ∃j ≤ q4, x3^(i) ⊕ Hl(Cst1) = x4^(j) ⊕ Hl(Cst2)} ≤ q3q4 · ε3 · 2^n,
#{l | 1 ≤ ∃i ≤ q3, 1 ≤ ∃j ≤ q5, x3^(i) ⊕ y1^(c) ⊕ l = x5^(j)} ≤ q3q5,
#{l | 1 ≤ ∃i ≤ q3, 1 ≤ ∃j ≤ q6, x3^(i) ⊕ y1^(c) ⊕ l ⊕ Hl(Cst1) = x6^(j) ⊕ Hl(Cst2)} ≤ q3q6 · ε6 · 2^n,
#{l | 1 ≤ ∃i ≤ q4, 1 ≤ ∃j ≤ q5, x4^(i) ⊕ y1^(c) ⊕ l ⊕ Hl(Cst2) = x5^(j) ⊕ Hl(Cst1)} ≤ q4q5 · ε6 · 2^n,
#{l | 1 ≤ ∃i ≤ q4, 1 ≤ ∃j ≤ q6, x4^(i) ⊕ y1^(c) ⊕ l = x6^(j)} ≤ q4q6,
#{l | 1 ≤ ∃i ≤ q5, 1 ≤ ∃j ≤ q6, x5^(i) ⊕ Hl(Cst1) = x6^(j) ⊕ Hl(Cst2)} ≤ q5q6 · ε3 · 2^n,
#{l | 1 ≤ ∃i ≤ q1, 1 ≤ ∃j ≤ q3, y1^(i) ⊕ y1^(c) ⊕ l = y3^(j)} ≤ q1q3,
#{l | 1 ≤ ∃i ≤ q1, 1 ≤ ∃j ≤ q4, y1^(i) ⊕ y1^(c) ⊕ l = y4^(j)} ≤ q1q4,
#{l | 1 ≤ ∃i ≤ q1, 1 ≤ ∃j ≤ q5, y1^(i) ⊕ y1^(c) ⊕ l = y5^(j)} ≤ q1q5,
#{l | 1 ≤ ∃i ≤ q1, 1 ≤ ∃j ≤ q6, y1^(i) ⊕ y1^(c) ⊕ l = y6^(j)} ≤ q1q6,
#{l | 1 ≤ ∃i ≤ q2, 1 ≤ ∃j ≤ q3, y2^(i) ⊕ y1^(c) ⊕ l = y3^(j)} ≤ q2q3,
#{l | 1 ≤ ∃i ≤ q2, 1 ≤ ∃j ≤ q4, y2^(i) ⊕ y1^(c) ⊕ l = y4^(j)} ≤ q2q4,
#{l | 1 ≤ ∃i ≤ q2, 1 ≤ ∃j ≤ q5, y2^(i) ⊕ y1^(c) ⊕ l = y5^(j)} ≤ q2q5, and
#{l | 1 ≤ ∃i ≤ q2, 1 ≤ ∃j ≤ q6, y2^(i) ⊕ y1^(c) ⊕ l = y6^(j)} ≤ q2q6,
from the conditions in Sec. 3. We now fix any l which is not included in any of the above twenty-three sets. We have at least

2^n − (q1q2 + q1q3·ε4·2^n + q1q4·ε5·2^n + q1q5·ε1·2^n + q1q6·ε2·2^n + q2q3·ε1·2^n + q2q4·ε2·2^n + q2q5·ε4·2^n + q2q6·ε5·2^n + q3q4·ε3·2^n + q3q5 + q3q6·ε6·2^n + q4q5·ε6·2^n + q4q6 + q5q6·ε3·2^n + q1q3 + q1q4 + q1q5 + q1q6 + q2q3 + q2q4 + q2q5 + q2q6) ≥ 2^n − q²·ε·2^n/2 − q²/2

choices of such l. Now we let L ← l and Rnd ← l ⊕ y1^(c). Then we have:

– the inputs to P, {x1^(1), . . . , x1^(q1), x2^(1) ⊕ Rnd, . . . , x2^(q2) ⊕ Rnd, x3^(1) ⊕ Rnd ⊕ HL(Cst1), . . . , x3^(q3) ⊕ Rnd ⊕ HL(Cst1), x4^(1) ⊕ Rnd ⊕ HL(Cst2), . . . , x4^(q4) ⊕ Rnd ⊕ HL(Cst2), x5^(1) ⊕ HL(Cst1), . . . , x5^(q5) ⊕ HL(Cst1), x6^(1) ⊕ HL(Cst2), . . . , x6^(q6) ⊕ HL(Cst2)}, are distinct, and
– the corresponding outputs, {y1^(1) ⊕ Rnd, . . . , y1^(q1) ⊕ Rnd, y2^(1) ⊕ Rnd, . . . , y2^(q2) ⊕ Rnd, y3^(1), . . . , y3^(q3), y4^(1), . . . , y4^(q4), y5^(1), . . . , y5^(q5), y6^(1), . . . , y6^(q6)}, are distinct.

In other words, for P, the above q1 + q2 + q3 + q4 + q5 + q6 input-output pairs are determined. The remaining 2^n − (q1 + q2 + q3 + q4 + q5 + q6) input-output pairs are undetermined. Therefore we have (2^n − (q1 + q2 + q3 + q4 + q5 + q6))! = (2^n − q)! possible choices of P for any such fixed (L, Rnd).
Case 2: Cst ∉ {x1^(1), . . . , x1^(q1)}. In this case, we count the number of Rnd and L independently. Then, similarly to Case 1, observe that:

#{Rnd | 1 ≤ ∃i ≤ q2, Cst = x2^(i) ⊕ Rnd} ≤ q2,
#{Rnd | 1 ≤ ∃i ≤ q1, 1 ≤ ∃j ≤ q2, x1^(i) = x2^(j) ⊕ Rnd} ≤ q1q2,
#{Rnd | 1 ≤ ∃i ≤ q3, 1 ≤ ∃j ≤ q5, x3^(i) ⊕ Rnd = x5^(j)} ≤ q3q5,
#{Rnd | 1 ≤ ∃i ≤ q4, 1 ≤ ∃j ≤ q6, x4^(i) ⊕ Rnd = x6^(j)} ≤ q4q6,
#{Rnd | 1 ≤ ∃i ≤ q1, 1 ≤ ∃j ≤ q3, y1^(i) ⊕ Rnd = y3^(j)} ≤ q1q3,
#{Rnd | 1 ≤ ∃i ≤ q1, 1 ≤ ∃j ≤ q4, y1^(i) ⊕ Rnd = y4^(j)} ≤ q1q4,
#{Rnd | 1 ≤ ∃i ≤ q1, 1 ≤ ∃j ≤ q5, y1^(i) ⊕ Rnd = y5^(j)} ≤ q1q5,
#{Rnd | 1 ≤ ∃i ≤ q1, 1 ≤ ∃j ≤ q6, y1^(i) ⊕ Rnd = y6^(j)} ≤ q1q6,
#{Rnd | 1 ≤ ∃i ≤ q2, 1 ≤ ∃j ≤ q3, y2^(i) ⊕ Rnd = y3^(j)} ≤ q2q3,
#{Rnd | 1 ≤ ∃i ≤ q2, 1 ≤ ∃j ≤ q4, y2^(i) ⊕ Rnd = y4^(j)} ≤ q2q4,
#{Rnd | 1 ≤ ∃i ≤ q2, 1 ≤ ∃j ≤ q5, y2^(i) ⊕ Rnd = y5^(j)} ≤ q2q5, and
#{Rnd | 1 ≤ ∃i ≤ q2, 1 ≤ ∃j ≤ q6, y2^(i) ⊕ Rnd = y6^(j)} ≤ q2q6.
We fix any Rnd which is not included in any of the above twelve sets. We have at least (2^n − (q2 + q1q2 + q3q5 + q4q6 + q1q3 + q1q4 + q1q5 + q1q6 + q2q3 + q2q4 + q2q5 + q2q6)) ≥ 2^n − q − q²/2 choices of such Rnd. Next we see that:

#{L | 1 ≤ ∃i ≤ q3, Cst = x3^(i) ⊕ Rnd ⊕ HL(Cst1)} ≤ q3 · ε1 · 2^n,
#{L | 1 ≤ ∃i ≤ q4, Cst = x4^(i) ⊕ Rnd ⊕ HL(Cst2)} ≤ q4 · ε2 · 2^n,
#{L | 1 ≤ ∃i ≤ q5, Cst = x5^(i) ⊕ HL(Cst1)} ≤ q5 · ε1 · 2^n,
#{L | 1 ≤ ∃i ≤ q6, Cst = x6^(i) ⊕ HL(Cst2)} ≤ q6 · ε2 · 2^n,
#{L | 1 ≤ ∃i ≤ q1, 1 ≤ ∃j ≤ q3, x1^(i) = x3^(j) ⊕ Rnd ⊕ HL(Cst1)} ≤ q1q3 · ε1 · 2^n,
#{L | 1 ≤ ∃i ≤ q1, 1 ≤ ∃j ≤ q4, x1^(i) = x4^(j) ⊕ Rnd ⊕ HL(Cst2)} ≤ q1q4 · ε2 · 2^n,
#{L | 1 ≤ ∃i ≤ q1, 1 ≤ ∃j ≤ q5, x1^(i) = x5^(j) ⊕ HL(Cst1)} ≤ q1q5 · ε1 · 2^n,
#{L | 1 ≤ ∃i ≤ q1, 1 ≤ ∃j ≤ q6, x1^(i) = x6^(j) ⊕ HL(Cst2)} ≤ q1q6 · ε2 · 2^n,
#{L | 1 ≤ ∃i ≤ q2, 1 ≤ ∃j ≤ q3, x2^(i) = x3^(j) ⊕ HL(Cst1)} ≤ q2q3 · ε1 · 2^n,
#{L | 1 ≤ ∃i ≤ q2, 1 ≤ ∃j ≤ q4, x2^(i) = x4^(j) ⊕ HL(Cst2)} ≤ q2q4 · ε2 · 2^n,
#{L | 1 ≤ ∃i ≤ q2, 1 ≤ ∃j ≤ q5, x2^(i) ⊕ Rnd = x5^(j) ⊕ HL(Cst1)} ≤ q2q5 · ε1 · 2^n,
#{L | 1 ≤ ∃i ≤ q2, 1 ≤ ∃j ≤ q6, x2^(i) ⊕ Rnd = x6^(j) ⊕ HL(Cst2)} ≤ q2q6 · ε2 · 2^n,
#{L | 1 ≤ ∃i ≤ q3, 1 ≤ ∃j ≤ q4, x3^(i) ⊕ HL(Cst1) = x4^(j) ⊕ HL(Cst2)} ≤ q3q4 · ε3 · 2^n,
#{L | 1 ≤ ∃i ≤ q3, 1 ≤ ∃j ≤ q6, x3^(i) ⊕ Rnd ⊕ HL(Cst1) = x6^(j) ⊕ HL(Cst2)} ≤ q3q6 · ε3 · 2^n,
#{L | 1 ≤ ∃i ≤ q4, 1 ≤ ∃j ≤ q5, x4^(i) ⊕ Rnd ⊕ HL(Cst2) = x5^(j) ⊕ HL(Cst1)} ≤ q4q5 · ε3 · 2^n,
#{L | 1 ≤ ∃i ≤ q5, 1 ≤ ∃j ≤ q6, x5^(i) ⊕ HL(Cst1) = x6^(j) ⊕ HL(Cst2)} ≤ q5q6 · ε3 · 2^n,
#{L | 1 ≤ ∃i ≤ q1, L = y1^(i) ⊕ Rnd} ≤ q1,
#{L | 1 ≤ ∃i ≤ q2, L = y2^(i) ⊕ Rnd} ≤ q2,
#{L | 1 ≤ ∃i ≤ q3, L = y3^(i)} ≤ q3,
#{L | 1 ≤ ∃i ≤ q4, L = y4^(i)} ≤ q4,
#{L | 1 ≤ ∃i ≤ q5, L = y5^(i)} ≤ q5, and
#{L | 1 ≤ ∃i ≤ q6, L = y6^(i)} ≤ q6,
from the conditions in Sec. 3.
We now fix any L which is not included in any of the above twenty-two sets. We have at least

2^n − (q3·ε1·2^n + q4·ε2·2^n + q5·ε1·2^n + q6·ε2·2^n + q1q3·ε1·2^n + q1q4·ε2·2^n + q1q5·ε1·2^n + q1q6·ε2·2^n + q2q3·ε1·2^n + q2q4·ε2·2^n + q2q5·ε1·2^n + q2q6·ε2·2^n + q3q4·ε3·2^n + q3q6·ε3·2^n + q4q5·ε3·2^n + q5q6·ε3·2^n + q1 + q2 + q3 + q4 + q5 + q6) ≥ 2^n − q·ε·2^n − q²·ε·2^n/2 − q

choices of such L. Then we have:

– the inputs to P, {Cst, x1^(1), . . . , x1^(q1), x2^(1) ⊕ Rnd, . . . , x2^(q2) ⊕ Rnd, x3^(1) ⊕ Rnd ⊕ HL(Cst1), . . . , x3^(q3) ⊕ Rnd ⊕ HL(Cst1), x4^(1) ⊕ Rnd ⊕ HL(Cst2), . . . , x4^(q4) ⊕ Rnd ⊕ HL(Cst2), x5^(1) ⊕ HL(Cst1), . . . , x5^(q5) ⊕ HL(Cst1), x6^(1) ⊕ HL(Cst2), . . . , x6^(q6) ⊕ HL(Cst2)}, are distinct, and
– the corresponding outputs, {L, y1^(1) ⊕ Rnd, . . . , y1^(q1) ⊕ Rnd, y2^(1) ⊕ Rnd, . . . , y2^(q2) ⊕ Rnd, y3^(1), . . . , y3^(q3), y4^(1), . . . , y4^(q4), y5^(1), . . . , y5^(q5), y6^(1), . . . , y6^(q6)}, are distinct.

In other words, for P, the above 1 + q1 + q2 + q3 + q4 + q5 + q6 input-output pairs are determined. The remaining 2^n − (1 + q1 + q2 + q3 + q4 + q5 + q6) input-output pairs are undetermined. Therefore we have (2^n − (1 + q1 + q2 + q3 + q4 + q5 + q6))! = (2^n − (1 + q))! possible choices of P for any such fixed (L, Rnd).

Completing the Proof. In Case 1, we have at least (2^n − (q²/2) · (1 + ε·2^n)) · (2^n − q)! choices of (P, Rnd) which satisfy (8). In Case 2, we have at least (2^n − q − q²/2) · (2^n − q·ε·2^n − q²·ε·2^n/2 − q) · (2^n − (1 + q))! choices of (P, Rnd) which satisfy (8). This bound is at least (2^n − (q + q²/2) · (1 + ε·2^n)) · (2^n − q)!. This concludes the proof of the lemma.

We now prove Lemma 3.

Proof (of Lemma 3). For 1 ≤ i ≤ 6, let Oi be either Qi or Pi. The adversary A has oracle access to O1, . . . , O6. Since A is computationally unbounded, there is no loss of generality to assume that A is deterministic. There are six types of queries A can make: (Oj, x), which denotes the query "what is Oj(x)?". For the i-th query A makes to Oj, define the query-answer pair (xj^(i), yj^(i)) ∈ {0,1}^n × {0,1}^n, where A's query was (Oj, xj^(i)) and the answer it got was yj^(i). Suppose that we run A with oracles O1, . . . , O6. For this run, assume that A made qj queries to Oj(·), where q1 + ··· + q6 = q. For this run, we define view v of A as

v def= ((y1^(1), . . . , y1^(q1)), (y2^(1), . . . , y2^(q2)), (y3^(1), . . . , y3^(q3)), (y4^(1), . . . , y4^(q4)), (y5^(1), . . . , y5^(q5)), (y6^(1), . . . , y6^(q6))).

For this view, we always have:

For 1 ≤ j ≤ 6, {yj^(1), . . . , yj^(qj)} are distinct.   (9)
We note that since A never repeats a query, for the corresponding queries, we have:

For 1 ≤ j ≤ 6, {xj^(1), . . . , xj^(qj)} are distinct.

Since A is deterministic, the i-th query A makes is fully determined by the first i − 1 query-answer pairs. This implies that if we fix some qn-bit string V and return the i-th n-bit block as the answer for the i-th query A makes (instead of the oracles), then

– A's queries are uniquely determined,
– q1, . . . , q6 are uniquely determined,
– the parsing of V into the format defined in (9) is uniquely determined, and
– the final output of A (0 or 1) is uniquely determined.

Let Vone be the set of all qn-bit strings V such that A outputs 1. We let None def= #Vone. Also, let Vgood be the set of all qn-bit strings V such that: for 1 ≤ ∀i < ∀j ≤ q, the i-th n-bit block of V ≠ the j-th n-bit block of V. Note that if V ∈ Vgood then the corresponding parsing v satisfies:

– {y1^(1), . . . , y1^(q1)} ∪ {y2^(1), . . . , y2^(q2)} are distinct, and
– {y3^(1), . . . , y3^(q3)} ∪ {y4^(1), . . . , y4^(q4)} ∪ {y5^(1), . . . , y5^(q5)} ∪ {y6^(1), . . . , y6^(q6)} are distinct.

Now observe that the number of V which is not in the set Vgood is at most (q(q−1)/2) · 2^{qn}/2^n. Therefore, we have

#{V | V ∈ (Vone ∩ Vgood)} ≥ None − (q(q−1)/2) · 2^{qn}/2^n.   (10)

Evaluation of prand. We first evaluate
prand def= Pr(P1, . . . , P6 ←R Perm(n) : A^{P1(·),...,P6(·)} = 1)
       = #{(P1, . . . , P6) | A^{P1(·),...,P6(·)} = 1} / {(2^n)!}^6.

For each V ∈ Vone, the number of (P1, . . . , P6) such that

For 1 ≤ j ≤ 6, Pj(xj^(i)) = yj^(i) for 1 ≤ ∀i ≤ qj,   (11)

is exactly ∏_{1≤j≤6} (2^n − qj)!, which is at most (2^n − q)! · {(2^n)!}^5. Therefore, we have

prand = Σ_{V ∈ Vone} #{(P1, . . . , P6) | (P1, . . . , P6) satisfying (11)} / {(2^n)!}^6
      ≤ Σ_{V ∈ Vone} (2^n − q)!/(2^n)!
      = None · (2^n − q)!/(2^n)!.
Evaluation of preal. We next evaluate

preal def= Pr(P ←R Perm(n); Rnd ←R {0,1}^n : A^{Q1(·),...,Q6(·)} = 1)
       = #{(P, Rnd) | A^{Q1(·),...,Q6(·)} = 1} / ((2^n)! · 2^n).

Then from Lemma 6, we have

preal ≥ Σ_{V ∈ (Vone ∩ Vgood)} #{(P, Rnd) | (P, Rnd) satisfying (8)} / ((2^n)! · 2^n)
      ≥ Σ_{V ∈ (Vone ∩ Vgood)} ((2^n − q)!/(2^n)!) · (1 − (q + q²/2) · (1 + ε·2^n)/2^n).

Completing the Proof. From (10) we have

preal ≥ (None − (q(q−1)/2) · 2^{qn}/2^n) · ((2^n − q)!/(2^n)!) · (1 − (q + q²/2) · (1 + ε·2^n)/2^n)
      ≥ (prand − (q(q−1)/2) · (1/2^n)) · 2^{qn} · ((2^n − q)!/(2^n)!) · (1 − (q + q²/2) · (1 + ε·2^n)/2^n).

Since 2^{qn} · (2^n − q)!/(2^n)! ≥ 1, we have

preal ≥ (prand − q(q−1)/(2·2^n)) · (1 − (q + q²/2) · (1 + ε·2^n)/2^n)
      ≥ prand − ((2q² + q) + (q² + 2q) · ε·2^n)/(2·2^n)
      ≥ prand − (3q²/2) · (1/2^n + ε).   (12)

Applying the same argument to 1 − preal and 1 − prand yields that

1 − preal ≥ 1 − prand − (3q²/2) · (1/2^n + ε).   (13)

Finally, (12) and (13) give |preal − prand| ≤ (3q²/2) · (1/2^n + ε).

C   Proof of Lemma 4
Let S and S′ be distinct bit strings such that |S| = sn for some s ≥ 1, and |S′| = s′n for some s′ ≥ 1. Define

Vn(S, S′) def= Pr(P2 ←R Perm(n) : CBC_P2(S) = CBC_P2(S′)).

Then the following proposition is known [3].

Proposition 1 (Black and Rogaway [3]). Let S and S′ be distinct bit strings such that |S| = sn for some s ≥ 1, and |S′| = s′n for some s′ ≥ 1. Assume that s, s′ ≤ 2^n/4. Then

Vn(S, S′) ≤ (s + s′)²/2^n.

Now let M and M′ be distinct bit strings such that |M| = mn for some m ≥ 2, and |M′| = m′n for some m′ ≥ 2. Define

Wn(M, M′) def= Pr(P1, . . . , P6 ←R Perm(n) : MOMAC_{P1,...,P6}(M) = MOMAC_{P1,...,P6}(M′)).

We note that P5 and P6 are irrelevant in the event MOMAC_{P1,...,P6}(M) = MOMAC_{P1,...,P6}(M′) since M and M′ are both longer than n bits. Also, P4 is irrelevant in the above event since |M| and |M′| are both multiples of n. Further, P3 is irrelevant in the above event since it is invertible, and thus there is a collision if and only if there is a collision at the input to the last encryption. We show the following lemma.

Lemma 7 (MOMAC Collision Bound). Let M and M′ be distinct bit strings such that |M| = mn for some m ≥ 2, and |M′| = m′n for some m′ ≥ 2. Assume that m, m′ ≤ 2^n/4. Then

Wn(M, M′) ≤ (m + m′)²/2^n.

Proof. Let M[1] ··· M[m] and M′[1] ··· M′[m′] be the partitions of M and M′, respectively. We consider two cases: M[1] = M′[1] and M[1] ≠ M′[1].

Case 1: M[1] = M′[1]. In this case, let P1 be any permutation in Perm(n), and let S ← (P1(M[1]) ⊕ M[2]) ◦ M[3] ◦ ··· ◦ M[m] and S′ ← (P1(M′[1]) ⊕ M′[2]) ◦ M′[3] ◦ ··· ◦ M′[m′]. Observe that MOMAC_{P1,...,P6}(M) = MOMAC_{P1,...,P6}(M′) if and only if CBC_P2(S) = CBC_P2(S′), since we may ignore the last encryptions in CBC_P2(S) and CBC_P2(S′). Therefore

Wn(M, M′) ≤ Vn(S, S′) ≤ (m + m′ − 2)²/2^n.

Case 2: M[1] ≠ M′[1]. In this case, we split into two sub-cases: P1(M[1]) ⊕ M[2] ≠ P1(M′[1]) ⊕ M′[2], and P1(M[1]) ⊕ M[2] = P1(M′[1]) ⊕ M′[2]. The former event occurs with probability at most 1. The latter occurs with probability at most 1/(2^n − 1), which is at most 2/2^n. Then it is not hard to see that

Wn(M, M′) ≤ 1 · Vn(S, S′) + 2/2^n ≤ (m + m′ − 2)²/2^n + 2/2^n ≤ (m + m′)²/2^n

by applying a similar argument as in Case 1.

Let m be an integer such that m ≤ 2^n/4. We consider the following four sets:

D1 def= {M | M ∈ {0,1}∗, n < |M| ≤ mn and |M| is a multiple of n},
D2 def= {M | M ∈ {0,1}∗, n < |M| ≤ mn and |M| is not a multiple of n},
D3 def= {M | M ∈ {0,1}∗ and |M| = n},
D4 def= {M | M ∈ {0,1}∗ and |M| < n}.
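Proposition 1 is a birthday-type bound, and at toy sizes it can be checked exhaustively. The sketch below uses an assumed toy size (n = 3, so a "block" is an integer in {0, . . . , 7}): it enumerates all (2^n)! choices of P2 and computes Vn(S, S′) exactly for two fixed distinct 2-block messages. The exact value comes out to 1/(2^n − 1), consistent with the 1/2^n order of magnitude that the proposition captures.

```python
from itertools import permutations
from math import factorial

n = 3                  # toy block length; a "block" is an integer in {0,...,7}
size = 2 ** n

def cbc(perm, blocks):
    # C_i = P2(M[i] xor C_{i-1}) with C_0 = 0; returns the final output.
    c = 0
    for m in blocks:
        c = perm[c ^ m]
    return c

S, Sp = (1, 2), (3, 4)     # two fixed distinct 2-block messages (s = s' = 2)
hits = sum(cbc(p, S) == cbc(p, Sp) for p in permutations(range(size)))
prob = hits / factorial(size)   # exact V_n(S, S') over all 8! = 40320 permutations
assert abs(prob - 1 / (size - 1)) < 1e-12   # exactly 1/(2^n - 1) = 1/7 here
```

For these messages a collision happens exactly when P2(1) ⊕ P2(3) = 2 ⊕ 4, which a uniform permutation satisfies with probability 1/(2^n − 1).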
We next show the following lemma.

Lemma 8. Let q1, q2, q3, q4 be four non-negative integers. For 1 ≤ i ≤ 4, let Mi^(1), . . . , Mi^(qi) be fixed bit strings such that Mi^(j) ∈ Di for 1 ≤ j ≤ qi and {Mi^(1), . . . , Mi^(qi)} are distinct. Similarly, for 1 ≤ i ≤ 4, let Ti^(1), . . . , Ti^(qi) be fixed n-bit strings such that {Ti^(1), . . . , Ti^(qi)} are distinct. Then the number of P1, . . . , P6 ∈ Perm(n) such that

  MOMAC_{P1,...,P6}(M1^(i)) = T1^(i) for 1 ≤ ∀i ≤ q1,
  MOMAC_{P1,...,P6}(M2^(i)) = T2^(i) for 1 ≤ ∀i ≤ q2,      (14)
  MOMAC_{P1,...,P6}(M3^(i)) = T3^(i) for 1 ≤ ∀i ≤ q3, and
  MOMAC_{P1,...,P6}(M4^(i)) = T4^(i) for 1 ≤ ∀i ≤ q4

is at least {(2^n)!}^6 · (1 − 2q²m²/2^n) · 1/2^{qn}, where q = q1 + ··· + q4.

Proof. We first consider M1^(1), . . . , M1^(q1). The number of (P1, P2) such that

MOMAC_{P1,...,P6}(M1^(i)) = MOMAC_{P1,...,P6}(M1^(j)) for 1 ≤ ∃i < ∃j ≤ q1

is at most {(2^n)!}² · (q1(q1−1)/2) · 4m²/2^n from Lemma 7. Note that P3, . . . , P6 are irrelevant in the above event. We next consider M2^(1), . . . , M2^(q2). The number of (P1, P2) such that

MOMAC_{P1,...,P6}(M2^(i)) = MOMAC_{P1,...,P6}(M2^(j)) for 1 ≤ ∃i < ∃j ≤ q2

is at most {(2^n)!}² · (q2(q2−1)/2) · 4m²/2^n from Lemma 7. Now we fix any (P1, P2) which is not like the above. We have at least {(2^n)!}² · (1 − (q1(q1−1)/2) · 4m²/2^n − (q2(q2−1)/2) · 4m²/2^n) choices. Now P1 and P2 are fixed in such a way that the inputs to P3 are distinct and the inputs to P4 are distinct; the corresponding outputs {T1^(1), . . . , T1^(q1)} are distinct, and {T2^(1), . . . , T2^(q2)} are distinct. We know that the inputs to P5 are distinct, and the corresponding outputs {T3^(1), . . . , T3^(q3)} are distinct. Also, the inputs to P6 are distinct, and the corresponding outputs {T4^(1), . . . , T4^(q4)} are distinct. Therefore, we have at least

{(2^n)!}² · (1 − (q1(q1−1)/2) · 4m²/2^n − (q2(q2−1)/2) · 4m²/2^n) · (2^n − q1)! · (2^n − q2)! · (2^n − q3)! · (2^n − q4)!

choices of P1, . . . , P6 which satisfy (14). This bound is at least {(2^n)!}^6 · (1 − 2q²m²/2^n) · 1/2^{qn}, since (2^n − qi)! ≥ (2^n)!/2^{qi·n}. This concludes the proof of the lemma.
We now prove Lemma 4. Proof (of Lemma 4). Let O be either MOMACP1 ,...,P6 or R. Since A is computationally unbounded, there is no loss of generality to assume that A is deterministic.
Similar to the proof of Lemma 3, for the queries A makes to the oracle O, define the query-answer pairs (Mj^(i), Tj^(i)) ∈ Dj × {0,1}^n, where A's i-th query in Dj was Mj^(i) ∈ Dj and the answer it got was Tj^(i) ∈ {0,1}^n. Suppose that we run A with the oracle. For this run, assume that A made qj queries in Dj, where 1 ≤ j ≤ 4 and q1 + ··· + q4 = q. For this run, we define view v of A as

v def= ((T1^(1), . . . , T1^(q1)), (T2^(1), . . . , T2^(q2)), (T3^(1), . . . , T3^(q3)), (T4^(1), . . . , T4^(q4))).   (15)
Since A is deterministic, the i-th query A makes is fully determined by the first i − 1 query-answer pairs. This implies that if we fix some qn-bit string V and return the i-th n-bit block as the answer for the i-th query A makes (instead of the oracle), then

– A's queries are uniquely determined,
– q1, . . . , q4 are uniquely determined,
– the parsing of V into the format defined in (15) is uniquely determined, and
– the final output of A (0 or 1) is uniquely determined.
Let Vone be the set of all qn-bit strings V such that A outputs 1. We let None def= #Vone. Also, let Vgood be the set of all qn-bit strings V such that: for 1 ≤ ∀i < ∀j ≤ q, the i-th n-bit block of V ≠ the j-th n-bit block of V. Note that if V ∈ Vgood, then the corresponding parsing v of V satisfies that {T1^(1), . . . , T1^(q1)} are distinct, {T2^(1), . . . , T2^(q2)} are distinct, {T3^(1), . . . , T3^(q3)} are distinct and {T4^(1), . . . , T4^(q4)} are distinct. Now observe that the number of V which is not in the set Vgood is at most (q(q−1)/2) · 2^{qn}/2^n. Therefore, we have

#{V | V ∈ (Vone ∩ Vgood)} ≥ None − (q(q−1)/2) · 2^{qn}/2^n.   (16)

Evaluation of prand. We first evaluate

prand def= Pr(R ←R Rand(∗, n) : A^{R(·)} = 1).

Then it is not hard to see that

prand = Σ_{V ∈ Vone} 1/2^{qn} = None/2^{qn}.
Evaluation of preal. We next evaluate

preal def= Pr(P1, . . . , P6 ←R Perm(n) : A^{MOMAC_{P1,...,P6}(·)} = 1)
       = #{(P1, . . . , P6) | A^{MOMAC_{P1,...,P6}(·)} = 1} / {(2^n)!}^6.

Then from Lemma 8, we have

preal ≥ Σ_{V ∈ (Vone ∩ Vgood)} #{(P1, . . . , P6) | (P1, . . . , P6) satisfying (14)} / {(2^n)!}^6
      ≥ Σ_{V ∈ (Vone ∩ Vgood)} (1 − 2q²m²/2^n) · 1/2^{qn}.

Completing the Proof. From (16) we have

preal ≥ (None − (q(q−1)/2) · 2^{qn}/2^n) · (1 − 2q²m²/2^n) · 1/2^{qn}
      = (prand − (q(q−1)/2) · (1/2^n)) · (1 − 2q²m²/2^n)
      ≥ prand − (q(q−1)/2) · (1/2^n) − 2q²m²/2^n
      ≥ prand − (2q²m² + q²)/2^n.   (17)

Applying the same argument to 1 − preal and 1 − prand yields that

1 − preal ≥ 1 − prand − (2q²m² + q²)/2^n.   (18)

Finally, (17) and (18) give |preal − prand| ≤ (2q²m² + q²)/2^n.
A Concrete Security Analysis for 3GPP-MAC

Dowon Hong^1, Ju-Sung Kang^1, Bart Preneel^2, and Heuisu Ryu^1

^1 Information Security Technology Division, ETRI, 161 Kajong-Dong, Yusong-Gu, Taejon, 305-350, Korea
   {dwhong,jskang,hsryu}@etri.re.kr
^2 Katholieke Universiteit Leuven, ESAT/COSIC, Kasteelpark Arenberg 10, B-3001 Leuven-Heverlee, Belgium
   [email protected]

Abstract. The standardized integrity algorithm f9 of the 3GPP algorithm computes a MAC (Message Authentication Code) to establish the integrity and the data origin of the signalling data over a radio access link of W-CDMA IMT-2000. The function f9 is based on the block cipher KASUMI and it can be considered as a variant of CBC-MAC. In this paper we examine the provable security of f9. We prove that f9 is a secure pseudorandom function by giving a concrete bound on an adversary's inability to forge a MAC value in terms of her inability to distinguish the underlying block cipher from a random permutation.

Keywords: Message authentication code, 3GPP-MAC, Provable security, Pseudo-randomness.
1   Introduction
Within the security architecture of 3GPP (the 3rd Generation Partnership Project) a standardized data authentication algorithm f9 has been defined; this MAC (Message Authentication Code) algorithm is a variant of the standard CBC-MAC (Cipher Block Chaining) based on the block cipher KASUMI [22]. We refer to this MAC algorithm as "3GPP-MAC." The purpose of this work is to provide a proof of security for the 3GPP-MAC algorithm. Providing a security proof in the sense of reduction-based cryptography intuitively means that one proves the following statement: if there exists an adversary A that breaks a given MAC built from a block cipher E, then there exists a corresponding adversary A′ that breaks the block cipher E. The provable security treatment of MACs based on a block cipher was started by Bellare et al. [1]. They provided such a security proof for CBC-MAC. However, their proof is restricted to the case where the input messages are of fixed length. It is well known that CBC-MAC is not secure when the message length is variable [1]. A matching birthday attack has been described by Preneel and van Oorschot in [17]. Petrank and Rackoff [16] were the first to rigorously address the issue of message length variability. They provided a security proof for EMAC (Encrypted CBC-MAC), which handles messages of variable, unknown lengths.

T. Johansson (Ed.): FSE 2003, LNCS 2887, pp. 154–169, 2003.
© International Association for Cryptologic Research 2003

Black and
Rogaway [3] introduced three refinements to EMAC that improve the efficiency. They also provided a new security proof, using new techniques which treat EMAC as an instance of the Carter-Wegman paradigm [5, 20]. Jaulmes, Joux, and Valette [7] proposed RMAC (Randomized MAC), which is an extension of EMAC. They showed that the security of RMAC improves over the birthday bound of [17] in the ideal-cipher model. This is not a reduction-based provable security result. Note that RMAC is currently being considered for standardization by NIST. However, recently it has been demonstrated that RMAC is vulnerable to related-key attacks [12–14]. Furthermore, it has been shown that it is not possible to provide a proof of security for the salted variant of RMAC [19]. Black and Rogaway [4, 18] have proposed a parallelizable block cipher mode of operation for message authentication (PMAC) together with a security proof. Several other new modes, such as XECB-MAC [6] and TMAC [8], have been submitted to NIST for consideration, but they will probably not be included in the standard [24]. The security evaluation of 3GPP-MAC has primarily been performed by the 3GPP SAGE group (Security Algorithms Group of Experts) [21]. Based on some ad hoc analysis, the general conclusion of [21] is that 3GPP-MAC does not exhibit any security weaknesses. Recently, Knudsen and Mitchell [11] analyzed 3GPP-MAC from the viewpoint of a birthday attack. They have described several types of forgery and key recovery attacks on 3GPP-MAC; they have also shown that key recovery attacks are infeasible: the most efficient attack requires around 3 × 2^48 chosen messages. We believe that it is important to provide a security proof for a MAC algorithm based on an information-theoretic and a complexity-theoretic analysis. Such a security proof can be considered as theoretical evidence for the soundness of the overall structure of a MAC algorithm.

However, so far no security proof has been provided in the literature for 3GPP-MAC. This observation motivates this paper. In this paper we prove that 3GPP-MAC is secure in the sense of reduction-based cryptography. More specifically, we prove that 3GPP-MAC is a pseudorandom function, which means that no attacker with polynomially many queries can distinguish 3GPP-MAC from a perfect random function; by using this fact, we show that 3GPP-MAC is a secure MAC algorithm under the assumption that the underlying block cipher is a pseudorandom permutation. This assumption is a reasonable one, since the pseudorandomness of the 3GPP block cipher KASUMI has recently been investigated by Kang et al. [9, 10]. We do not address the question whether the distinguishing bound we have obtained is sufficiently tight or not. We leave this as an open problem.
2   Preliminaries

2.1   Notation
Let {0,1}^n denote the set of all n-bit strings, and {0,1}^{n∗} be the set of all binary strings whose bit-lengths are positive multiples of n. Let Rn∗→l be the set of all functions λ : {0,1}^{n∗} → {0,1}^l, Pn be the set of all permutations π : {0,1}^n → {0,1}^n, and K be the key space, which is the set of all possible key values K. For any given key space K, message space {0,1}^{n∗}, and codomain {0,1}^l, a MAC is a map F : K × {0,1}^{n∗} → {0,1}^l. A MAC F can be regarded as a family of functions from {0,1}^{n∗} to {0,1}^l indexed by a key K ∈ K. In fact, F is a multiset, since two or more different keys may define the same function. Let E : K × {0,1}^n → {0,1}^n be a block cipher; then EK(X) = Y denotes that E uses a key K ∈ K to encrypt an n-bit string X to an n-bit ciphertext Y.

[Fig. 1. The 3GPP-MAC algorithm: M[1], . . . , M[r] enter a CBC chain of EK computations with inputs I[i] and outputs O[i]; the O[i] are XORed into OK(M), which is encrypted under EK′, and the leftmost l bits give 3GPP-MACK(M).]

2.2   The 3GPP-MAC Algorithm
The 3GPP-MAC algorithm operates as follows. Suppose the underlying block cipher E has n-bit input and output blocks. Every message M in 3GPP-MAC is first padded such that its length is a multiple of n. The padding string in 3GPP-MAC is appended even if the size of the message is already a multiple of n; it is of the following form:

Count || Fresh || Message || Direction || 1 || 00···0,

where Count, Fresh, and Direction are system-dependent parameters. Throughout this paper we assume that the lengths of all messages are multiples of n, since the details of the padding scheme are not relevant for our proof of security. The 3GPP-MAC algorithm uses a pair of 128-bit keys K and K′, where K′ = K ⊕ Const and Const = 0xAA···A. For any r-block message M = M[1] ··· M[r], 3GPP-MAC is computed as follows:

  O[0] ← 0
  for i = 1, ···, r do
    I[i] ← O[i−1] ⊕ M[i]
    O[i] ← EK(I[i])
  OK(M) ← O[1] ⊕ O[2] ⊕ ··· ⊕ O[r]
  MK(M) ← the leftmost l bits of EK′(OK(M))
  return MK(M)

Here MK(M) is the 3GPP-MAC value of the message M. The 3GPP-MAC algorithm is also depicted in Fig. 1. The 3GPP integrity algorithm f9 in the 3GPP technical specification [22] states that the underlying block cipher is KASUMI: KASUMI is a 64-bit block cipher with a 128-bit key. The 3GPP-MAC value consists of the leftmost 32 bits of the final encryption, i.e., l = 32. Note that in the 3GPP-MAC algorithm, K and K′ should be distinct to handle variable length messages. In fact, it is easy to break the 3GPP-MAC algorithm if K = K′. For example, if an adversary requests MK(X) for a 1-block message X, obtaining T, and requests MK(0) for the 1-block message 0, obtaining S, then she can compute the MAC MK(X || 0 || (T ⊕ X) || 0 || T) = S. In other words, from the MACs of X and 0, one can forge the MAC of X || 0 || (MK(X) ⊕ X) || 0 || MK(X) without knowing the key K.
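The K = K′ failure described above is easy to reproduce. The sketch below is a toy model under stated assumptions, not the real f9: a hash-based keyed function stands in for KASUMI, and the tag is left untruncated (l = n), which is what the forgery example implicitly uses. It checks that the tags T and S of the 1-block messages X and 0 yield a valid tag for the never-queried 5-block message X || 0 || (T ⊕ X) || 0 || T:

```python
import hashlib

N = 8  # toy block size in bytes (KASUMI's real block size is also 64 bits)

def E(key: bytes, block: bytes) -> bytes:
    # Toy keyed block function standing in for E_K; NOT a real cipher.
    return hashlib.sha256(key + block).digest()[:N]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def mac(k: bytes, k2: bytes, blocks) -> bytes:
    # 3GPP-MAC with an untruncated tag: CBC chain under k,
    # XOR of all chain outputs, final encryption under k2.
    o, acc = bytes(N), bytes(N)
    for m in blocks:
        o = E(k, xor(o, m))
        acc = xor(acc, o)
    return E(k2, acc)

K = b"same-key"               # deliberately take K' = K
X = b"\x01" * N
Z = bytes(N)                  # the all-zero block
T = mac(K, K, [X])            # tag of the 1-block message X
S = mac(K, K, [Z])            # tag of the 1-block message 0
forged = [X, Z, xor(T, X), Z, T]
assert mac(K, K, forged) == S     # valid tag for a never-queried message
```

With K′ = K the chain outputs of the forged message cancel pairwise, so the XOR accumulator collapses to EK(0) and the final tag equals S; deriving K′ = K ⊕ Const breaks this symmetry.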
2.3   Comparison between CBC-MAC, EMAC, and 3GPP-MAC
The basic CBC-MAC algorithm [23] works as follows: for any r-block message M = M[1] ··· M[r], the CBC-MAC of M under the key K is defined as CBC_EK(M) = Cr, where Ci = EK(M[i] ⊕ Ci−1) for i = 1, ···, r and C0 = 0. The CBC-MAC is illustrated in Fig. 2.

[Fig. 2. The CBC-MAC algorithm: the blocks M[1], . . . , M[r] are chained through EK, producing C1, . . . , Cr with Cr = CBC_EK(M).]
It is well known that CBC-MAC is secure for messages of constant length, while it is insecure for arbitrary variable length messages [1]. There have been several efforts to design a variant of CBC-MAC for variable length messages. Bellare et al. [1] have suggested three variants of CBC-MAC (input-length key separation, length-prepending, and encrypt last block) to handle variable length messages. Of these three variants the most attractive is the last one, since the length of the message is not needed until the end of the computation. The method of encrypting the last block is called EMAC; it was proposed by the RIPE project in 1993 [2] and subsequently included in the ISO standard [23]; its security has been rigorously analyzed by Petrank and Rackoff [16]. For any r-block message M = M[1] ··· M[r], the EMAC of M is defined as

EMAC_{EK1,EK2}(M) = EK2(CBC_{EK1}(M)),

where K1 and K2 are two different keys in K. The EMAC algorithm is depicted in Fig. 3. In fact, Petrank and Rackoff [16] used one secret key K to produce two secret keys K1 = EK(0) and K2 = EK(1), and they regarded EK1 and EK2 as two independently chosen random functions f1 and f2 for the proof of security.
[Fig. 3. The EMAC algorithm: the CBC chain C1, . . . , Cr is computed under EK1, and the final value Cr is encrypted under EK2 to give EMAC_{EK1,EK2}(M).]
In order to optimize efficiency for constructions that accept arbitrary bit strings, Black and Rogaway [3] refined EMAC in three methods, which they called ECBC, FCBC, and XCBC, respectively. On the other hand, 3GPP-MAC can be seen as a variant of EMAC. There are two differences between 3GPP-MAC and EMAC. First, 3GPP-MAC uses a pair of keys K and K′ such that K′ is straightforwardly derived from K by XORing a fixed constant, while in EMAC the two keys K1 and K2 are obtained by encrypting the two plaintexts 0 and 1 with the same key K. Thus we cannot regard EK and EK′ as two independently chosen random functions f and f′ for the proof of security. This situation of 3GPP-MAC is different from that of EMAC. Second, while 3GPP-MAC uses CBC_EK(M) ⊕ (C1 ⊕ C2 ⊕ ··· ⊕ Cr−1) as the input of the final block computation under EK′, EMAC uses CBC_{EK1}(M), without XORing in the Ci's, as the input of the final computation EK2. These two distinct points give rise to a different security proof for EMAC and 3GPP-MAC.
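The second difference can be made concrete. Under a toy hash-based block function standing in for the cipher (an assumption), the value fed to the final encryption is Cr for EMAC but Cr ⊕ C1 ⊕ ··· ⊕ Cr−1 for 3GPP-MAC, and for r ≥ 2 the two generally differ:

```python
import hashlib

N = 8  # toy block size in bytes

def E(key: bytes, block: bytes) -> bytes:
    # Toy keyed block function standing in for a block cipher; an assumption.
    return hashlib.sha256(key + block).digest()[:N]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_chain(key: bytes, blocks):
    # Returns all CBC intermediate outputs C_1, ..., C_r.
    c, outs = bytes(N), []
    for m in blocks:
        c = E(key, xor(c, m))
        outs.append(c)
    return outs

K = b"k" * 16
M = [b"\x01" * N, b"\x02" * N, b"\x03" * N]
C = cbc_chain(K, M)

emac_final_input = C[-1]          # EMAC feeds CBC(M) = C_r to E_K2
gpp_final_input = bytes(N)
for c in C:                       # 3GPP-MAC feeds C_1 xor ... xor C_r to E_K'
    gpp_final_input = xor(gpp_final_input, c)
assert emac_final_input != gpp_final_input
```

The extra XOR of the intermediate Ci's is precisely what forces the separate collision analysis (via the accumulator) in the proof of Theorem 1 below, rather than a collision analysis on Cr alone as in EMAC.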
2.4   Security Model
We consider the following security model. Let A be an adversary and A^O denote that A can access an oracle O. Without loss of generality, adversaries are assumed to never ask a query outside the domain of the oracle, and to never repeat a query. For any g ∈ F, we say that A forges g if A outputs g(x) for some x ∈ {0,1}^{n∗} where A^g never queried x to its oracle g. Define

Adv_F^mac(A) = Pr(A forges g | g ←R F),

where g ←R F denotes the experiment of choosing a random element from F. Assume that for any random function λ ∈ Rn∗→l, the value of λ(x) is a uniformly chosen l-bit string from {0,1}^l, for each x ∈ {0,1}^{n∗}. That is, for any λ ∈ Rn∗→l, x ∈ {0,1}^{n∗}, and y ∈ {0,1}^l, Pr(λ(x) = y) = 2^{−l}. This is a reasonable assumption, since for any uniformly chosen function g : {0,1}^m → {0,1}^l, Pr(g(x) = y) = 2^{−l} for each x ∈ {0,1}^m and y ∈ {0,1}^l, regardless of the input length m. We define the advantage of an adversary A in distinguishing a MAC F from the family of random functions Rn∗→l as

Adv_F^{Rn∗→l}(A) = Pr(A^g outputs 1 | g ←R F) − Pr(A^λ outputs 1 | λ ←R Rn∗→l).

We overload the notation defined above and write

Adv_F^mac(t, q, σ) = max_A {Adv_F^mac(A)} and Adv_F^{Rn∗→l}(t, q, σ) = max_A {Adv_F^{Rn∗→l}(A)},

where the maximum is over all adversaries A who run in time at most t and ask their oracle q queries having an aggregate length of σ blocks. On the other hand, we regard the block cipher Λn as a family of permutations from {0,1}^n to itself indexed by a secret key K ∈ K. Define

Adv_Λn^Pn(A) = Pr(A^f outputs 1 | f ←R Λn) − Pr(A^π outputs 1 | π ←R Pn) and Adv_Λn^Pn(t, q) = max_A {Adv_Λn^Pn(A)},

where the maximum is over all distinguishers A that run in time at most t and make at most q queries. In what follows it will be convenient for us to think of 3GPP-MAC as using two functions f and f′ instead of EK and EK′, respectively. We do this by denoting f to be EK for a randomly chosen key K and f′ to be EK′ for the second key K′. Note that f′ is derived from f. Now, we may write

Mf(M) = the leftmost l bits of f′(Ō_f(M)),

where Ō_f(M) = O[1] ⊕ O[2] ⊕ ··· ⊕ O[r], O[i] = f(I[i]), I[i] = O[i−1] ⊕ M[i] for 1 ≤ i ≤ r, and O[0] = 0. We consider two function families related to 3GPP-MAC. A family MΛn for a block cipher Λn is the set of all functions Mf for all f ∈ Λn, and a family MPn is the set of all functions Mπ for all π ∈ Pn. The Mπ is defined similarly to Mf, by considering π and π′ instead of f and f′; that is, for any message M,

Mπ(M) = the leftmost l bits of π′(Ō_π(M)),

where π′ ∈ Pn − {π} is automatically determined by π. Note that our result in the next section has nothing to do with the method of determining π′ from π.
3 The Security of 3GPP-MAC

3.1 Main Results
In this section we prove that the security of MΛn is implied by the security of the underlying block cipher Λn . We call a block cipher secure if it is a pseudorandom permutation: this means that no attacker with polynomially many encryption queries can distinguish the block cipher from a perfect random permutation.
160
Dowon Hong et al.
This approach to modeling the security of a block cipher was introduced by Luby and Rackoff [15]. We first give the following information-theoretic bound on the security of 3GPP-MAC. We start by checking the possibility of distinguishing a random function in R_{n*→l} from a random function in M_{Pn}, and show that even a computationally unbounded adversary cannot obtain too large an advantage.

Theorem 1. Let A be an adversary that makes queries to a random function chosen either from M_{Pn} or from R_{n*→l}. Suppose that A asks its oracle q queries, these queries having aggregate length of σ blocks. Then

    Adv_{M_Pn}^{R_{n*→l}}(A) ≤ (σ^2 + 2q^2)/2^n.
The proof of Theorem 1 is given in Sect. 3.2. It is a well-known result that if a MAC algorithm preserves pseudorandomness, it resists existential forgery under adaptive chosen message attacks [1]. Using this fact and Theorem 1, we obtain the main result:

Theorem 2. Let Λ_n : K × {0,1}^n → {0,1}^n be a family of permutations obtained from a block cipher. Then

    Adv_{M_Λn}^{R_{n*→l}}(t, q, σ) ≤ Adv_{Λn}^{Pn}(t′, σ) + (σ^2 + 2q^2)/2^n    (3.1)

and

    Adv_{M_Λn}^{mac}(t, q, σ) ≤ Adv_{Λn}^{Pn}(t′, σ) + (σ^2 + 2q^2)/2^n + 1/2^l,    (3.2)

where t′ = t + O(σn).

Proof. Let A be an adversary distinguishing M_{Λn} from R_{n*→l} which makes at most q oracle queries having aggregate length of σ blocks and runs in time at most t. In order to prove equation (3.1), we first show that there exists an adversary B_A distinguishing Λ_n from P_n such that

    Adv_{Λn}^{Pn}(B_A) = Adv_{M_Λn}^{R_{n*→l}}(A) - Adv_{M_Pn}^{R_{n*→l}}(A),
where B_A makes at most σ queries and runs in time at most t′ = t + O(σn). The adversary B_A gets an oracle f : {0,1}^n → {0,1}^n, a permutation chosen from Λ_n or P_n. It runs A as a subroutine, using f to simulate the oracle h : {0,1}^{n*} → {0,1}^l that A expects.

Adversary B_A^f:
    for i = 1, ···, q do
        when A asks its oracle a query M_i, answer with M_f(M_i)
    end for
    A outputs a bit b
    return b
The oracle supplied to A by B_A is M_f, where f is B_A's oracle, and hence

    Adv_{Λn}^{Pn}(B_A) = Pr[B_A^f = 1 | f ←R Λ_n] - Pr[B_A^f = 1 | f ←R P_n]
                       = Pr[A^h = 1 | h ←R M_{Λn}] - Pr[A^h = 1 | h ←R M_{Pn}].

However,

    Adv_{M_Pn}^{R_{n*→l}}(A) = Pr[A^h = 1 | h ←R M_{Pn}] - Pr[A^h = 1 | h ←R R_{n*→l}].

Therefore, taking the sum of the two equations above, we obtain

    Adv_{Λn}^{Pn}(B_A) + Adv_{M_Pn}^{R_{n*→l}}(A)
        = Pr[A^h = 1 | h ←R M_{Λn}] - Pr[A^h = 1 | h ←R R_{n*→l}]
        = Adv_{M_Λn}^{R_{n*→l}}(A).
From this equation and the result of Theorem 1, we get

    Adv_{Λn}^{Pn}(B_A) ≥ Adv_{M_Λn}^{R_{n*→l}}(A) - (σ^2 + 2q^2)/2^n,

and equation (3.1) follows, since

    Adv_{M_Λn}^{R_{n*→l}}(t, q, σ) = max_A {Adv_{M_Λn}^{R_{n*→l}}(A)}
        ≤ max_A {Adv_{Λn}^{Pn}(B_A) + (σ^2 + 2q^2)/2^n}
        ≤ Adv_{Λn}^{Pn}(t′, σ) + (σ^2 + 2q^2)/2^n.

Using Proposition 2.7 of [1], we can easily show that

    Adv_{M_Λn}^{mac}(t, q, σ) ≤ Adv_{M_Λn}^{R_{n*→l}}(t′, q, σ) + 1/2^l,    (3.3)

where t′ = t + O(σn). Combining (3.1) and (3.3) we obtain equation (3.2), which completes the proof.

3.2 Proof of Theorem 1
Recall that the second permutation π′ in M_π(·) is derived from π. In order to prove Theorem 1, we first prove the result under the condition that the second permutation π′ is not related to the first permutation π in 3GPP-MAC. Assume that π and π′ are chosen independently from P_n. For any r-block message M = M[1] ··· M[r], we set

    M_{π,π′}(M) = the leftmost l bits of π′(Ō_π(M)),
where Ō_π(M) = O[1] ⊕ ··· ⊕ O[r], O[i] = π(I[i]), and I[i] = O[i-1] ⊕ M[i] for 1 ≤ i ≤ r. Let M_{Pn×Pn} be the set of all functions M_{π,π′}, where π and π′ are chosen independently from P_n. Lemma 1 below provides an information-theoretic bound on the security of M_{Pn×Pn}.

Lemma 1. Let A be an adversary that makes queries to a random function chosen either from M_{Pn×Pn} or from R_{n*→l}. Suppose that A asks its oracle q queries, these queries having aggregate length of σ blocks. Then

    Adv_{M_Pn×Pn}^{R_{n*→l}}(A) ≤ (σ^2 + 2q^2)/2^{n+1}.
Proof. To prove Lemma 1 we apply the idea from the proof of PMAC's security in [18]. Let A be an adversary distinguishing M_{Pn×Pn} from R_{n*→l}. Since the adversary A is not limited in computational power, we may assume it is deterministic. One can view A interacting with an M_{Pn×Pn} oracle as A playing the following game, denoted Game 1.

Game 1: Simulation of M_{Pn×Pn}
 1  unusual ← false; for all x ∈ {0,1}^n do π(x) ← undefined, π′(x) ← undefined
 2  When A makes its t-th query, M_t = M_t[1] ··· M_t[r_t], where t ∈ {1, ···, q}:
 3    I_t[1] ← M_t[1]
 4    For i = 1, ···, r_t do
 5      A ← {I_t[j] | 1 ≤ j ≤ i-1} ∪ {I_s[j] | 1 ≤ s ≤ t-1, 1 ≤ j ≤ r_s}
 6      if I_t[i] ∈ A then O_t[i] ← π(I_t[i])
 7      else O_t[i] ←R {0,1}^n
 8      A_π ← {π(I_t[j]) | 1 ≤ j ≤ i-1} ∪ {π(I_s[j]) | 1 ≤ s ≤ t-1, 1 ≤ j ≤ r_s}
 9      if O_t[i] ∈ A_π then [ unusual ← true; O_t[i] ←R A_π^c ]
10      π(I_t[i]) ← O_t[i]
11      if i < r_t then I_t[i+1] ← O_t[i] ⊕ M_t[i+1]
12    Ō_t(M_t) ← O_t[1] ⊕ ··· ⊕ O_t[r_t]
13    B ← {Ō_s(M_s) | 1 ≤ s ≤ t-1}
14    If Ō_t(M_t) ∈ B then [ unusual ← true; MAC_t ← π′(Ō_t(M_t)) ]
15    else MAC_t ←R {0,1}^n
16    B_π ← {π′(Ō_s(M_s)) | 1 ≤ s ≤ t-1}
17    if MAC_t ∈ B_π then [ unusual ← true; MAC_t ←R B_π^c ]
18    π′(Ō_t(M_t)) ← MAC_t
19    M_{π,π′}(M_t) ← the leftmost l bits of MAC_t
20    Return M_{π,π′}(M_t)

Here A_π^c and B_π^c denote the complements of A_π and B_π, respectively. Any two particular permutations π and π′ are equally likely among all permutations from {0,1}^n to {0,1}^n. In our simulation, we view the selection of π and π′ as an incremental procedure; this is equivalent to selecting π and π′ uniformly at random. This game perfectly simulates the behavior of M_{Pn×Pn}.
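The incremental selection of π and π′ used throughout the games is just lazy sampling of a random permutation. A minimal sketch (the class name and parameters are this illustration's own, not the paper's):

```python
import secrets

class LazyPermutation:
    # Incrementally sampled random permutation on n-bit blocks, mirroring
    # Game 1: each fresh input gets a uniform output, resampled whenever it
    # collides with a previously used output (the games' "unusual" event),
    # so the partial map stays injective.
    def __init__(self, n_bits):
        self.n = n_bits
        self.fwd = {}
        self.used = set()

    def __call__(self, x):
        if x in self.fwd:                   # point already defined
            return self.fwd[x]
        y = secrets.randbelow(2 ** self.n)  # O_t[i] <-R {0,1}^n
        while y in self.used:               # resample out of A_pi
            y = secrets.randbelow(2 ** self.n)
        self.fwd[x] = y
        self.used.add(y)
        return y
```

Querying all inputs of a small domain yields a full permutation, which is why the incremental procedure is equivalent to choosing π uniformly at the start.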
Let UNUSUAL be the event that the flag unusual is set to true in Game 1. In the absence of UNUSUAL, the value M_{π,π′}(M_t) returned at line 20 is random, since it consists of the leftmost l bits of the string randomly selected at line 15; that is, the adversary sees random values returned on distinct points. Therefore we get

    Adv_{M_Pn×Pn}^{R_{n*→l}}(A) ≤ Pr(UNUSUAL).    (3.4)
First we consider the probability that the flag unusual is set to true in line 9 or 17. In both cases, we have just chosen a random n-bit string and then check whether it is an element of a set. We have

    Pr(unusual = true in lines 9 or 17 of Game 1)
        ≤ ((1 + 2 + ··· + (σ-1)) + (1 + ··· + (q-1)))/2^n
        ≤ (σ^2 + q^2)/2^{n+1}.    (3.5)
Now we can modify Game 1 by changing the behavior when unusual = true, adding as a compensating factor the bound given by equation (3.5). We omit lines 8, 9, 16 and 17, and the last statement in line 14. The modified game is as follows.

Game 2: Simplification of Game 1
 1  unusual ← false; for all x ∈ {0,1}^n do π(x) ← undefined, π′(x) ← undefined
 2  When A makes its t-th query, M_t = M_t[1] ··· M_t[r_t], where t ∈ {1, ···, q}:
 3    I_t[1] ← M_t[1]
 4    For i = 1, ···, r_t do
 5      A ← {I_t[j] | 1 ≤ j ≤ i-1} ∪ {I_s[j] | 1 ≤ s ≤ t-1, 1 ≤ j ≤ r_s}
 6      if I_t[i] ∈ A then O_t[i] ← π(I_t[i])
 7      else [ O_t[i] ←R {0,1}^n; π(I_t[i]) ← O_t[i] ]
 8      if i < r_t then I_t[i+1] ← O_t[i] ⊕ M_t[i+1]
 9    Ō_t(M_t) ← O_t[1] ⊕ ··· ⊕ O_t[r_t]
10    B ← {Ō_s(M_s) | 1 ≤ s ≤ t-1}
11    If Ō_t(M_t) ∈ B then unusual ← true
12    MAC_t ←R {0,1}^n
13    π′(Ō_t(M_t)) ← MAC_t
14    M_{π,π′}(M_t) ← the leftmost l bits of MAC_t
15    Return M_{π,π′}(M_t)

By equation (3.5) we have

    Pr(UNUSUAL) ≤ Pr(unusual = true in Game 2) + (σ^2 + q^2)/2^{n+1}.    (3.6)
In Game 2 the value M_{π,π′}(M_t) returned in response to a query M_t is a random l-bit string. Thus we can select these MAC_t values first in Game 2. This does not change the view of the adversary that interacts with the game
and the probability that unusual is set to true. This modified game is called Game 3, and it is depicted as follows.

Game 3: Modification of Game 2
 1  unusual ← false; for all x ∈ {0,1}^n do π(x) ← undefined, π′(x) ← undefined
 2  When A makes its t-th query, M_t = M_t[1] ··· M_t[r_t], where t ∈ {1, ···, q}:
 3    MAC_t ←R {0,1}^n
 4    M_{π,π′}(M_t) ← the leftmost l bits of MAC_t
 5    Return M_{π,π′}(M_t)
 6  When A is done making its q queries:
 7    For t = 1, ···, q do
 8      I_t[1] ← M_t[1]
 9      For i = 1, ···, r_t do
10        A ← {I_t[j] | 1 ≤ j ≤ i-1} ∪ {I_s[j] | 1 ≤ s ≤ t-1, 1 ≤ j ≤ r_s}
11        if I_t[i] ∈ A then O_t[i] ← π(I_t[i])
12        else [ O_t[i] ←R {0,1}^n; π(I_t[i]) ← O_t[i] ]
13        if i < r_t then I_t[i+1] ← O_t[i] ⊕ M_t[i+1]
14      Ō_t(M_t) ← O_t[1] ⊕ ··· ⊕ O_t[r_t]
15      B ← {Ō_s(M_s) | 1 ≤ s ≤ t-1}
16      If Ō_t(M_t) ∈ B then unusual ← true
17      π′(Ō_t(M_t)) ← MAC_t

We note that

    Pr(unusual = true in Game 3) = Pr(unusual = true in Game 2).    (3.7)

Now we want to show that the probability of unusual = true in Game 3, over the random MAC_t values selected at line 3 and the random O_t[i] values selected at line 12, is small. In fact, we will show something stronger: even if one arbitrarily fixes the values of MAC_1, ···, MAC_q ∈ {0,1}^n, the probability that unusual will be set to true is still small. Since the oracle answers have now been fixed and the adversary is deterministic, the queries M_1, ···, M_q that the adversary will make have likewise been fixed. The new game is called Game 4(C). It depends on constants C = (q, MAC_1, ···, MAC_q, M_1, ···, M_q).

Game 4(C)
 1  unusual ← false; for all x ∈ {0,1}^n do π(x) ← undefined, π′(x) ← undefined
 2  For t = 1, ···, q do
 3    I_t[1] ← M_t[1]
 4    For i = 1, ···, r_t do
 5      A ← {I_t[j] | 1 ≤ j ≤ i-1} ∪ {I_s[j] | 1 ≤ s ≤ t-1, 1 ≤ j ≤ r_s}
 6      if I_t[i] ∈ A then O_t[i] ← π(I_t[i])
 7      else [ O_t[i] ←R {0,1}^n; π(I_t[i]) ← O_t[i] ]
 8      if i < r_t then I_t[i+1] ← O_t[i] ⊕ M_t[i+1]
 9    Ō_t(M_t) ← O_t[1] ⊕ ··· ⊕ O_t[r_t]
10    B ← {Ō_s(M_s) | 1 ≤ s ≤ t-1}
11    If Ō_t(M_t) ∈ B then unusual ← true
12    π′(Ō_t(M_t)) ← MAC_t
We know that

    Pr(unusual = true in Game 3) ≤ max_C {Pr(unusual = true in Game 4(C))}.    (3.8)
Thus, by (3.4) and (3.6)-(3.8) we have

    Adv_{M_Pn×Pn}^{R_{n*→l}}(A) ≤ max_C {Pr(unusual = true in Game 4(C))} + (σ^2 + q^2)/2^{n+1},    (3.9)
where, if A is limited to q queries of aggregate length σ, then C specifies q, message strings M_1, ···, M_q of aggregate block length σ, and MAC_1, ···, MAC_q ∈ {0,1}^n. Finally, we modify Game 4(C) by changing the order in which the random O_t[i] is chosen in line 7. This game is called Game 5(C).

Game 5(C)
 1  unusual ← false; for all x ∈ {0,1}^n do π(x) ← undefined, π′(x) ← undefined
 2  For t = 1, ···, q do
 3    For i = 1, ···, r_t do
 4      I_t[1] ← M_t[1]; O_t[i] ←R {0,1}^n
 5      A ← {I_t[j] | 1 ≤ j ≤ i-1} ∪ {I_s[j] | 1 ≤ s ≤ t-1, 1 ≤ j ≤ r_s}
 6      if I_t[i] ∈ A then O_t[i] ← π(I_t[i])
 7      else π(I_t[i]) ← O_t[i]
 8      if i < r_t then I_t[i+1] ← O_t[i] ⊕ M_t[i+1]
 9    Ō_t(M_t) ← O_t[1] ⊕ ··· ⊕ O_t[r_t]
10    B ← {Ō_s(M_s) | 1 ≤ s ≤ t-1}
11    If Ō_t(M_t) ∈ B then unusual ← true
12    π′(Ō_t(M_t)) ← 0^n

Notice that in Game 5 we choose a random O_t[i] value in line 4. To avoid having the game depend on the MAC_t values, we also set π′(Ō_t(M_t)) to some particular value, 0^n, instead of to MAC_t in the last line. The particular value associated with this point is not used unless unusual has already been set to true. Thus we obtain

    Pr(unusual = true in Game 4(C)) = Pr(unusual = true in Game 5(C)).    (3.10)
The coins used in Game 5 are O_1(M_1) = O_1[1] ··· O_1[r_1], ···, O_q(M_q) = O_q[1] ··· O_q[r_q], where each O_s[i] is either a fresh random coin or a synonym of an earlier value. Here we set O_t[0] = 0 and I_t[k] ← O_t[k-1] ⊕ M_t[k] for 1 ≤ t ≤ q and 1 ≤ k ≤ r_t; if there exists a smallest number u < s such that I_s[i] = I_u[j], then O_s[i] = O_u[j]; else if there exists a smallest number j < i such that I_s[i] = I_s[j], then O_s[i] = O_s[j]; else O_s[i] is a fresh random coin.

Run Game 5 on M_1, ···, M_q and the indicated vector of coins, and suppose that unusual gets set to true on this execution. Let s ∈ {1, ···, q} be the particular value of t when unusual is first set to true. Then
    Ō_s(M_s) = Ō_u(M_u) for some u ∈ {1, ···, s-1}.

In this case, if we had run Game 5 using coins O_u and O_s and restricting the execution of line 2 to t ∈ {u, s}, then unusual would still have been set to true. In this restricted Game 5, we get

    Pr(Ō_s(M_s) = Ō_u(M_u)) = Pr(O_s[1] ⊕ ··· ⊕ O_s[r_s] = O_u[1] ⊕ ··· ⊕ O_u[r_u]) = 2^{-n},

because O_u[1] in Ō_u(M_u) is a random string in {0,1}^n. Thus we obtain

    max_C {Pr(unusual = true in Game 5(C))}
        ≤ max_{r_1,···,r_q : σ = Σ r_i} Σ_{1 ≤ u < s ≤ q} 2^{-n}
        ≤ (q(q-1)/2) · (1/2^n)
        ≤ q^2/2^{n+1}.    (3.11)

Combining (3.9)-(3.11), we get

    Adv_{M_Pn×Pn}^{R_{n*→l}}(A) ≤ (σ^2 + q^2)/2^{n+1} + q^2/2^{n+1} = (σ^2 + 2q^2)/2^{n+1}.
This completes the proof of Lemma 1.

Now we check the possibility of distinguishing a random function in the original M_{Pn} from a random function in M_{Pn×Pn}. To obtain the result, we first need to define what inner collisions are in M_{Pn} and M_{Pn×Pn}.

Definition 1. Let M_1, ···, M_q be q strings in {0,1}^{n*}, and let π, π′ be two random permutations in P_n. We say that there occurs an inner collision of M_π on the queries M_1, ···, M_q if the collision occurs before invoking the second permutation π′, which is derived from π; namely, if there exists a pair of indices 1 ≤ i < j ≤ q for which Ō_π(M_i) = Ō_π(M_j). Similarly, we say that there occurs an inner collision of M_{π,π′} on the queries M_1, ···, M_q if the collision occurs before invoking the second permutation π′.

Lemma 2. Let A be an adversary that makes queries to a random function chosen either from M_{Pn} or from M_{Pn×Pn}. Suppose that A asks its oracle q queries, these queries having aggregate length of σ blocks. Then

    Adv_{M_Pn×Pn}^{M_Pn}(A) ≤ (σ^2 + 2q^2)/2^{n+1}.
Proof. Let ICol(M_{Pn}) be the event that there is an inner collision among the messages in M_{Pn} and let ICol(M_{Pn×Pn}) be the event that there is an inner
collision among the messages in M_{Pn×Pn}. Observe that since both algorithms are identical before invoking the second permutation, the inner collision probabilities in both algorithms are the same. Thus the following equation holds:

    Pr[ICol(M_{Pn}) | π ←R P_n] = Pr[ICol(M_{Pn×Pn}) | π, π′ ←R P_n].    (3.12)

For the same reason, if no inner collision occurs, the adversary outputs 1 with the same probability for M_{Pn} and M_{Pn×Pn}, because it sees outputs of permutations on distinct points, and the second permutations of M_{Pn} and M_{Pn×Pn} are independent. Let Pr_1(·) denote the probability that A^{M_π} outputs 1 under the experiment π ←R P_n, and Pr_2(·) the probability that A^{M_{π,π′}} outputs 1 under the experiment π, π′ ←R P_n. Then the following holds:

    Pr_1[A^{M_π} = 1 | ¬ICol(M_{Pn})] = Pr_2[A^{M_{π,π′}} = 1 | ¬ICol(M_{Pn×Pn})],    (3.13)

where ¬ICol(M_{Pn}) and ¬ICol(M_{Pn×Pn}) are the complements of ICol(M_{Pn}) and ICol(M_{Pn×Pn}), respectively. Therefore, using equations (3.12) and (3.13), we can bound the adversary's advantage as follows:

    Adv_{M_Pn×Pn}^{M_Pn}(A)
        ≤ |Pr_1(A^{M_π} = 1) - Pr_2(A^{M_{π,π′}} = 1)|
        = |Pr_1(A^{M_π} = 1 | ICol(M_{Pn})) · Pr_1(ICol(M_{Pn}))
           + Pr_1(A^{M_π} = 1 | ¬ICol(M_{Pn})) · Pr_1(¬ICol(M_{Pn}))
           - Pr_2(A^{M_{π,π′}} = 1 | ICol(M_{Pn×Pn})) · Pr_2(ICol(M_{Pn×Pn}))
           - Pr_2(A^{M_{π,π′}} = 1 | ¬ICol(M_{Pn×Pn})) · Pr_2(¬ICol(M_{Pn×Pn}))|
        = Pr_2(ICol(M_{Pn×Pn})) · |Pr_1(A^{M_π} = 1 | ICol(M_{Pn})) - Pr_2(A^{M_{π,π′}} = 1 | ICol(M_{Pn×Pn}))|
        ≤ Pr_2(ICol(M_{Pn×Pn})).

To bound this quantity, we consider again the proof of Lemma 1, in which Game 1 perfectly simulates the behavior of M_{Pn×Pn}. Observe that when an inner collision occurs in M_{Pn×Pn}, the flag unusual is set to true in Game 1. Thus by the proof of Lemma 1, we obtain

    Pr_2(ICol(M_{Pn×Pn})) ≤ Pr(unusual = true in Game 1) ≤ (σ^2 + 2q^2)/2^{n+1},

which completes the proof of Lemma 2.

Proof of Theorem 1: From Lemmas 1 and 2, the proof of Theorem 1 follows immediately.
4 Conclusion
In this work we have examined the provable security of the 3GPP-MAC algorithm f9. We have provided a proof of security for 3GPP-MAC in the sense of reduction-based cryptography. More specifically, we have shown that if there is an existential forgery attack on 3GPP-MAC, then the underlying block cipher can be attacked with comparable parameters. A realistic attack on the 3GPP block cipher KASUMI may be seen as highly unlikely; if that is indeed the case, our results establish the soundness of the 3GPP-MAC algorithm.
References

1. M. Bellare, J. Kilian, and P. Rogaway, The security of cipher block chaining, Crypto '94, LNCS 839, Springer-Verlag, 1994, pp. 341-358. An updated version can be found at the authors' personal URLs; see, for example, http://www-cse.ucsd.edu/users/mihir/.
2. A. Berendschot et al., Integrity Primitives for Secure Information Systems. Final Report of RACE Integrity Primitives Evaluation (RIPE-RACE 1040), LNCS 1007, Springer-Verlag, 1995.
3. J. Black and P. Rogaway, CBC MACs for arbitrary-length messages: the three-key constructions, Crypto 2000, LNCS 1880, Springer-Verlag, 2000, pp. 197-215.
4. J. Black and P. Rogaway, A block-cipher mode of operation for parallelizable message authentication, Eurocrypt 2002, LNCS 2332, Springer-Verlag, 2002, pp. 384-397.
5. L. Carter and M. Wegman, Universal classes of hash functions, J. of Computer and System Sciences, 18, 1979, pp. 143-154.
6. V. Gligor and P. Donescu, Fast encryption and authentication: XCBC encryption and XECB authentication modes, Contribution to NIST, April 20, 2001. Available at http://csrc.nist.gov/encryption/modes/.
7. É. Jaulmes, A. Joux, and F. Valette, On the security of randomized CBC-MAC beyond the birthday paradox limit: a new construction, FSE 2002, LNCS 2365, Springer-Verlag, 2002, pp. 237-251.
8. K. Kurosawa and T. Iwata, TMAC: Two-Key CBC-MAC, Contribution to NIST, June 21, 2002. Available at http://csrc.nist.gov/encryption/modes/.
9. J. Kang, S. Shin, D. Hong, and O. Yi, Provable security of KASUMI and 3GPP encryption mode f8, Asiacrypt 2001, LNCS 2248, Springer-Verlag, 2001, pp. 255-271.
10. J. Kang, O. Yi, D. Hong, and H. Cho, Pseudorandomness of MISTY-type transformations and the block cipher KASUMI, ACISP 2001, LNCS 2119, Springer-Verlag, 2001, pp. 60-73.
11. L. R. Knudsen and C. J. Mitchell, Analysis of 3gpp-MAC and two-key 3gpp-MAC, Discrete Applied Mathematics, to appear.
12. L. Knudsen, Analysis of RMAC, Contribution to NIST, November 10, 2002. Available at http://csrc.nist.gov/CryptoToolkit/modes/comments/.
13. T. Kohno, Related-key and key-collision attacks against RMAC, Contribution to NIST, 2002. Available at http://csrc.nist.gov/CryptoToolkit/modes/comments/.
14. J. Lloyd, An analysis of RMAC, Contribution to NIST, November 18, 2002. Available at http://csrc.nist.gov/CryptoToolkit/modes/comments/.
15. M. Luby and C. Rackoff, How to construct pseudorandom permutations from pseudorandom functions, SIAM J. Comput., 17, 1988, pp. 189-203.
16. E. Petrank and C. Rackoff, CBC MAC for real-time data sources, J. of Cryptology, 13, 2000, pp. 315-338.
17. B. Preneel and P. C. van Oorschot, MDx-MAC and building fast MACs from hash functions, Crypto '95, LNCS 963, Springer-Verlag, 1995, pp. 1-14.
18. P. Rogaway, PMAC: A parallelizable message authentication code, Contribution to NIST, April 17, 2001. Available at http://csrc.nist.gov/encryption/modes/.
19. P. Rogaway, Comments on NIST's RMAC proposal, Contribution to NIST, December 2, 2002. Available at http://csrc.nist.gov/CryptoToolkit/modes/comments/.
20. M. Wegman and L. Carter, New hash functions and their use in authentication and set equality, J. of Computer and System Sciences, 22, 1981, pp. 265-279.
21. 3GPP TR 33.909, Report on the evaluation of 3GPP standard confidentiality and integrity algorithms, V1.0.0, 2002-12.
22. 3GPP TS 35.201, Specification of the 3GPP confidentiality and integrity algorithms; Document 1: f8 and f9 specifications.
23. ISO/IEC 9797-1:1999(E), Information technology - Security techniques - Message Authentication Codes (MACs) - Part 1.
24. http://csrc.nist.gov/encryption/modes/
New Attacks against Standardized MACs

Antoine Joux (1), Guillaume Poupard (1), and Jacques Stern (2)

(1) DCSSI Crypto Lab, 51 Boulevard de La Tour-Maubourg, 75700 Paris 07 SP, France
{Antoine.Joux,Guillaume.Poupard}@m4x.org
(2) Département d'Informatique, École normale supérieure, 45 rue d'Ulm, 75230 Paris Cedex 05, France
[email protected]

Abstract. In this paper, we revisit the security of several message authentication code (MAC) algorithms based on block ciphers, when instantiated with 64-bit block ciphers such as DES. We essentially focus on algorithms that were proposed in the norm ISO/IEC 9797-1. We consider both forgery attacks and key recovery attacks. Our results improve upon the previously known attacks and show that all algorithms but one proposed in this norm can be broken by obtaining at most about 2^33 MACs of chosen messages and performing an exhaustive search of the same order as the search for a single DES key.
1 Introduction
Message authentication codes (MACs) are secret-key cryptographic primitives designed to ensure the integrity of messages sent between users sharing common keys. MAC algorithms are often based on block ciphers or hash functions. Among the MAC algorithms based on block ciphers, the CBC-MAC construction is probably the best known and most studied. Initially proposed in [2], it has been studied in many papers, both from the cryptanalytic point of view [12] and from the security point of view [3]. It is well known that this algorithm suffers from birthday-paradox-based weaknesses, and this fact is reflected both in the known attacks and in the security proofs for this mode of operation of block ciphers. Of course, with 64-bit block ciphers, the birthday paradox hits as soon as 2^32 messages have been authenticated with the same key. With high-speed networks and high-bandwidth applications, this is clearly not enough. In order to reach higher security, it is possible to use block ciphers with larger blocks, such as the AES, or more complicated MAC mechanisms secure beyond the birthday paradox, such as for example RMAC [11].

However, in many real-life applications, developers still use variants of the CBC-MAC such as the algorithms described in ISO/IEC 9797-1 [7]. Of course, none of these algorithms has a known security proof that holds beyond the

T. Johansson (Ed.): FSE 2003, LNCS 2887, pp. 170-181, 2003.
© International Association for Cryptologic Research 2003
birthday paradox barrier. Moreover, most of them have known forgery attacks using 2^32 known or chosen messages. Yet these forgery attacks are often seen as academic and impractical, and most developers only care about key recovery attacks. As long as finding the keys requires an exhaustive search for two or more DES keys simultaneously, i.e., as long as the 56-bit DES keys cannot be found one by one, the algorithm is deemed secure. According to this point of view, some algorithms in the norm are still insecure, but others are considered secure against key recovery attacks. More precisely, according to the informative annex B of ISO/IEC 9797-1, among the 6 MAC algorithms proposed in this standard, the first 3 have known efficient key recovery attacks, while the other 3 are considered to be secure in that sense. However, Coppersmith, Knudsen and Mitchell [6, 5] have proved that, whatever the padding method may be, the fourth algorithm is also insecure.

In this paper, we show that the fifth algorithm can also be cryptanalysed with efficient key recovery attacks. This algorithm consists of two parallel computations of CBC-MAC. As a consequence, both for efficiency and for security reasons, it is much preferable to use a classical CBC-MAC with retail using a 128-bit block cipher such as AES and, if needed, to truncate the MAC value to 64 bits. We also describe generic key recovery attacks against any MAC based on a single internal CBC chain.
2 Description of CBC-MAC and Some Variants
In this section, we give a general description of MAC algorithms based on a single CBC chain with a key K and fairly general, arbitrary initial and final keyed transformations. For the sake of simplicity, we assume that the initial computation I is applied to the first block of the message and yields the initial value of the CBC chain, which starts at the second block. We also assume that the final computation F is applied to the final value of the CBC chain and results in the MAC tag m. Since all MAC algorithms based on a single CBC chain we are aware of are of this type, we do not lose much generality. Furthermore, this formalism is also used in ISO/IEC 9797-1. More precisely, for a message M_1, M_2, ···, M_ℓ, the computation works as follows (see also figure 1):

- Let C_1 = I(M_1).
- For i from 2 to ℓ, let C_i = E_K(C_{i-1} ⊕ M_i).
- Let the MAC tag m be F(C_ℓ).

In order to fully specify a MAC algorithm of this type, it suffices to give explicit descriptions of I and F. The simplest example, the plain CBC-MAC, occurs when I is defined as E_K and F is the identity function. The ISO/IEC 9797-1 standard defines 6 MAC algorithms. The first 4 are defined in the following table:
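The three steps above can be sketched directly, with I and F passed in as parameters. The hash-based stand-in for E_K and the example key are assumptions of this sketch (a real instantiation would use a block cipher such as DES):

```python
import hashlib

BLOCK = 8  # 64-bit blocks, as for DES

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def toy_E(key, block):
    # Stand-in for the block cipher E_K (illustration only).
    return hashlib.sha256(key + block).digest()[:BLOCK]

def generic_cbc_mac(I, F, key, blocks):
    # C_1 = I(M_1); C_i = E_K(C_{i-1} xor M_i) for i >= 2; tag m = F(C_l).
    c = I(blocks[0])
    for m in blocks[1:]:
        c = toy_E(key, xor(c, m))
    return F(c)

# Example: ISO/IEC 9797-1 algorithm 1, i.e. I = E_K and F = identity.
k = b"K"
tag = generic_cbc_mac(lambda m: toy_E(k, m), lambda c: c, k,
                      [b"block001", b"block002"])
```

Plugging in other choices of I and F recovers the other single-chain algorithms of the table below, which is the point of the formalism.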
    Algorithm          initial              final                reference
    ISO/IEC 9797-1     transformation       transformation
    1                  E_K                  (none)               [7, 10, 2]
    2                  E_K                  E_K1                 [7]
    3                  E_K                  E_K ∘ D_K1           [7, 1]
    4                  E_K2 ∘ E_K           E_K1                 [7, 9]
MAC algorithm 5 is defined as the exclusive-or of two MAC values computed using algorithm 1 with different keys. MAC algorithm 6 is similar but uses the exclusive-or of two MACs computed with algorithm 4. Even though the algorithms of [7] are defined with up to 6 secret keys, it is advised to derive them from only one or two keys. In the following, we propose attacks that do not try to take advantage of such key derivation techniques. It should also be noticed that ISO/IEC 9797-1 defines a complete MAC algorithm by specifying which padding should be used and whether a final truncation is applied to the result. Our attacks immediately apply to the standard paddings, but we do not consider messages that include the bit length as a first block (padding 3 of [7]).
[Fig. 1. Generic CBC MAC algorithm. The first block M_1 enters the initial transformation I to give C_1; each subsequent block M_i is XORed into the chain and encrypted under E_K to give C_i; the final chain value C_ℓ enters F to give MAC(M).]
3 Overview of Classical Attacks

3.1 Birthday Paradox Forgery of Any Generic CBC MAC Algorithm
Let us assume that the final transformation F of a generic CBC MAC algorithm is a permutation. Then, we observe MAC tags of known messages. If we denote
    Algorithm                    complexity(*)                        reference
    algorithm 1                  [2^56, 1, 0, 0]
    algorithms 2 and 3           [3 × 2^56, 2^32, 0, 0]
    algorithm 4                  [4 × 2^56, 2^32, 2, 0]               [6]
      with paddings 1 and 2      or [4 × 2^56, 1, 1, 2^56]
    algorithm 4                  [4 × 2^56, 0, 2^64, 2^64]            [5]
      with padding 3             or [8 × 2^56, 2 × 2^32, 3 × 2^48, 0]

    (*) An [a, b, c, d] attack requires:
      - a off-line block cipher encipherments,
      - b known data string/MAC pairs,
      - c chosen data string/MAC pairs,
      - d on-line MAC verifications.

Fig. 2. Complexity of key recovery attacks against ISO/IEC 9797-1 MAC algorithms.
by n the block size (which is also the size of the MAC tag), then, according to the birthday paradox, the observation of O(2^{n/2}) MAC tags allows us to find a collision, i.e., two different messages M and M′ with the same MAC. For a 64-bit block cipher such as DES or triple-DES, this means that a collision occurs after the computation of only about 2^32 MAC tags. More precisely, we write the blocks of the colliding messages M and M′ in the following way:

    M  = (M_1, M_2, ···, M_{ℓ1}, N_1, ···, N_{ℓ2}),
    M′ = (M′_1, M′_2, ···, M′_{ℓ′1}, N_1, ···, N_{ℓ2}),

with M_{ℓ1} ≠ M′_{ℓ′1}. Then it is easy to check that, for any blocks N′_1, ···, N′_{ℓ′2},

    MAC(M_1, M_2, ···, M_{ℓ1}, N′_1, ···, N′_{ℓ′2}) = MAC(M′_1, M′_2, ···, M′_{ℓ′1}, N′_1, ···, N′_{ℓ′2}).

Consequently, we obtain a forgery attack where querying the MAC tag of one message yields the (same) MAC tag of a different message. Furthermore, using the same notations, we can also note the following identity, which will be used in the sequel:

    MAC(M_1, M_2, ···, M_{ℓ1-1}, X, N′_1, ···, N′_{ℓ′2})
        = MAC(M′_1, M′_2, ···, M′_{ℓ′1-1}, X ⊕ M_{ℓ1} ⊕ M′_{ℓ′1}, N′_1, ···, N′_{ℓ′2}).
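The forgery can be simulated end to end at a toy block size where the birthday bound is actually reachable; the 16-bit blocks, the fixed seed, and the hash-based stand-in cipher below are all assumptions of this sketch:

```python
import hashlib
import random

BLOCK = 2  # toy 16-bit blocks, so a collision appears after about 2^8 messages

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def toy_E(key, block):
    # Stand-in for the block cipher E_K (illustration only).
    return hashlib.sha256(key + block).digest()[:BLOCK]

def cbc_mac(key, blocks):
    # Plain CBC-MAC with zero IV and F = identity.
    c = bytes(BLOCK)
    for m in blocks:
        c = toy_E(key, xor(c, m))
    return c

rng = random.Random(0)
key = b"k"
seen = {}
collision = None
while collision is None:  # birthday search over random 2-block messages
    msg = (rng.randbytes(BLOCK), rng.randbytes(BLOCK))
    tag = cbc_mac(key, msg)
    if tag in seen and seen[tag] != msg:
        collision = (seen[tag], msg)
    seen[tag] = msg

m1, m2 = collision
# Since F is the identity here, a tag collision means the internal chain
# values collide, so appending any common suffix yields a forgery:
# MAC(m1 || N) == MAC(m2 || N) for every suffix N.
suffix = (b"N1", b"N2")
assert cbc_mac(key, m1 + suffix) == cbc_mac(key, m2 + suffix)
```

With 64-bit blocks the same search costs about 2^32 queries, which is the point of the section.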
In conclusion, independently of the key size and of the complexity of the initial and final transformations I and F, CBC MAC algorithms are vulnerable to forgery attacks if the block size is too small. However, such attacks are often seen as academic and impractical by developers. That is why, in the following, we mainly focus on key recovery attacks.

3.2 Attacking Algorithm 1 from ISO/IEC 9797-1
The first and simplest MAC algorithm, which has also been standardized by NIST [10], can be attacked in many different ways. Used with the simple DES
block cipher, the knowledge of one MAC tag allows, through an exhaustive search on the key K, recovery of this key. Furthermore, the algorithm does not have any final "retail", so it is easy to obtain valid MAC tags of concatenated messages using a so-called "xor-forgery".

3.3 Attacking Algorithms 2 and 3 from ISO/IEC 9797-1
If the initial transformation is a single application of the block cipher, as in algorithms 2 and 3, and if the final transformation is a permutation, the observation of a collision allows recovery of the key K, as in the case of algorithm 1. The attack [12] goes as follows. Observe MAC tags of known messages of at least two blocks until a collision occurs. By the birthday paradox, such an event should appear after the observation of O(2^{n/2}) messages, where n is the block size. Since F is a permutation, the collision is also present at the end of the CBC chain, so we obtain a test for an exhaustive search on K. When using DES, we need the observation of 2^32 MAC values for known messages, followed by an exhaustive search for a single DES key. Then the key K1 used in F can be recovered by another exhaustive search.
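A toy end-to-end run of this key recovery, shrunk to an 8-bit key space and 16-bit blocks (all names and values here are the sketch's assumptions, not the attack's real parameters):

```python
import hashlib
import random

BLOCK = 2  # toy 16-bit blocks

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def toy_E(key_byte, block):
    # Stand-in for the block cipher E_K, keyed by one byte for the toy search.
    return hashlib.sha256(bytes([key_byte]) + block).digest()[:BLOCK]

def chain(key_byte, blocks):
    # CBC chain value just before the final transformation F.
    c = toy_E(key_byte, blocks[0])      # initial transformation I = E_K
    for m in blocks[1:]:
        c = toy_E(key_byte, xor(c, m))
    return c

secret = 123  # the unknown toy key

# Collect known messages until two of them collide on the chain value; any
# permutation F preserves this equality, so the attacker can observe it.
rng = random.Random(1)
seen = {}
pair = None
while pair is None:
    msg = (rng.randbytes(BLOCK), rng.randbytes(BLOCK))
    c = chain(secret, msg)
    if c in seen and seen[c] != msg:
        pair = (seen[c], msg)
    seen[c] = msg

# Exhaustive search: every key reproducing the collision is a candidate for K.
candidates = [k for k in range(256) if chain(k, pair[0]) == chain(k, pair[1])]
```

The true key always survives the test; wrong keys pass only with probability about 2^{-n} each, so a single collision essentially pins down K.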
4 Devising Some Tools
Throughout this section, we assume that n denotes the block size used in the MAC algorithms, or equivalently in the basic block cipher E we rely on. We are mostly interested in the case where n is 64 bits. The attacks presented in this paper mostly rely on techniques that allow us to learn the exclusive-or of two intermediate values present in two of the core CBC chains.

4.1 Exclusive-Or of Intermediate Values in CBC Chains
We first recall a technique of Coppersmith and Mitchell [6]. We assume that we are given any generic message authentication code algorithm based on a single CBC computation chain with block cipher E and key K, as defined in Section 2. Clearly, by fixing the last blocks of the message, we turn the final computation into a function of the final output of the CBC chain. In many cases, such as algorithms 1 to 4 of ISO/IEC 9797-1 without final truncation, this function is in fact a permutation, and we are able to learn whether the outputs of two different chains are equal or not. Note that this hypothesis is not essential. Indeed, if the output function is not a permutation, it suffices to observe the outputs of several computation pairs done with different final blocks: when the outputs of the two chains are equal, all pairs will contain two identical values, whatever the final transformation may be. As a consequence, we may ignore the final computation and assume that we can learn whether the outputs of two chains are equal or not. Let M and N be two messages of respective lengths ℓ_M and ℓ_N. Let C_{ℓ_M} denote the final value of the CBC chain computed for message M and D_{ℓ_N} the final value of the chain computed for message N. Now form a message
M^(T) by adding a single block T right at the end of the CBC chain for message M. Likewise, add a single block U at the end of N and form N^(U). Writing down the equations of the two CBC chains, we find that the final values for messages M^(T) and N^(U) are respectively:

C^(T)_{ℓM+1} = E_K(C_{ℓM} ⊕ T),
D^(U)_{ℓN+1} = E_K(D_{ℓN} ⊕ U).

Given 2^{n/2} different messages M^(T) and 2^{n/2} different N^(U) along with their MACs, we find a MAC collision with high probability. Since E_K is a permutation, such a collision implies that:

C_{ℓM} ⊕ T = D_{ℓN} ⊕ U.

As a consequence, we learn the value of C_{ℓM} ⊕ D_{ℓN}, namely T ⊕ U. Note that by structuring our choices for T and U, we can find a collision with probability 1. One possible choice is to fix the n/2 high-order bits of T and let the n/2 remaining bits of T cover the 2^{n/2} possible choices. Similarly, we fix the low-order bits of U and let the high-order bits cover the possible choices. Using the technique presented in this section requires the MAC computation of 2^{1+n/2} chosen messages, i.e., 2^{33} chosen messages for 64-bit blocks.
4.2 Multiple Exclusive-Or of Intermediate Values in CBC Chains
We now explain how to efficiently obtain a large number of exclusive-ors of intermediate values in CBC chains, following ideas initially proposed in [5]. This useful tool will be applied in section 6.2 to attack general MAC algorithms based on CBC chains. We first introduce a notation: for any message M formed with blocks M_1, M_2, ..., M_ℓ, we denote by Internal(M, i) the intermediate value of the CBC chain at position i:

Internal(M, i) = E_K(... E_K(E_K(I(M_1) ⊕ M_2) ⊕ M_3) ... ⊕ M_i).

We consider a set of 2^{αn} unknown intermediate values X_j. For each such value, we build a set S_j of 2^{βn} MAC tags where X_j is the penultimate intermediate value. Formally, this means that X_j = Internal(M[j], i[j]) for a fixed message M[j] and an index i[j] smaller than the block length of M[j]. Then we choose 2^{βn} blocks T_k and query the MAC tags of the messages (M[j]_1, ..., M[j]_{i[j]}, T_k), for the 2^{βn} values of k, in order to build the set S_j. Next, we compare the values of the sets S_j. If the same MAC tag appears in both S_a and S_b, we learn the exclusive-or of X_a and X_b, exactly as in the previous section. The probability of obtaining X_a ⊕ X_b for fixed indexes a and b is about 2^{βn} × 2^{βn}/2^n. When both X_a ⊕ X_b and X_b ⊕ X_c are known, X_a ⊕ X_c can easily be deduced. In order to construct many exclusive-ors, we build a graph whose vertices are the 2^{αn} unknown values X_j and where an edge links two vertices with known
Antoine Joux, Guillaume Poupard, and Jacques Stern
[Figure: CBC chain over blocks M_1, ..., M_ℓ; the first block is encrypted under K and additionally under K1, each subsequent block is XORed with the previous chain value C_i and encrypted under K, and the final chain value C_ℓ is encrypted under K2 to produce MAC(M).]
Fig. 3. Algorithm 4 from ISO/IEC 9797–1.
exclusive-or obtained by collision of related sets S_j. When a path exists in this graph from X_a to X_b, X_a ⊕ X_b can be computed. We claim that this graph behaves like a random graph, and well-known results [4, 8] on such graphs say that as soon as the number of edges is larger than the number of vertices, a "giant component" (with size linear in the total number of vertices) appears. More precisely, with s vertices and cs/2 randomly placed edges, with c > 1, there is a single giant component whose size is almost exactly (1 − t(c))s, where [4]

t(c) = (1/c) · Σ_{k=1}^{∞} (k^{k−1}/k!) · (c·e^{−c})^k.
Since the number of edges is about 2^{(2α+2β−1)n}, we obtain that, if α + 2β is larger than 1, the exclusive-or of all pairs of intermediate values in the giant component can be learned. Moreover, with a number of edges larger than the number of vertices, the giant component covers about 79% of all vertices. Furthermore, the shortest path between most pairs of vertices in the giant component is of logarithmic length (i.e., linear in n). As a consequence, we can efficiently learn the exclusive-or of a fixed proportion, say one half, of the vertices in time O(n × 2^{αn}). In conclusion, we obtain the exclusive-ors of O(2^{αn}) intermediate values by asking for O(2^{(α+β)n}) MAC tags, with α + 2β ≥ 1.
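The path-based deduction of exclusive-ors inside a component can be organized with a union-find structure that stores, for every vertex, its XOR offset to the component representative. This is our own sketch of the bookkeeping (the function names and the 16-bit toy values are ours, not from the paper):

```python
import random

parent, offset = {}, {}  # offset[x] = (value at x) xor (value at root of x)

def find(x):
    if parent[x] != x:
        p = parent[x]
        root = find(p)          # compress p first, so offset[p] is to the root
        offset[x] ^= offset[p]
        parent[x] = root
    return parent[x]

def union(a, b, d):             # record one graph edge: value_a xor value_b = d
    ra, rb = find(a), find(b)
    if ra != rb:
        parent[ra] = rb
        offset[ra] = offset[a] ^ offset[b] ^ d

def xor_of(a, b):               # known iff a and b lie in the same component
    if find(a) != find(b):
        return None
    return offset[a] ^ offset[b]

rng = random.Random(1)
hidden = [rng.getrandbits(16) for _ in range(100)]   # the unknown X_j
parent = {i: i for i in range(100)}
offset = {i: 0 for i in range(100)}
for _ in range(150):            # more edges than vertices: giant component
    a, b = rng.randrange(100), rng.randrange(100)
    union(a, b, hidden[a] ^ hidden[b])
# every XOR the structure returns agrees with the hidden values
assert all(xor_of(a, b) in (None, hidden[a] ^ hidden[b])
           for a in range(100) for b in range(100))
```

With path compression, each query costs near-constant amortized time, matching the O(n × 2^{αn}) bound up to the logarithmic path lengths mentioned above.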
5 Advanced Attacks against Algorithm 4 from ISO/IEC 9797–1
Let us recall that in this MAC algorithm, the final value of the CBC chain is encrypted by a final application of the block cipher, with a specific key K2
(see figure 3). Coppersmith and Mitchell [6] have proposed the following attack against this algorithm when padding methods 1 or 2 are used. Notice that even if padding method 3 is preferred, variants based on multiple exclusive-or computations can be applied [5]. The attack goes as follows. Observe MAC tags of messages of at least two blocks until a collision occurs. Let M and N denote such messages, of respective lengths ℓ_M and ℓ_N, with common MAC tag m_coll. Since the block cipher E_{K2} is a permutation, using the notation of section 4.1, we obtain C_{ℓM} = D_{ℓN}. Consequently, E_K(C_{ℓM−1} ⊕ M_{ℓM}) = E_K(D_{ℓN−1} ⊕ N_{ℓN}) and C_{ℓM−1} ⊕ M_{ℓM} = D_{ℓN−1} ⊕ N_{ℓN}, so

C_{ℓM−1} ⊕ D_{ℓN−1} = M_{ℓM} ⊕ N_{ℓN}.

Finally, query the MAC tags m_M and m_N of the two truncated messages M_1...M_{ℓM−1} and N_1...N_{ℓN−1}. Since m_M = E_{K2}(C_{ℓM−1}) and m_N = E_{K2}(D_{ℓN−1}), we obtain

E^{−1}_{K2}(m_M) ⊕ E^{−1}_{K2}(m_N) = M_{ℓM} ⊕ N_{ℓN}.
Then K2 is found by exhaustive search. We expect that a single value will remain for K2 when the key size is no larger than the block size. Once K2 is known, we can recover K through a second exhaustive search using the test

E_K(E^{−1}_{K2}(m_M) ⊕ M_{ℓM}) = E^{−1}_{K2}(m_coll).
Finally, recovering K1 can be done with a final exhaustive search. This attack requires the observation of 2^{n/2} MAC values for known messages, the computation of two MAC values for chosen messages, and finally three independent exhaustive searches, on K2, K, and K1. When using DES, we need 2^{32} known messages and 2 chosen messages, followed by an exhaustive search about four times as expensive as a simple exhaustive search on a single DES key.
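At toy scale, the first exhaustive search (for K2) can be illustrated as follows. This is our own sketch: an 8-bit random permutation stands in for DES, so the key space is 2^8 instead of 2^56, and the internal values are chosen by us for the simulation:

```python
import random

def E(k, x):  # toy keyed permutation, NOT a real block cipher
    p = list(range(256)); random.Random(k).shuffle(p); return p[x]

def D(k, y):  # inverse of the toy permutation
    p = list(range(256)); random.Random(k).shuffle(p); return p.index(y)

K2 = 0x5A                      # secret final-encryption key (unknown to attacker)
c1, c2 = 0x13, 0xC7            # internal values C_{lM-1} and D_{lN-1}
diff = c1 ^ c2                 # equals M_lM xor N_lN, known to the attacker
mM, mN = E(K2, c1), E(K2, c2)  # observed MAC tags of the truncated messages
# keep every key whose two decryptions reproduce the known difference
cands = [L for L in range(256) if D(L, mM) ^ D(L, mN) == diff]
assert K2 in cands             # the true key always survives the test
print("candidate keys:", len(cands))
```

Wrong keys survive only with probability about 2^{-n}, so for key size no larger than the block size a handful of candidates at most remain, as the text notes.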
6 New Attacks

6.1 Attacking Algorithm 5 from ISO/IEC 9797–1
Let us recall that in this MAC algorithm, each message goes through two independent plain CBC chains with keys K1 and K2¹ (see figure 4). The MAC tag is the exclusive-or of the final values of the two chains. We use a variation on the technique from subsection 4.1 to attack this algorithm. The first step of the attack is to find a collision between the MAC tags of two (short, i.e., one-block) messages M and N. Let m denote the common MAC tag of M and N. Moreover, let E_{K1}(M), E_{K2}(M), E_{K1}(N) and E_{K2}(N) denote the final values of the 4 CBC chains involved. We have

m = E_{K1}(M) ⊕ E_{K2}(M) = E_{K1}(N) ⊕ E_{K2}(N).

¹ In algorithm 5 from ISO/IEC 9797–1, the keys K1 and K2 are derived from a single key K.
[Figure: two parallel CBC chains over the blocks M_1, ..., M_ℓ, one under K1 producing C_1, ..., C_ℓ and one under K2 producing D_1, ..., D_ℓ; the MAC is the exclusive-or of the two final values C_ℓ and D_ℓ.]
Fig. 4. Algorithm 5 from ISO/IEC 9797–1.
We would like to learn the value δ = E_{K1}(M) ⊕ E_{K1}(N) = E_{K2}(M) ⊕ E_{K2}(N). This can be done by computing the MAC values of two long lists of messages: M^(T), formed by adding a single block T to M, and N^(U), formed by adding a block U to N. It is easy to check that whenever T ⊕ U = δ, we get a collision for both CBC chains and, of course, a collision on the MAC values of the extended messages. Moreover, this kind of double collision can be distinguished from "ordinary" collisions. Indeed, if we add any block, say the zero block, at the end of both M^(T) and N^(U), the resulting messages still collide. Once δ, M^(T) and N^(U) are known, we can proceed either with a forgery attack or a key recovery attack. It should be noted that for this particular algorithm, no efficient forgery attack was previously known.

Forgery attack. With a double collision between M^(T) and N^(U) in hand, forgery is easy. Indeed, for any message completion L, the MAC value of M^(T) concatenated with L and the MAC value of N^(U) concatenated with L are necessarily equal, since the double collision propagates along L. Thus, it is easy to ask for the MAC tag of one of the two extended messages and to guess that the MAC tag of the other extended message has the same value. The cost of the forgery attack is independent of the size of the keys of E; it is only a function of the block size n. The attack requires the computation of 2^{1+n/2} MAC values.

Key recovery attack. Since we know the value of δ, we know the two values E_{K1}(M) ⊕ E_{K1}(N) and E_{K2}(M) ⊕ E_{K2}(N) (both are equal to δ). Thus, we get two independent conditions, respectively on K1 and K2, and the keys can be
recovered with a simple exhaustive search. Indeed, assume that M and N are distinct one-block messages; then search for a key K that satisfies

E_K(M_1) ⊕ E_K(N_1) = δ.

When the key size is no larger than the block size, we expect to find two different solutions during the exhaustive search, K1 and K2. Furthermore, if there are more than two candidate solutions, we simply form all possible candidate pairs and keep the pair which is compatible with all the MAC values we already know. The key recovery attack requires the observation of 2^{n/2} MAC tags followed by the computation of 2^{1+n/2} MAC values. When using DES, we need 2^{33} messages followed by an exhaustive search roughly equivalent to a simple exhaustive search on a single DES key.

6.2 General MAC Algorithms with a Single CBC Chain
In this subsection, we consider key recovery attacks against general MAC algorithms based on a single CBC chain with a key K. We let I and F denote the initial and final transforms as in section 2. Our goal is to recover the key K of the main CBC chain as efficiently as possible. We assume that I and F are both keyed transformations which cannot be computed by the attacker, since he (at first) does not know any key material. We first address the special case where I and F are closely related transformations (with identical keys) before considering the general case.

The special case I = F ∘ E_K. For example, a natural MAC scheme could apply the same triple-DES transformation, both at the beginning and at the end of the MAC computation. With our notations, this means that I = E_K ∘ D_{K1} ∘ E_K and F = E_K ∘ D_{K1}. None of the previously described attacks apply to such a scheme. Let us consider 2^{αn} blocks M_i and the associated internal values X_i = Internal(M_i, 1) = I(M_i). In order to learn I(M_i), we first remark that Δ_I(M_i) = I(M_i) ⊕ I(M_i ⊕ 1) can be seen as an identifier for M_i. Of course, this identifier is somewhat ambiguous, since Δ_I(M_i) and Δ_I(M_i ⊕ 1) are identical. However, any given value of the identifier has only a few associated blocks (unless I is almost linear, in which case simpler attacks are available). With this in mind, we can apply the technique of subsection 4.2, which computes Δ_I(M_i) for O(2^{αn}) blocks M_i by asking for 2^{(α+β)n} MAC tags (with α + 2β larger than 1). Then, we ask for the MAC tags of pairs of messages whose only difference is that the last blocks are respectively T_j and T_j ⊕ 1. We denote by Y_j the intermediate value after the exclusive-or with T_j, i.e., the input of the final transformation F ∘ E_K = I. Consequently, when the last block is T_j ⊕ 1, the input of I is Y_j ⊕ 1. So, the exclusive-or of the queried MAC tags reveals Δ_I(Y_j).
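The claim that Δ_I(M) = I(M) ⊕ I(M ⊕ 1) identifies M up to the pair {M, M ⊕ 1} can be checked at toy scale. In this sketch of ours, a random 8-bit permutation stands in for the keyed transform I:

```python
import random

perm = list(range(256))
random.Random(7).shuffle(perm)
I = lambda x: perm[x]                 # stand-in for the keyed initial transform

buckets = {}                          # identifier value -> blocks having it
for M in range(256):
    buckets.setdefault(I(M) ^ I(M ^ 1), []).append(M)

# every bucket holds whole pairs {M, M xor 1}, and most hold exactly one
# pair, so the identifier is only mildly ambiguous
assert all(len(v) % 2 == 0 for v in buckets.values())
print("identifiers:", len(buckets),
      "largest bucket:", max(map(len, buckets.values())))
```

A bucket larger than one pair corresponds to the rare extra ambiguity the text mentions; a few additional MAC queries resolve it.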
Finally, if we compute 2^{(1−α)n} values Δ_I(Y_j), we obtain, with high probability, a collision with one of the Δ_I(M_i). As we already explained, Δ_I is a good identifier, so we probably have M_i = Y_j. However, there might be false alarms, i.e., apparent collisions not resulting from real ones. False alarms are easy to detect by computing a few additional MAC values. Given a real collision, we know that I(M_i) is equal to the MAC tag of the message whose last intermediate value is Y_j. Thus, since we have learned the initial value of the CBC chain, we can compute K through exhaustive search, as we previously explained. This attack requires a total of approximately 2^{(α+β)n} + 2^{(1−α)n} MAC computations for chosen messages, with α + 2β ≥ 1. The best compromise is obtained with α = β = 1/3, and the number of queried MAC tags is then about 2^{2n/3}. When using DES, we need 2^{43} messages followed by an exhaustive search roughly equivalent to a simple exhaustive search on a single DES key.

The general case with arbitrary I and F. We finally explain that, even if I and F are arbitrary transformations, the internal key K can still be attacked, even though the complexity is less practical than in the previous cases. Again using the technique of section 4.2, we can consider 2^{αn} intermediate values X_i which are unknown but whose pairwise exclusive-ors are known. This requires querying 2^{(α+1)n/2} MAC tags of chosen messages. Then, for each intermediate value X_i, the technique of section 4.1 allows us to compute Δ_i = E_K(X_i) ⊕ E_K(X_i ⊕ 1), asking 2^{n/2} MAC computations for each X_i. With this list of Δ_i in mind, we now guess a key K′ and a block X, and we compute Δ = E_{K′}(X) ⊕ E_{K′}(X ⊕ 1). If K′ = K and X is one of the X_i's, Δ is in the list of the Δ_i's. We do not know the related X_i value, but we know δ = X_i ⊕ X_j for any other j. Consequently, we can apply the following test:

E_{K′}(X ⊕ δ) ⊕ E_{K′}(X ⊕ δ ⊕ 1) = Δ_j.

This allows us to know whether we have really guessed the correct key K or whether it is only a false alarm.
The probability of correctly guessing K′ = K with X one of the X_i's is about 1/(2^k × 2^{(1−α)n}), where k is the key size of K. The total number of MAC queries is 2^{(α+1)n/2} + 2^{n/2}, and the complexity of the search on K′ and X is O(2^{k+(1−α)n}). Depending on the choice of α, we obtain different compromises. The main ones are:

α parameter | number of MAC queries | search complexity
α = 0       | O(2^{n/2+1})          | O(2^{k+n})
α = 1/2     | O(2^{3n/4})           | O(2^{k+n/2})
α = 1       | O(2^n)                | O(2^k)
7 Conclusion
The main conclusion of this paper is that the use of MAC algorithms based on an internal CBC chain using a weak block cipher such as DES must be
carefully reconsidered, whatever the initial and final transformation may be. A much more secure approach is to use a strong block cipher such as AES with a provably secure MAC algorithm.
Acknowledgments We would like to thank the anonymous referees for pointing out important references.
References

1. ANSI X9.19, American National Standard – Financial institution retail message authentication, 1986.
2. ANSI X9.9, American National Standard – Financial institution message authentication (wholesale), 1982. Revised in 1986.
3. M. Bellare, J. Kilian, and P. Rogaway. The Security of the Cipher Block Chaining Message Authentication Code. In Crypto '94, LNCS 839, pages 362–399. Springer-Verlag, 1994.
4. B. Bollobás. Random Graphs. Academic Press, New York, 1985.
5. D. Coppersmith, L.R. Knudsen, and C.J. Mitchell. Key recovery and forgery attacks on the MacDES MAC algorithm. In Crypto 2000, LNCS 1880, pages 184–196. Springer-Verlag, 2000.
6. D. Coppersmith and C.J. Mitchell. Attacks on MacDES MAC algorithm. Electronics Letters, 35:1626–1627, 1999.
7. ISO/IEC 9797–1, Information technology – Security techniques – Message Authentication Codes (MACs) – Part 1: Mechanisms using a block cipher, 1999.
8. S. Janson, T. Łuczak, and A. Ruciński. Random Graphs. John Wiley, New York, 1999.
9. L.R. Knudsen and B. Preneel. MacDES: MAC algorithm based on DES. Electronics Letters, 34:871–873, 1998.
10. NIST. Computer Data Authentication, May 1985. Federal Information Processing Standards Publication 113.
11. NIST. Recommendation for Block Cipher Modes of Operation: The RMAC Authentication Mode, November 2002. NIST Special Publication 800-38B.
12. B. Preneel and P.C. van Oorschot. On the security of iterated Message Authentication Codes. IEEE Transactions on Information Theory, 45(1):188–199, January 1999.
Analysis of RMAC

Lars R. Knudsen¹ and Tadayoshi Kohno²

¹ Department of Mathematics, Technical University of Denmark
[email protected]
² Department of Computer Science and Engineering, University of California at San Diego
[email protected]

Abstract. In this paper the newly proposed RMAC system is analysed. The scheme allows a (traditional MAC) attacker some control over one of the two keys of the underlying block cipher, and this makes it possible to mount several related-key attacks on RMAC. First, an efficient attack on RMAC when used with triple-DES is presented, which relies also on other findings in the proposed draft standard. Second, a generic attack on RMAC is presented which can be used to find one of the two keys in the system faster than by exhaustive search. Third, related-key attacks on RMAC in a multi-user setting are presented. In addition to beating the claimed security bounds in NIST's RMAC proposal, this work suggests that, as a general principle, one may wish to avoid designing modes of operation that use related keys.
1 Introduction
RMAC [6, 2] is an authentication system based on a block cipher. The block cipher algorithms currently approved for use in RMAC are the AES and triple-DES. RMAC is based on a block cipher with b-bit blocks and k-bit keys. RMAC takes as inputs: a message D of an arbitrary number of bits, two keys K1, K2 each of k bits, and a salt R of r bits, where r ≤ k. It produces an m-bit MAC value, where m ≤ b. The method is as follows (see also Figure 1). First pad D with a 1 bit followed by enough 0 bits to ensure that the length of the resulting string is a multiple of b. Encrypt the padded string using the block cipher in CBC mode with the key K1. The last ciphertext block is then encrypted with the key K3 = K2 + R, where '+' is addition modulo 2 (bitwise exclusive-or). The resulting ciphertext is then truncated to m bits to form the MAC. The two keys K1, K2 may be generated from one k-bit key in a standard way [6]. There are five parameter sets in [6] for each of the two block sizes:

Parameter Set | b = 128: (r, m) | b = 64: (r, m)
I   | (0, 32)    | (0, 32)
II  | (0, 64)    | (64, 64)
III | (16, 80)   | n/a
IV  | (64, 96)   | n/a
V   | (128, 128) | n/a

T. Johansson (Ed.): FSE 2003, LNCS 2887, pp. 182–191, 2003.
© International Association for Cryptologic Research 2003
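The computation just described can be summarized by a small sketch (ours; an 8-bit random permutation replaces AES/triple-DES, the input is assumed already padded into b-bit blocks, and we take the m most significant bits for the truncation):

```python
import random

B = 8  # toy block size in bits (b = 8)

def E(key, x):  # toy keyed permutation, NOT a real block cipher
    p = list(range(2 ** B))
    random.Random(key).shuffle(p)
    return p[x]

def rmac(K1, K2, blocks, R, m):
    c = 0
    for d in blocks:        # CBC chain under K1 over the padded blocks
        c = E(K1, c ^ d)
    tag = E(K2 ^ R, c)      # final encryption under K3 = K2 + R
    return tag >> (B - m)   # truncate to m bits

tag = rmac(0x17, 0xA5, [0x01, 0x80], R=0x3C, m=8)
# with m = b and a single block, RMAC is just E_{K2+R}(E_{K1}(D1))
assert rmac(0x17, 0xA5, [0x42], 0x3C, 8) == E(0xA5 ^ 0x3C, E(0x17, 0x42))
```

Only the final encryption depends on the salt R, which is exactly the structural feature the attacks in the following sections exploit.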
Fig. 1. The RMAC algorithm with keys K1, K2 on input a padded string D_1 D_2 ··· D_n and salt R. E is the underlying block cipher and '+' denotes addition modulo 2. The resulting MAC is M. We assume for illustrative purposes m = b and k = r.
In Appendix A of [6] it is noted that for RMAC with two independent keys K1 and K2, an exhaustive search for the keys is expected to require the generation of 2^{2k−1} MACs, where k is the size of one key. However, for the cases with m = b this can be done much faster under a chosen message attack with just one known message and one chosen message. Independently of how the two keys are generated, an exhaustive search for the key K2 requires only an expected number of 2^k decryptions of the block cipher [5]. Given a message D and its MAC computed with the salt R, request the MAC of D again. With high probability this second MAC is computed with a salt R′ such that R′ ≠ R. For these two MACs, the values just before the final encryption will be equal, and K2 can be found after about 2^k decryption operations. Subsequently, K1 can be found in roughly the same time. The rest of this paper is organized as follows. In §2 an attack on RMAC used with three-key triple-DES is presented. The attack finds all three DES keys in time roughly three times that of an exhaustive search for a DES key, using only a few MACs. §3 presents an attack on RMAC used with any block cipher. The attack finds one of the two keys in the system faster than exhaustive search. In §4 we present a construction-level related-key attack against RMAC, and in §5 we describe some ways to exploit the related-key attack of §4 when attacking multiple users.
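The 2^k search for K2 sketched above can be illustrated at toy scale (our own sketch: 8-bit keys and blocks, and a random permutation instead of a real cipher, so the search space is 2^8 rather than 2^k):

```python
import random

def E(k, x):  # toy keyed permutation
    p = list(range(256)); random.Random(k).shuffle(p); return p[x]

def D(k, y):  # inverse permutation
    p = list(range(256)); random.Random(k).shuffle(p); return p.index(y)

K2, c = 0x9E, 0x42          # c: common value just before the final encryption
R1, R2 = 0x11, 0x7D         # the two observed salts (R1 != R2)
m1, m2 = E(K2 ^ R1, c), E(K2 ^ R2, c)   # the two MACs of the same message
# a guess L for K2 is consistent iff both decryptions meet in the same value
cands = [L for L in range(256) if D(L ^ R1, m1) == D(L ^ R2, m2)]
assert K2 in cands
print("surviving K2 candidates:", len(cands))
```

Each key guess costs two decryptions, matching the roughly 2^k decryption operations claimed above.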
2 Attack on RMAC with Three-Key Triple-DES
One of the block cipher algorithms approved for use in RMAC is triple-DES with 168-bit keys. Consider RMAC with parameter set II, that is, with 64-bit MACs and a 64-bit salt. The key for the final encryption is then K3 = K2 + (R ‖ 0^{104}). However, it is not specified in [6] how the three DES keys are
derived from K3. Assume that the first DES key is taken as the rightmost 56 bits of K2 + (R ‖ 0^{104}), the second DES key as the middle 56 bits, and the third DES key as the leftmost 56 bits. Assume an attacker is given two MACs of the same message D but using two different values, R and R′, of the salt. Assume that the rightmost eight bits of both R and R′ are equal. Then the encryption of the (identical) last block for the two MACs is done using triple-DES, where for one MAC the key used is (a, b, c), and for the other MAC the key used is (a, b, c ⊕ d). Since the attacker knows d, he can decrypt through a single DES operation, find c in 2^{56} operations, and derive one of the three DES keys [3]. This attack has a probability of success of 2^{−8}. If the attack fails, it is repeated for other values of D, R, and/or R′. After the third DES key has been found, it is possible to find the second DES key with similar complexity. Note that eight bits of the salt affect the second DES key. Request the MAC of a message D2 using two different values of the salt. Decrypt through the final DES component with the third DES key. With probability 1 − 2^{−8}, the two second DES keys in the final encryption will be different as a result of the different salt values. Since the salts are known to the attacker, one finds the second DES key in about 2^{56} operations. Subsequently, the final DES key can be found using 2^{56} MAC verifications [4] as follows. Assume one is given the MACs, M_1 and M_2, of two different messages D_1 and D_2, each consisting of an arbitrary number of bits. Let P_1 and P_2 be the padding bits used in the respective MAC computations. Request the MAC, M_3, of the message D_1 ‖ P_1 ‖ E, where E is a one-block message. Let x_1, x_2 and x_3 be the values just before the final triple-DES encryptions in the computations of M_1, M_2 and M_3. Given the value of the final single-DES key of K2, one can compute also the MAC of the message D_2 ‖ P_2 ‖ (E ⊕ x_1 ⊕ x_2).
Note that the value just before the final triple-DES encryption in this case is x_3. Also note that the attacker has full control over the key bits which are modified by the (random) salts. Therefore this last part of the attack works regardless of how the salts are chosen, as long as the attacker knows them. In total, with 2 known and 1 chosen MAC, one finds the third DES key of K2 using 2^{56} MAC verifications, or alternatively using 2^{56} chosen messages.
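The core step, recovering the outer DES key from a known key difference d, can be checked with a toy cipher. In this sketch of ours, 8-bit random permutations replace the DES instances, so the 2^{56} search shrinks to 2^8, and the key and data values are arbitrary choices:

```python
import random

def E(k, x):  # toy keyed permutation standing in for one DES instance
    p = list(range(256)); random.Random(k).shuffle(p); return p[x]

def D(k, y):
    p = list(range(256)); random.Random(k).shuffle(p); return p.index(y)

a, b, c = 0x21, 0x84, 0x3D     # the three toy "DES" keys derived from K3
d = 0x0F                       # known difference in the outer key (from the salts)
x = 0x6B                       # identical last-block value for the two MACs
y = D(b, E(a, x))              # value entering the outer encryption (EDE mode)
m1, m2 = E(c, y), E(c ^ d, y)  # the two observed MACs
# peel the outer layer: the right key k satisfies D(k, m1) == D(k ^ d, m2)
cands = [k for k in range(256) if D(k, m1) == D(k ^ d, m2)]
assert c in cands
```

Each candidate costs two single decryptions, so the real attack separates one 56-bit key from the triple at single-DES cost.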
3 A Generic Attack
In this section we present an attack on the RMAC system with parameter set II for b = 64 and on RMAC with parameter set V for b = 128. The attack finds the value of K2, after which RMAC reduces to a simple CBC-MAC, for which it is well known that simple forgeries can be found. In the following, let d_K(x) denote the decryption of x using the key K for the underlying block cipher. The attack is based on multiple collisions.

Definition 1. A t-collision for a MAC is a set of t messages all producing the same MAC value.

We shall make use of the following lemma, which is easily proved.

Lemma 1. Let A, B, and C be boolean variables. Then

A ⇒ B ⇔ not(B) ⇒ not(A), and
A ⇒ (B AND C) ⇔ (not(B) OR not(C)) ⇒ not(A).

Let D be some message (with an arbitrary number of blocks). Then the MAC of D, MAC_{K1,K2}(D, R), is the last block from the CBC-encryption using K1, encrypted once again using the key K2 + R, where R is the salt. The attack goes as follows. Request the MACs of D for s different values of the salt R. Assume that the attacker finds a t-collision, where the salts are R_0, ..., R_{t−1}, and denote the common MAC value by M. For simplicity denote K2 + R_0 by K, and K2 + R_i by K + a_{i−1} for i = 1, ..., t − 1. The attacker guesses a key value L and computes the decryptions of the MAC value M using the keys L, L + a_0, ..., L + a_{t−2}. Then it holds for i = 0, ..., t − 2 that if L = K or L = K + a_i then d_L(M) = d_{L+a_i}(M). Using Lemma 1, one gets that if d_L(M) ≠ d_{L+a_i}(M) then L ≠ K and L ≠ K + a_i, for 0 ≤ i ≤ t − 2. Similarly, if d_{L+a_i}(M) ≠ d_{L+a_j}(M) then L ≠ K + a_i + a_j for 0 ≤ i ≠ j ≤ t − 2. In this way an exhaustive search for K2 can be made faster than brute force. In some rare cases one gets equal values in the inequality tests. As an example, if d_L(M) = d_{L+a_i}(M) for some i, then one needs to check if d_L(M) = d_{L+a_0}(M) = d_{L+a_1}(M) = ..., after which all false alarms are expected to be detected. The expected number of false alarms is t + C(t−1, 2).

Let us show the case of a 3-collision in more detail. Assume that the random numbers, the salts used, are R_0, R_1, and R_2 (which are known to the attacker). Since the messages are the same for all MACs and since the MACs are equal, say to M, one knows that the keys K2 + R_0, K2 + R_1, and K2 + R_2 all decrypt M to the same (unknown) value z, thus

d_K(M) = d_{K+a_0}(M) = d_{K+a_1}(M),

where K = K2 + R_0, a_0 = R_0 + R_1 and a_1 = R_0 + R_2. The following implications are immediate.

L = K             ⇒ d_L(M) = d_{L+a_0}(M)         AND d_{L+a_0}(M) = d_{L+a_1}(M)
L = K + a_0       ⇒ d_{L+a_0}(M) = d_L(M)         AND d_L(M) = d_{L+a_0+a_1}(M)
L = K + a_1       ⇒ d_{L+a_1}(M) = d_{L+a_0+a_1}(M) AND d_{L+a_1}(M) = d_L(M)
L = K + a_0 + a_1 ⇒ d_{L+a_0+a_1}(M) = d_{L+a_1}(M) AND d_{L+a_1}(M) = d_{L+a_0}(M)

Lemma 1 enables us to rewrite the above implications as follows.

d_L(M) ≠ d_{L+a_0}(M)         ⇒ L ≠ K
d_{L+a_0}(M) ≠ d_L(M)         ⇒ L ≠ K + a_0
d_{L+a_1}(M) ≠ d_L(M)         ⇒ L ≠ K + a_1
d_{L+a_1}(M) ≠ d_{L+a_0}(M)   ⇒ L ≠ K + a_0 + a_1
Table 1.
t                | 3   | 4   | 5   | 6   | 7   | 8   | 9   | 10  | 17
u = t + C(t−1,2) | 4   | 7   | 11  | 16  | 22  | 29  | 37  | 46  | 136
u/t              | 1.3 | 1.8 | 2.2 | 2.7 | 3.1 | 3.6 | 4.1 | 4.6 | 8.0
Take (guess) a key value L and compute d_L(M), d_{L+a_0}(M), and d_{L+a_1}(M). If d_L(M) ≠ d_{L+a_0}(M), then L ≠ K and L ≠ K + a_0; if d_{L+a_0}(M) ≠ d_{L+a_1}(M), then L ≠ K + a_0 + a_1; and if d_L(M) ≠ d_{L+a_1}(M), then L ≠ K + a_1. Summing up, with a 3-collision (provided a_0 and a_1 are different) one can check the values of four keys from three decryption operations. Let us next assume that there is a 4-collision. Let the four keys in the 4-collision be K, K + a_0, K + a_1, K + a_2. Then from the results of d_L(M), d_{L+a_0}(M), d_{L+a_1}(M), and d_{L+a_2}(M), one can check the validity of four keys. Moreover, by arguments similar to the case of a 3-collision, from the four decryptions one can check the values of all keys of the form K + a_i + a_j, where 0 ≤ i ≠ j ≤ 2. Thus from four decryption operations one can check 4 + C(3, 2) = 7 keys. This generalizes to the following result. With a t-collision one can check the values of u = t + C(t−1, 2) keys from t decryption operations. Table 1 lists values of t, u and u/t. It should be clear that t-collisions can be used to reduce a search for the key K2; one question is by how much. How many values of L need to be tested before the sets of keys {L, L + a_0, ..., L + a_{t−2}, L + a_0 + a_1, ..., L + a_{t−3} + a_{t−2}} cover the entire key space? Consider the case t = 3. One can assume a_0 ≠ a_1 (otherwise there is no 3-collision), and that with high probability there are two bit positions on which a_0 and a_1 (restricted to those positions) are linearly independent. Without loss of generality assume that these are the two most significant bits and that these bits are "01" for a_0 and "10" for a_1. Then a strategy is the following: let L run through all keys where the most significant two bits are "00". Then clearly the sets {L, L + a_0, L + a_1, L + a_0 + a_1} cover the entire key space, and an exhaustive search for K2 is reduced by a factor of 4/3, since in the attack one can check the value of four keys at the cost of three decryptions.
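The covering strategy for the 3-collision case can be verified directly (our sketch, with a toy 10-bit key space and arbitrarily chosen low-order bits for a_0 and a_1):

```python
# With the top two bits of a0 and a1 equal to "01" and "10", letting L run
# over the keys whose top two bits are "00" makes the sets
# {L, L+a0, L+a1, L+a0+a1} tile the whole key space: four keys are checked
# per three decryptions.
k = 10                              # toy key size in bits
a0 = (0b01 << (k - 2)) | 0x0D       # low bits arbitrary (our choice)
a1 = (0b10 << (k - 2)) | 0x2B
covered = set()
for L in range(2 ** (k - 2)):       # top two bits fixed to "00"
    covered |= {L, L ^ a0, L ^ a1, L ^ a0 ^ a1}
assert covered == set(range(2 ** k))
print("keys covered:", len(covered), "decryptions:", 3 * 2 ** (k - 2))
```

The four top-bit patterns 00, 01, 10, 11 are each hit exactly once per choice of L, which is why the tiling has no overlap.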
Consider the case t = 4. With high probability the b-bit vectors a_0, a_1, and a_2 are pairwise different. Also, with high probability there are three bit positions on which a_0, a_1, and a_2 are linearly independent (viewed as three-bit vectors). Without loss of generality assume that these are the three most significant bits and that they are "001" for a_0, "010" for a_1 and "100" for a_2. Then a strategy is the following: let L run through all keys where the most significant three bits are "000". Then clearly the sets {L, L + a_0, L + a_1, L + a_2, L + a_0 + a_1, L + a_0 + a_2, L + a_1 + a_2} cover 7/8 of the key space. Next fix the most significant three bits of L to "111", find other bit positions where a_0, a_1, and a_2 are linearly independent, and repeat the strategy. Thus, in the first phase of the attack one chooses 2^{b−3} values of L, does 4 × 2^{b−3} = 2^{b−1} decryptions, and can check 7 × 2^{b−3} keys. In the next phase of the attack one chooses 2^{b−6} values of L, does 4 × 2^{b−6} = 2^{b−4} decryptions, and can check 7 × 2^{b−6} keys. At this point, a total of 7 × 2^{b−3} + 7 × 2^{b−6} = 2^b − 2^{b−6} keys have been checked at the cost of about 2^{b−1} + 2^{b−4} decryptions. In total, an exhaustive search for K2 is reduced by a factor of almost two. For higher values of t the attacker's strategy becomes more complex. We claim that with high probability ("good" values of a_i) the factor saved in an exhaustive search for the key is close to the value of u/t (see Table 1). The following result shows the complexity of finding t-collisions [7].

Lemma 2. Consider a set of s randomly chosen b-bit values. With s = c · 2^{(t−1)b/t} one expects to get one t-collision, where c ≈ (t!)^{1/t}.

If it is assumed, for a fixed message D and a (randomly chosen) salt R, that the resulting MAC is a random m-bit value, one can apply the Lemma to estimate the number of texts needed to find a t-collision. Consider a few examples. With s = 2^{(b+1)/2} one expects to get one pair of colliding MACs, that is, one (2-)collision.
With s = 1.8 · 2^{2b/3} one expects to get a 3-collision, that is, three MACs with equal values (6^{1/3} ≈ 1.8). With s = 2.2 · 2^{3b/4} one expects to get one 4-collision (24^{1/4} ≈ 2.2). From Stirling's formula n! = √(2πn) · (n/e)^n · (1 + Θ(1/n)), one gets that (t!)^{1/t} ≈ t/e for large t. Thus, with s = (t/e) · 2^{(t−1)b/t} one expects to get a t-collision. Table 2 lists the complexities of finding t-collisions depending on the block size b. There are many variants of this attack depending on how many chosen texts the attacker has access to. Table 3 lists the complexities of some instantiations of the attacks, where for triple-DES the number of chosen texts has been chosen to be less than 2^{64} (since the salt can be a maximum of 64 bits), and for AES the time complexity and the number of chosen texts needed have been made comparable. In both cases an exhaustive search for the key has been reduced by a factor of eight, so the correct value of the key can be expected after trying half of that number of values. As a final remark, note that the message D in the attack need not be chosen nor known by the attacker. Therefore one can argue that this attack is stronger than a traditional "chosen-text" attack.
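As a numerical cross-check of Lemma 2 (our own arithmetic, reproducing the b = 64 column of Table 2): taking s = (t!)^{1/t} · 2^{(t−1)b/t} and displaying log2 s rounded to the nearest integer gives

```python
import math

b = 64
logs = []
for t in [3, 4, 5, 6, 7, 8, 9, 10, 17]:
    # log2 of s = (t!)^(1/t) * 2^((t-1)b/t); math.lgamma(t+1) = ln(t!)
    log2_s = math.lgamma(t + 1) / (t * math.log(2)) + (t - 1) * b / t
    logs.append(round(log2_s))
print(logs)  # [44, 49, 53, 55, 57, 58, 59, 60, 63], matching Table 2
```

The b = 128 column follows from the same formula with b = 128.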
Table 2. The estimated number of texts needed to find a t-collision.
t  | b = 64 | b = 128
3  | 2^44   | 2^86
4  | 2^49   | 2^97
5  | 2^53   | 2^104
6  | 2^55   | 2^108
7  | 2^57   | 2^112
8  | 2^58   | 2^114
9  | 2^59   | 2^116
10 | 2^60   | 2^118
17 | 2^63   | 2^123
Table 3. Expected running times and chosen texts of attacks finding K2 of RMAC.
Algorithm | k   | b   | Parameter set | t  | Expected running time | # chosen texts
3-DES     | 112 | 64  | II            | 12 | 2^108                 | 2^63
AES       | 128 | 128 | V             | 20 | 2^124                 | 2^123

4 Construction-Level Related-Key Attacks
Another consequence of adding the salt to K2 is that it exposes the RMAC system to a construction-level related-key attack. Consider the RMAC system with parameter set II for b = 64, and RMAC with parameter set III, IV, or V for b = 128. Let (K1, K2) and (K1, K2′) be two pairs of RMAC keys whose second keys are related by the difference K2 + K2′ = X ‖ 0^{k−r} for some r-bit string X. If D is some message, then

MAC_{K1,K2}(D, R) = MAC_{K1,K2′}(D, R + (X ‖ 0^{k−r}))

with probability 1. An attacker can use this property to, for example, take a message MACed by one user (with keys K1, K2), change the salt by adding X ‖ 0^{k−r}, and then trick the second user (with related keys K1, K2′) into accepting the new MAC–salt pair as an authenticator for D.
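The related-key property can be confirmed at toy scale (our sketch; here r = k, so X is a full-width difference and no zero-padding is needed, and an 8-bit random permutation stands in for the block cipher):

```python
import random

def E(k, x):  # toy keyed permutation
    p = list(range(256)); random.Random(k).shuffle(p); return p[x]

def rmac(K1, K2, blocks, R):        # untruncated toy RMAC
    c = 0
    for d in blocks:
        c = E(K1, c ^ d)
    return E(K2 ^ R, c)             # final key K3 = K2 + R

K1, K2, X, R = 0x31, 0x77, 0x5C, 0x0A
D_blocks = [0x12, 0x34, 0x56]
# shifting K2 by X is exactly compensated by shifting the salt by X,
# since (K2 xor X) xor (R xor X) = K2 xor R
assert rmac(K1, K2, D_blocks, R) == rmac(K1, K2 ^ X, D_blocks, R ^ X)
```

The equality holds for every message and every salt, which is what makes the MAC–salt pair transferable between related-key users.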
5 Key-Collision Attacks
Even if an attacker cannot control or does not (a priori) know the difference between multiple users' keys, an attacker can still exploit the related-key attack of §4. Consider RMAC with parameter set II for b = 64 and parameter set V for b = 128. Assume k = r (if r < k, then treat the bits of K2 not affected by the salt as part of K1). Let us start by assuming that we have two users who share the first key K1 but whose second keys K2 and K2′ have some unknown relationship. To mount the construction-level related-key attack from §4, the attacker must first learn the relationship between K2 and K2′. One way to learn this difference would be to first force each user to MAC some fixed message 2^{k/2} times. Let R_i be the i-th salt used by the first user and let M_i be the i-th MAC. Let R′_i be the i-th salt used by the second user and let M′_i be the i-th MAC.
Analysis of RMAC
If K2 + Ri = K2' + R'j for any indices i, j, then we have a key collision for the key to the last block cipher application, and Mi = M'j with probability 1. The attacker cannot observe the values K2 + Ri directly, but if he sees a collision Mi = M'j, then he guesses that the difference between K2 and K2' is Ri + R'j. Once this difference is known, the attacker can modify the MACs generated with (K1, K2) to be valid MACs for (K1, K2'). We expect to observe one collision Mi = M'j due to the key collision K2 + Ri = K2' + R'j, and we expect 2^{k-m} collisions Mi = M'j at random, but recall that we are assuming that k = m. Note that if Mi = M'j occurs at random but K2 + Ri ≠ K2' + R'j, then with very high probability an attacker's subsequent forgery attempt will fail, and this is how we filter the signal from the noise. Now consider a group of 2^{k/2} users, each with independently selected random keys, and assume that the adversary forces each user to MAC some fixed message 2^{k/2} times. Given a group of users this size, we expect two users to share the same first key K1 and, by the above discussion, we expect one collision K2 + Ri = K2' + R'j for this pair of users. By looking for collisions Mi = M'j across different users, an attacker can guess the relationship between two users' keys, and thereby force a user to accept a message that was not MACed with its keys. Unfortunately, this attack against 2^{k/2} users has a much lower signal-to-noise ratio than the attack against two users who are known to share the first key K1. In particular, we expect approximately 2^{2k-m} collisions Mi = M'j at random. We filter the signal from the noise as before. The filtering step does not significantly slow down the attack, since the attacker must already force 2^{k/2} users to each MAC 2^{k/2} messages, and since we are assuming that k = m. As a concrete example, for AES with 128-bit keys, this attack works by forcing 2^64 users to each MAC some message 2^64 times.
We expect 2^128 collisions in the MAC outputs, and one of those collisions will allow an adversary to take the messages MACed by one user and submit them as MACs to another user. Another way to exploit the related-key property of §4 is based on the key-collision technique of [1]. For this attack let n denote the number of users an attacker is attacking, and let n', q, q' be additional parameters. The attack begins with the attacker picking keys L1u, L2u for u ∈ {1, ..., n'} (these keys correspond to "fake" users; these keys do not have to be random, but we assume that each L1u is distinct). Then, for each u, the attacker MACs some fixed message D q' times; let M̄u,i be the i-th MAC produced using keys L1u, L2u, and let R̄u,i be the i-th salt value. We assume that each R̄u,i is distinct, but not necessarily random. Now assume that the attacker has each real user, indexed from 1 to n, MAC the message D q times; let Mv,i be the i-th MAC produced by the v-th user, and let Rv,i be the i-th salt value for the v-th user (here we assume that all the salt values are chosen uniformly at random). Let K1v, K2v denote the keys of the v-th real user. If nn' ≥ 2^k and qq' ≥ 2^k, we expect at least one collision of the form L1u = K1v and L2u + R̄u,i = K2v + Rv,j to occur and, when this occurs, M̄u,i = Mv,j. If an adversary sees a collision of this form, it will learn both K1v and K2v. We do, however, expect approximately nn'qq'2^{-m} collisions
M̄u,i = Mv,j at random. This time, since we are guessing both RMAC keys, we can filter by recomputing the MAC of different messages using the key guess. (As an aside, note that the basic (total) key-collision attack approach of [1] would require nn' ≥ 2^{2k}.) We can instantiate this attack in different ways. If n = n' = q = q' = 2^{k/2}, then we get a key-recovery attack (against one of the 2^{k/2} users) with resources similar to our previous attack against 2^{k/2} users. If n = 1, n' = 2^k, and q = q' = 2^{k/2}, then we get an attack against a single user that uses 2^{k/2} chosen plaintexts and approximately 2^{3k/2} steps. As a concrete example, if we consider AES with 128-bit keys, then the first instantiation attacks one of 2^64 users using 2^64 chosen plaintexts per user and 2^128 offline RMAC computations (broken down into 2^64 standard CBC-MAC computations and 2^128 final RMAC block cipher applications). The attack also requires approximately 2^128 additional block cipher applications as part of the filtering step. The latter instantiation attacks one user using 2^64 chosen plaintexts and approximately 2^192 offline RMAC computations (broken down into 2^128 standard CBC-MAC computations and 2^192 final RMAC block cipher applications). The filtering phase requires an additional 2^128 block cipher computations. As an additional note, we point out that the cost of the offline computations can be amortized across multiple attacks, thereby reducing the cost per attack.
6
Conclusions
There are several conclusions to draw from this work. The first and most obvious conclusion is that RMAC fails to satisfy the security claims in [6]. In particular, although NIST [6] claims that a key-recovery attack should require generating 2^{2k-1} MACs, we have presented a number of ways to extract RMAC keys using much less work. We believe, however, that there are more important lessons to be learned from this research. First, our results suggest that one needs to be extremely careful when using and interpreting "provable security" results. What is being proven? And what assumptions are being made? In the case of RMAC we note that the proof of security is in the ideal cipher model. This is an extremely strong model and, unfortunately, not a good model for some popular block ciphers. For example, consider the attack against RMAC with triple-DES in §2. That attack worked because triple-DES is vulnerable to related-key attacks, whereas in the ideal cipher model there is no relationship between the permutations associated with different keys (each key corresponds to an independently selected random permutation). This suggests that the ideal cipher model is not a good model to use when designing a mode of operation. Our results also show that, even when the underlying block cipher is secure against related-key attacks, interesting interactions can occur if a mode of operation uses related keys. For example, the attack in §3 reduces the search space of an exhaustive search attack by exploiting the fact that RMAC uses related keys. The construction-level related-key property in §4 also exists because RMAC uses related keys. And §5 shows that key-collision attacks become more serious when
a mode of operation uses a large number of related keys. These attacks further support our recommendation that modes of operation should not use related keys.
Acknowledgments

We thank David Wagner for pointing out the relationship between the attacks in §5 and Biham's paper [1]. Tadayoshi Kohno was supported by a National Defense Science and Engineering Graduate Fellowship.
References

1. E. Biham. How to decrypt or even substitute DES-encrypted messages in 2^28 steps. Information Processing Letters, 84, 2002.
2. E. Jaulmes, A. Joux, and F. Valette. On the security of randomized CBC-MAC beyond the birthday paradox limit: A new construction. In J. Daemen and V. Rijmen, editors, Fast Software Encryption 2002. Springer-Verlag, 2002.
3. J. Kelsey, B. Schneier, and D. Wagner. Key-schedule cryptanalysis of IDEA, G-DES, GOST, SAFER, and triple-DES. In N. Koblitz, editor, Advances in Cryptology - CRYPTO '96, LNCS 1109, pages 237-251. Springer-Verlag, 1996.
4. L.R. Knudsen and B. Preneel. MacDES: a new MAC algorithm based on DES. Electronics Letters, 34(9):871-873, April 1998.
5. C. Mitchell. Private communication.
6. NIST. DRAFT Recommendation for Block Cipher Modes of Operation: the RMAC Authentication Mode. NIST Special Publication 800-38B, October 18, 2002.
7. R. Rivest and A. Shamir. PayWord and MicroMint: Two simple micropayment schemes. CryptoBytes, 2(1):7-11, 1996.
A Generic Protection against High-Order Differential Power Analysis

Mehdi-Laurent Akkar and Louis Goubin
Cryptography Research, Schlumberger Smart Cards
36-38 rue de la Princesse, BP 45, F-78430 Louveciennes Cedex, France
{makkar,lgoubin}@slb.com
Abstract. Differential Power Analysis (DPA) on smart-cards was introduced by Paul Kocher [11] in 1998. Since then, many countermeasures have been introduced to protect cryptographic algorithms from DPA attacks. Unfortunately, these countermeasures are known not to be effective against high-order DPA (even of second order). In this paper we first describe a new specialized first-order attack and recall how high-order DPA attacks work. We then show how these attacks apply to two commonly used countermeasures. Finally, we present a method of protection (and apply it to DES) which seems to be secure against DPA attacks of any order. Figures from a real implementation of this method are also given. Keywords: Smart-cards, DES, Power analysis, High-Order DPA.
1
Introduction
The framework of Differential Power Analysis (also known as DPA) was introduced by P. Kocher, B. Jun and J. Jaffe in 1998 ([11]) and subsequently published in 1999 ([12]). The initial focus was on symmetric cryptosystems such as DES (see [11, 14, 1]) and the AES candidates (see [3, 4, 7]), but public-key cryptosystems have since been shown to be vulnerable to DPA attacks as well (see [15, 6, 9, 10, 16]). Two main families of countermeasures against DPA are known: – In [9, 10], L. Goubin and J. Patarin described a generic countermeasure consisting in splitting all the intermediate variables, using the secret-sharing principle. This duplication method was also proposed shortly after by S. Chari et al. in [4] and [5]. – In [2], M.-L. Akkar and C. Giraud introduced the transformed masking method, an alternative countermeasure to DPA. The basic idea is to perform all the computation such that all the data are XORed with a random mask. Moreover, the tables (e.g. the DES S-Boxes) are modified such that the output of a round is masked by the same mask as the input. Both methods have been proven secure against the initial DPA attacks, and are now widely used in real-life implementations of many algorithms. T. Johansson (Ed.): FSE 2003, LNCS 2887, pp. 192–205, 2003. © International Association for Cryptologic Research 2003
However, they do not take into consideration more elaborate attacks called "high-order DPA". These attacks, as described in [11] by P. Kocher or in [13] by T. Messerges, consist in studying correlations between the secret data and several points of the power consumption curves (instead of single points for the basic DPA attack). In what follows, we study the impact of high-order DPA attacks on both countermeasures mentioned above. Moreover, we describe new secure ways of implementing a whole class of algorithms (including DES) against these new attacks. The paper is organized as follows: – In Section 2, we recall three basic notions: (high-order) differential power analysis, the duplication method and the transformed masking method. – In Section 3, we study the "duplication method" and show that an implementation of DES (or AES) which splits all the variables into n sub-variables is still vulnerable to an n-th order DPA attack. Section 3.1 gives the general principle of the attack and Section 3.2 discusses practical aspects. – Section 4 is devoted to the analysis of the "transformed masking" method (see [2]). For such an implementation of DES, Section 4.1 describes how a "second-order" DPA can work. A new variant we call the "superposition attack" is also presented. In Section 4.2, we show that an AES (= Rijndael) implementation protected with the "transformed masking" method can also be attacked, either by second-order DPA or by the "zero problem" attack. – Section 5 presents our new generic countermeasure: the "unique masking method". We illustrate it on the particular case of DES. In 5.1, we explain the main idea of the "unique mask". In 5.2, we apply it to the full protection of a DES implementation. The security of this implementation against n-th order DPA attacks is investigated in Sections 5.3 and 5.4. – Section 6 focuses on the problem of securely constructing the modified S-Boxes used in our new countermeasure. The details of the algorithm are presented, together with practical impacts on the amount of time and memory needed. – In Section 7, we give our conclusions.
2
Background
2.1
The (High-Order) Differential Power Analysis
In the basic DPA attack (see [11, 12], or [8]), also known as first-order DPA (or simply DPA when there is no risk of confusion), the attacker records the power consumption signals and computes statistical properties of the signal for each individual instant of the computation. This attack does not require any knowledge about the individual power consumption of each instruction, nor about the position in time of each of these instructions. It only relies on the following fundamental hypothesis (quoted from [10]):
Fundamental Hypothesis (Order 1): There exists an intermediate variable, appearing during the computation of the algorithm, such that knowing a few key bits (in practice fewer than 32 bits) allows us to decide whether two inputs (respectively two outputs) give the same value for this variable or not. In this paper, we consider the so-called high-order differential power analysis attacks (HODPA), which generalize first-order DPA: the attacker now computes statistical correlations between the power consumption considered at several instants. More precisely, an "n-th order" DPA attack takes into account n values of the consumption signal, which correspond to n intermediate values occurring during the computation. These attacks rely on the following fundamental hypothesis (in the spirit of [10]): Fundamental Hypothesis (Order n): There exists a set of n intermediate variables, appearing during the computation of the algorithm, such that knowing a few key bits (in practice fewer than 32 bits) allows us to decide whether two inputs (respectively two outputs) give the same value for a known function of these n variables or not.
2.2
The “Duplication” Method
The "duplication method" was initially suggested by L. Goubin and J. Patarin in [9], and studied further in [4, 10, 5]. It basically consists in splitting the data manipulated during the computation into several parts, using a secret-sharing scheme, computing a modified algorithm on each part, and recombining the parts into the final result at the end. For example, one way of splitting X into two parts consists in choosing a random R and splitting X into (X ⊕ R) and R.
2.3
The “Transformed Masking” Method
The "transformed masking" method was introduced in [2] by M.-L. Akkar and C. Giraud. The basic idea is to perform all the computation such that all the data are XORed with a random mask. By using suitably modified tables (for instance the S-Boxes in the case of DES), it is possible to have the output of a round masked by exactly the same mask that masked the input. The computation is thus divided into two main steps: the first one consists in generating the modified tables, and the second one consists in applying the usual computation using these modified tables (the initial input being masked before starting the computation and the final output being unmasked after the computation).
3
Attack on the Duplication Method
3.1
Example: Second Order DPA on DES
In what follows, we suppose that two bits b1 and b2, appearing during the computation, are such that b1 ⊕ b2 equals the value b of the first output bit of the first S-Box in the first DES round. The attacker performs the following steps:
1. Record the consumption curves Ci corresponding to N different inputs Ei (1 ≤ i ≤ N). For instance N = 1000.
2. The attacker guesses the interval δ between the instant corresponding to the treatment of b1 and the instant corresponding to the treatment of b2. Each curve Ci is then replaced by Ci,δ, the difference between Ci and (Ci translated by δ). He then computes the mean curve CMδ of the N curves Ci,δ.
3. The attacker guesses the 6 bits of the key on which the value of b depends. From these 6 key bits, he computes for each Ei the expected value of b. He then computes the mean curve CM'δ of all the Ci,δ such that the expected b equals 0, and CM"δ, the mean curve of all the other Ci,δ.
4. If CM'δ and CM"δ do not show any appreciable difference, go back to step 3 with another choice for the 6 key bits.
5. If no choice for the 6 key bits was satisfactory, go back to step 2 with another choice for δ.
6. Iterate steps 2, 3, 4, 5 with two bits whose exclusive-or comes from the second S-Box, the third S-Box, ..., up to the eighth S-Box.
7. Find the 8 remaining key bits by exhaustive search.
3.2
The Attack in Practice
As noted in the original paper [10], it is clear that the n-fold duplication is vulnerable to an n-th order DPA attack. An important point to notice is that if the method is not carefully implemented, it will be easily detected on the consumption curve, simply by identifying n repetitive parts in the computation. In this case, it is easy for the attacker to superpose the different parts of the curves (in time constant, or proportional to log(n), but not exponential in n). Moreover, in certain scenarios, the attacker has full access to the very details of the implementation. In particular, for high-level security certifications (ITSEC, Common Criteria), it is assumed that the attacker knows the contents of the smartcard ROM.
4
Attack on the Transformed Masking Method
4.1
DES: Second Order DPA
4.1.1 Usual Second-Order DPA: For the DES algorithm, the input of a round is masked with a 64-bit value R = R0−31 || R32−63, divided into two independent masks of 32 bits each. The modified S-boxes S' are the following (where S are the original ones):

S'(X) = S(X ⊕ EP(R32−63)) ⊕ P^{-1}(R0−31 ⊕ R32−63)

where EP represents the expansion permutation, and P^{-1} the inverse of the permutation P after the S-Boxes. Using this formula, the output mask of the value at the end of a DES round is nearly R. To get exactly the R-masked value, the left part of the value has to be remasked with R0−31 ⊕ R32−63.
196
Mehdi-Laurent Akkar and Louis Goubin
It is clear, as noticed in the article, that this countermeasure is subject to a second-order DPA attack. Indeed, the real output of the S-boxes is correlated to the masked value and to the value R; so, getting the power traces of these two values, one can combine them and obtain a trace on which a classical DPA attack will work. In order to perform such an attack efficiently, without needing n^2 points as in the general attack, the attacker should obtain precise information about the implementation of the algorithm: he should know precisely where the interesting values are manipulated. 4.1.2 Superposition Attack: In this section we present a new kind of DPA attack. In theory it is a second-order DPA attack, but in practice it is nearly as simple as a usual DPA attack. The idea is the following: in a second-order DPA, the most difficult task is to localize the instants at which the needed values are manipulated. On the contrary, localizing a whole DES round is often quite easy. So instead of correlating precise parts of the consumption traces, we simply correlate the whole traces of the first and the last round. With this method one notices that at some moment we obtain the consumption trace T of the following value, the XOR of the S-Box outputs:

T = (S(E(R15) ⊕ K16) ⊕ R̄) ⊕ (S(E(R1) ⊕ K1) ⊕ R̄) = S(E(R15) ⊕ K16) ⊕ S(E(R1) ⊕ K1)

where R̄ is the right part of the mask permuted by the expansion permutation. One can notice that the value T does not depend on the random masking value, and that R1 and R15 are often known. Considering this, it is easy to see that, by guessing the 2 × 6 bits of the subkeys of the first and last rounds, it is possible to predict the XOR of the outputs of the S-Boxes of the first and last round. After that, one can perform a usual DPA-type attack and find the values of the different subkeys of K1 and K16.
Due to the redundancy of the key bits, one can moreover check the consistency of the results: indeed, with such an attack one finds 2 × 6 × 8 = 96 ≫ 56 bits for the key. The detailed algorithm is the following:
– Correlate (usually by addition or subtraction of the curves) the first and last round traces.
– For all the messages M, for each S-box j = 1..8:
– For k = 0 to 63, for l = 0 to 63:
– Separate the messages, considering one bit of the XOR of the outputs of the j-th S-box (rounds 1 and 16) for the message M, assuming that the subkey of S-box j of the first round is k and the subkey of S-box j of the last round is l.
– Average and subtract the separated curves.
– Choose the values k, l for which the greatest peak appears.
– Check the consistency of the key bits found.
(R15 can be deduced from the output by applying the inverse of the final permutation.)
A careful look at the attack should convince the reader that any one-bit error in the guess of K1 or K16 eliminates all the correlation. Compared to a usual second-order DPA attack, even if this attack requires the analysis of 2^12 = 4096 possibilities, it has the advantage of not requiring precise knowledge of the code. From a complexity point of view, it increases the amount of time and memory needed by the attacker by a constant factor (2^6 = 64), not by a linear factor. 4.1.3 Conclusion: The superposition attack, even if it is theoretically a second-order attack, is very efficient in practice. Therefore, to use the transformed masking method, one must use different masks at each step of the algorithm. This idea has been developed and adapted to produce the protection described in this article.
4.2
AES
For AES, the countermeasure is nearly the same as for DES. The only difference is that no transformed tables are used for the non-linear part of AES (the inversion in the field GF(256)); instead the same table is used with a multiplicative mask, exploiting the distributivity of multiplication over XOR (the addition in the field). So, from an additive mask it is easy, without unmasking the value, to switch to a multiplicative mask, go through the S-boxes, and get back to an additive mask. 4.2.1 Usual Second-Order DPA: For AES the situation is exactly the same as for the DES transformed masking method. Correlating the masked value and the mask allows an effective attack against this method. 4.2.2 The "Zero" Problem: Because a multiplicative mask is used during the inversion, one can see that if the inverted value is zero (and this value depends on just 8 bits of the key in the first and last rounds), then, whatever the masking value, the inverted value will be unmasked. Therefore, if someone is able to detect in the consumption trace that the value is zero instead of a random masked value, he will be able to break such an implementation. Probabilistic tools such as variance analysis are well suited to this kind of detection. 4.2.3 Superposition Method: As for DES, using the same superposition method it would be possible to find the key 16 bits by 16 bits by superposing the first and last rounds of AES, because they use the same mask. Unfortunately, after the last round a last subkey is added to the output of the round, so the attacker needs to guess at least 8 more bits of the key. This increases the attacker's work to 24 bits for each S-box. In theory the attack is not quadratic in the number of samples, but in practice it is not so easy to perform more than 16 billion manipulations of the curves for each table and each message.
4.2.4 Conclusion: Judging by these attacks, we consider that the adaptive mask countermeasure on AES is not effective, even against attacks simpler than second-order ones.
5
Unique Masking Method Principle
We have seen that the current countermeasures against DPA are intrinsically vulnerable to high-order DPA. Often the order of vulnerability is two and, even when it is theoretically higher, in practice it is one or two. In the next sections we present a method to protect DES that seems to be effective against DPA attacks of any order. We first describe the elementary steps of the method, then show how to construct a complete secure DES and why it seems to be secure.
5.1
Masked Rounds
Given any 32-bit value α, we define two new functions S̃1 and S̃2 based on the S-Box function S:

∀x ∈ {0,1}^48: S̃1(x) = S(x ⊕ E(α))
∀x ∈ {0,1}^48: S̃2(x) = S(x) ⊕ P^{-1}(α)

where E is the expansion permutation and P^{-1} is the inverse of the permutation after the S-Boxes. We define fKi to be the composition of E, the XOR with the i-th round subkey Ki, the S-Boxes and the permutation P. We then define f̃1,Ki and f̃2,Ki by replacing S by S̃1 and S̃2 in f. Remark: we can see that f̃1 gives an unmasked value from an α-masked value and that f̃2 gives an α-masked result from an unmasked one. Using the functions f, f̃1 and f̃2, one can obtain 5 different rounds using masked/unmasked values. Figure 1 represents these five different rounds; the plain fill represents unmasked values and the dashed fill represents masked values. The automaton of Figure 2 shows how these rounds are compatible with each other. The input states are the rounds where the input is unmasked (A and B) and the output states are those where the output of the round is unmasked (A and E).
5.2
Complete DES with Masked Rounds
It is easy to see that one can obtain a complete 16-round DES with these requirements. IP − BCDCDCEBCDCDCDCE − FP is a correct example (IP represents the initial permutation of DES and FP the final one).
Fig. 1. Masked rounds of DES
Fig. 2. Combination of the rounds
5.3
Security Requirements
In this section we consider that the modified S-Boxes are already constructed and that the mask α changes at each DES computation. The first step is to analyze, for each round of DES, on how many key bits each bit of the data depends. This simple analysis is summarized in Figure 3. We have also assumed that the plaintext and the ciphertext are known, which explains the symmetry of the figure. To get adequate security, we consider as critical the data whose bits depend on fewer than 36 bits of the key. We can see that only two parts have to be protected: the one connecting R2 and L3, and the one connecting R15 and L16. We define as usual Li (respectively Ri) as the left part (respectively the right part) of the message at the end of the i-th round. Of course, the values depending on no key bits at all do not have to be protected.
If we consider that a curve contains 128 8-bit samples, 36 bits represent an amount of 2 Tb of memory needed.
Fig. 3. Number of key bits / bits of data
Therefore these values must be masked, which forces the first three rounds to be of the form BCD or BCE, and the last three rounds to be of the form BCE or DCE. Taking these constraints into account, IP − BCDCDCEBCDCDCDCE − FP is, for example, a good combination.
5.4
Resistance to DPA
5.4.1 Classical DPA: This countermeasure clearly protects DES against first-order DPA. Indeed, all the values depending on fewer than 36 bits of the key are masked by a random mask which is used only once. 5.4.2 Enhanced Attacks: First, we have to notice that this countermeasure is vulnerable to the superposition method by guessing 12 bits of the key.
Indeed, the same mask is used in the first and last rounds of DES. To counteract this attack, we will from now on consider two different masks α1 and α2, used in the first and last rounds of DES respectively. It is easy to see that the proposed combination of rounds permits switching from α1 to α2 at the 7th and 8th rounds, thanks to the E-round/B-round structure, which leaves their output/input unmasked. With the obvious notation, we get the following example of DES: IP − Bα1 Cα1 Dα1 Cα1 Dα1 Cα1 Eα1 Bα2 Cα2 Dα2 Cα2 Dα2 Cα2 Dα2 Cα2 Eα2 − FP. Let us now consider an n-th order DPA attack. The idea is to correlate several values to get the consumption of an important value. For us, an important value is one that could be guessed with fewer than 36 bits of the key. But we have seen that all these values are masked. Moreover, each mask appears only once in the whole computation, so even with high-order correlation it is impossible to get any information about the masked values.
5.5
Variation
– If we want the mask never to appear several times (even on values depending on more than 36 bits of the key), one can use the following combination instead of the proposed one: IP − Bα1 Cα1 Eα1 AAAAAAAAAABα2 Cα2 Eα2 − FP.
– For paranoid people, it is even possible to add two new masks and to mask every value depending on fewer than 56 bits of the key.
– This method is modular: if one uses a protocol where the input or the output is not known, one can eliminate the associated mask.
6
Effective Construction of the Modified S-Boxes
In this section, algorithms are described using pseudo-C code.
6.1
Principle
It is easy to see that the following operations must be performed securely in order to construct the S-Boxes S̃1:
– Generate a random α.
– Apply a permutation to α (the permutation P^{-1}).
– XOR the value P^{-1}(α) into a table.
For the construction of S̃2, we need to:
– Retrieve α, since it is the same as in S̃1.
– Permute it (E(α)).
– XOR it into a table containing (1..63).
Of course, "securely" means that all these operations must be done without leaking any information about α through the power consumption, at any order (1, 2, ...).
6.2
Generation of a Random Number (for Example, 64 Bits)
We assume that we have access to a 64-byte array t and to a random generator (for example, a generator of random bytes). We can proceed as follows:
– for (i = 0; i < 64; i++) { t[i] = rand() % 2; }
– for (i = 0; i < 64; i++) { swap(t[i], t[rand() % 64]); }
With this method, we get in memory a 64-bit random value, and an attacker learns only the Hamming weight of α (if he can perform an SPA attack). Here we have assumed that the attacker cannot determine in a single shot which array entry is addressed when we swap the entries, a hypothesis which looks quite reasonable.
Variant 1: To save time and memory, we can imagine the following method, which is much faster and does not look too weak. We get 16 4-bit values in a 16-byte array:
– for (i = 0; i < 16; i++) { t[i] = rand(); }
– for (i = 0; i < 16; i++) { swap(t[i] AND 7, t[rand() % 16] AND 7); }
Indeed, we can consider that the high-order bits will strongly influence the consumption.
Variant 2: This other method produces an 8-byte random array. It is faster but less secure:
– for (i = 0; i < 8; i++) { t[i] = rand(); }
– for (i = 0; i < 16; i++) { t[rand() % 8] ^= rand(); }
6.3
Permutation
Classically, the permutation can be performed bit by bit in a random order. Again, this only allows the attacker to obtain the Hamming weight of the permuted value. To gain speed and memory, one could perform the permutation randomly quartet by quartet, or even byte by byte. Another idea is to add some dummy values and perform the permutation on the whole; the dummy values are simply discarded after the permutation.
6.4
6.4 XOR
Here a general method is to XOR the value into the table bit by bit in a random order. Once again, many compromises are possible to perform the XOR: do it byte by byte, add dummy values, etc.
A Generic Protection against High-Order Differential Power Analysis
6.5 Practical Considerations
The usual S-boxes use 256 bytes. We need them, but they can be stored in ROM. The additional tables, however, must be stored in RAM. In the normal security method (two masks α1 and α2), we need to store 4 new tables, so the total RAM requirement is 1024 bytes. We have seen that the construction of the S-boxes can be performed quite securely. Of course, the most secure method is very slow: it will really slow down the DES execution and use a lot of memory. The idea was just to show that it is theoretically possible to build the tables without leaking any information4 under a reasonable model of security5. But we have also seen that it is possible to increase the speed and decrease the memory without losing too much security. Let us now look at how our countermeasure could be applied to the AES algorithm. Due to the higher number of tables (more than 16 instead of 8) and because they are bigger (8→8 bits instead of 6→4) compared to DES, our countermeasure would require about 8 KB (or 16 KB for a high security level) of RAM, a size which is too big for usual smart cards. Some simplifications – which would unfortunately decrease the level of security – are therefore necessary to apply our countermeasure to an AES implementation.
7 Real Implementation on the DES Algorithm
A real implementation of this method has been completed on an ST19 component. It includes the following features described in the previous sections:

– SPA protections: randomization and masking method for the permutations and the manipulation of the key (permutations, S-box accesses, ...).
– DPA protection: HO-DPA protection of the first and last three rounds of the DES.
– S-box construction done bit by bit, with bit-by-bit randomization while computing the masking value.
– DFA protection: multiple computation, coherence checking, ...

With all these features we obtain an implementation with:

– 3 KB of ROM code;
– 81 bytes of RAM and 668 bytes of extended RAM;
– an execution time of 38 ms at 10 MHz.

This implementation has been submitted to our internal SPA/DPA/DFA laboratory, which tried to attack it without success.

4 But the Hamming weight of the value.
5 The attacker is not able to read the exact memory access in one shot.
8 Conclusion
As opposed to other proposed countermeasures, the unique masking method presents the following advantages:

– It is currently the only known protection against high-order DPA.
– The core of the DES is exactly the same as in an ordinary implementation; one can therefore reuse an existing implementation with very light modifications, just adding the S-box generation routine.
– The important values are masked with a unique mask which never appears in the DES computation. With the transformed masking method, for example, the mask appeared often (for a first mask, at the very beginning and at each round). Here one does not even have to mask the input or unmask the output.
– The only part where the mask appears (and even there it can be handled randomly and bit by bit) depends neither on the key nor on the message. Therefore the security effort can be totally focused on this point.
– This method is very flexible and modular, without important changes in the code: the desired level of security could even be a compilation parameter.
– A real implementation has been performed, proving the feasibility of this countermeasure in reasonable time (less than 40 ms with full protections).
A New Class of Collision Attacks and Its Application to DES

Kai Schramm, Thomas Wollinger, and Christof Paar

Department of Electrical Engineering and Information Sciences, Communication Security Group (COSY), Ruhr-Universität Bochum, Germany, Universitaetsstrasse 150, 44780 Bochum, Germany
{schramm,wollinger,cpaar}@crypto.rub.de
http://www.crypto.rub.de
Abstract. Until now, in cryptography the term collision was mainly associated with the mapping of different inputs to an equal output of a hash function. Previous collision attacks were only able to detect collisions at the output of a particular function. In this publication we introduce a new class of attacks, which originates from Hans Dobbertin and is based on the fact that side channel analysis can be used to detect internal collisions. We applied our attack against the widely used Data Encryption Standard (DES). We exploit the fact that internal collisions can be caused in three adjacent S-boxes of DES [DDQ84] in order to gain information about the secret key bits. As a result, we were able to exploit an internal collision with a minimum of 140 encryptions1, yielding 10.2 key bits. Moreover, we successfully applied the attack to a smart card processor.

Keywords: DES, S-Boxes, collision attack, internal collisions, power analysis, side channel attacks.
1 Introduction
Cryptanalysts have used collisions2 to attack hash functions for years [Dob98, BGW98b]. Most of the previous attacks against hash functions only attacked a few rounds, e.g., three rounds of RIPEMD [Dob97, NIS95]. In [Dob98], Dobbertin revolutionized the field of collision attacks against hash functions by introducing an attack against the full-round MD4 hash function [Riv92]. It was shown that MD4 is not collision free and that collisions in MD4 can be found in a few seconds on a PC. Another historic example of breaking an entire hash function is

1 Depending on the applied measurement hardware and sampling frequency, a multiple of 140 plaintexts may have to be sent to the target device in order to average the corresponding power traces, which effectively decreases noise.
2 In the remainder of this publication we do not require an internal collision to be detectable at the output of the cryptographic algorithm.
T. Johansson (Ed.): FSE 2003, LNCS 2887, pp. 206–222, 2003.
© International Association for Cryptologic Research 2003
the COMP128 algorithm [BGW98a]. COMP128 is widely used to authenticate mobile stations to base stations in GSM (Global System for Mobile Communication) networks [GSM98]. COMP128's core building block is a hash function based on a butterfly structure with five stages. In [BGW98b], it was shown that it is possible to cause a collision in the second stage of the hash function which fully propagates to the output of the algorithm. Hence, a collision can easily be detected, revealing information about the secret key.

Cryptographers have traditionally designed new cipher systems assuming that the system would be realized in a closed, reliable computing environment, which does not leak any information about the internal state of the system. However, any physical implementation of a cryptographic system will generally provide a side channel leaking unwanted information. In [KJJ99], two practical attacks, Simple Power Analysis (SPA) and Differential Power Analysis (DPA), were introduced. The power consumption was analyzed in order to find the secret keys of a tamper-resistant device. The main idea of DPA is to detect regions in the power consumption of a device which are correlated with the secret key. Moreover, little or no information about the target implementation is required. In recent years there have been several publications dealing with side channel attacks: side channel analyses of several algorithms, improvements of the original attacks (e.g., higher-order DPA and sliding-window DPA), and hardware and software countermeasures [CCD00a, CJR+99b, CJR+99a, Cor99, FR99, GP99, CCD00b, CC00, Sha00, Mes00, MS00]. Recently, attacks based on the analysis of electromagnetic emission have also been published [AK96, AARR02].

The main idea of this contribution is to combine 'traditional' collision attacks with side channel analysis. Traditional collision attacks implied that an internal collision fully propagates to the output of the function.
Using side channel analysis it is possible to detect a collision at any state of the algorithm, even if it does not propagate to the output.

Our Main Contributions

A New Class of Collision Attacks: The work at hand presents a collision attack against cryptographic functions embedded in symmetric ciphers, e.g., the f-function in DES. The idea, which originally comes from Hans Dobbertin, is to detect collisions within the function by analysis of side channel information, e.g., power consumption. Contrary to previous collision attacks, we exploit internal collisions, which are not necessarily detectable at the output. Modified versions of this attack can potentially be applied to any symmetric cipher in which internal collisions are possible. Furthermore, we believe that our attack is resistant against certain side channel countermeasures, which we will show in future publications.

Collisions within the DES f-Function: In [DDQ84], it was first shown that the f-function of DES is not one-to-one for a fixed round key, because collisions can be caused in three adjacent S-boxes. We discovered that such internal collisions reveal information about the secret key. On average3, 140

3 Averaged over 10,000 random keys.
different encryptions are required to find the first collision; a significantly lower number of encryptions is required to find further collisions. This result is a breakthrough for future attacks against DES and other cryptographic algorithms vulnerable to internal collisions.

Realization of the Attack: Smart cards play an increasingly important role in providing security functions. We applied our attack against an 8051-compatible smart card processor running DES in software. We focused on the S-box triple 2,3,4 and were able to gain 10.2 key bits with 140 encryptions on average, including key reduction.

We would like to mention that there exists another attack against DES based on internal collisions which requires fewer measurements. This attack was developed by Andreas Wiemers and exploits collisions within the Feistel cipher [Wie03].

The remainder of this publication is organized as follows. Section 2 summarizes previous work on collision attacks, side channel attacks, and DES attacks. In Section 3 we explain the principle of our new attack. In Section 4 we apply our attack to the f-function of DES. In Section 5 further optimizations of our collision attack against DES are given. In Section 6 we compromise an 8051-compatible smart card processor running DES. Finally, we end this contribution with a discussion of our results and some conclusions.
2 Previous Work
Collision Attacks. The hashing algorithm COMP128 was a suggested implementation of the algorithms A3 and A8 for GSM [GSM98]. Technical details of COMP128 were strictly confidential; however, in 1998 the algorithm was completely reverse engineered [BGW98a]. COMP128 consists of nine rounds and its core building block is a hash function. This hash function itself is based on the butterfly structure and consists of five stages. The output bytes contain a response used for the authentication of the mobile station with the base station and the session key used for the stream cipher A5. In [BGW98b], the COMP128 algorithm was cracked by exploiting a weakness in the butterfly structure. Only the COMP128 input bits corresponding to the random number can be varied. A collision can occur in stage 2 of the hash function. It will fully propagate to the output of the algorithm and, as a result, it will be detectable at the output. To launch the attack, one has to vary bytes i+16 and i+24 of the COMP128 input and fix the remaining input bytes. The birthday paradox guarantees that collisions will occur rapidly, and the colliding bytes are i, i+8, i+16, and i+24. The attack requires 2^17.5 queries to recover the whole 128-bit key.

Most of the presented attacks against hash functions only attacked a few rounds, e.g., three rounds of RIPEMD [Dob97, NIS95]. MD4 was also first attacked partially. There were approaches to attack the two-round MD4 [dBB94, Vau94] (as well as an unpublished attack by Merkle). In [Dob98], Dobbertin introduced an attack against the whole MD4 hash function [Riv92]. It was shown that an earlier attack against RIPEMD [Dob97] can be applied to MD4 very
efficiently. An algorithm was developed that allows one to compute a collision in a few seconds on a PC with a Pentium processor. Finally, it was demonstrated that a further development of the attack could find collisions for meaningful messages. The main result of that contribution was that MD4 is not collision free and that finding a collision requires the same computational effort as 2^20 computations of the MD4 compression function. The basic idea of the attack is that a difference of the input variables can be controlled in such a way that the differences occurring in the computation of the two associated hash values are compensated at the end.

Side Channel Attacks. A cryptographic system embedded into a microchip generally consists of many thousands of logic gates and storage elements. The power consumption of the system can be analyzed with a shunt resistance put in series between the ground pad of the microchip and the external ground of the voltage source. A digital oscilloscope is used to digitize the voltage over the shunt resistance, which is proportional to the power consumption of the system. Power analysis can be classified into Simple Power Analysis (SPA) and Differential Power Analysis (DPA) [KJJ99, KJJ98]. SPA directly interprets the power consumption during cryptographic operations. Hence, an attacker must have detailed information about the target hardware and the implemented algorithm. Two types of information leakage have been observed in SPA: Hamming weight and transition count leakage of internal registers and accumulators [MDS99]. The Hamming weight is often directly proportional to the amount of current that is being discharged from the gate driving the data and address bus4 [MDS99, Mui01]. Transition count information leaks during a gate transition from high to low or low to high, when bits of internal registers flip [MDS99].
The main idea of DPA is to detect regions in the power consumption of a cryptographic device correlated with particular bits of the secret key [KJJ99]. The adversary guesses a key (hypothesis) and encrypts random plaintexts. Depending on a particular observed bit within the algorithm, whose state can be computed from the hypothesis, measured power traces are added or subtracted, yielding a differential trace. A correct hypothesis will provide a high correlation of the differential trace with the observed bit, which will be indicated by distinct peaks. Contrary to SPA, no information about the target implementation is required. In [KJJ99], it was shown that DES [NIS77] and RSA [RSA78] can be broken by DPA.
3 Principle of the Internal Collision Attack
An internal collision occurs if a function of a cryptographic algorithm computes two different input arguments, but returns an equal output argument. We propose the term 'internal' collision because, in general, the collision will not propagate to the output of the algorithm. Since we are not able to detect it at the output, we correlate side channel information of the cryptographic device, e.g.,

4 If a precharged bus design is used.
power traces, under the assumption that an internal collision will cause a high correlation of different encryptions (decryptions) at one point in time. Moreover, we assume that internal collisions which occur for particular plaintext (ciphertext) encryptions (decryptions) are somehow correlated with the secret key. A typical example of a function vulnerable to internal collisions is a surjective S-box. However, many other functions, e.g., those based on finite field arithmetic, can cause collisions too. In this publication, we exploit the fact that it is possible to cause a collision in the non-linear f-function of DES in order to gain secret key bits. In Figure 1 the propagation path of a collision occurring in the f-function of round n is shown. The f-function in round n+1 processes the same input data, but any further rounds will not be affected by the collision.
Fig. 1. Propagation path of an internal collision in DES.
An adversary encrypts (decrypts) particular plaintexts (ciphertexts) in order to cause an internal collision at one point of the algorithm. Detection of these collisions is possible by correlation of side channel information corresponding to different encryptions (decryptions), e.g., power traces of round n + 1.
4 Collisions within the DES f-Function

4.1 Collisions in Single S-Boxes
In this section we briefly recall that it is possible to cause collisions in isolated S-boxes. However, as stated in [DDQ84], overall collisions in the f-function can only be caused within three S-boxes simultaneously. For a detailed description of DES the reader is referred to, e.g., [NIS77, MvOV97]. The eight S-box mappings 2^6 → 2^4 are surjective. Moreover, the mappings are uniformly distributed, which means that for each input z ∈ {0, . . . , 2^6 − 1} of S-box Si, i ∈ {1, . . . , 8}, there exist exactly three x-or differentials δ1, δ2, δ3 ∈ {1, . . . , 2^6 − 1} which will cause a collision within a single S-box:

Si(z) = Si(z ⊕ δ1) = Si(z ⊕ δ2) = Si(z ⊕ δ3),   δ1 ≠ δ2 ≠ δ3 ≠ 0,   i ∈ {1, . . . , 8}
If, for example, the first S-box is examined and z = 000000, then there exist three differentials δ1, δ2 and δ3 causing a collision:

S1(000000) = S1(000000 ⊕ 001001) = S1(000000 ⊕ 100100) = S1(000000 ⊕ 110111) = 14

However, it is not possible to directly set the six-bit input z of an S-box. The input z corresponds to a particular six-bit input x entering the f-function. This input x is diffused5 in the expansion permutation and x-ored with six key bits k of the round key:

z = x ⊕ k ⇔ k = x ⊕ z,   k, x, z ∈ {0, . . . , 2^6 − 1}
A table can be generated for each S-box which lists the three differentials δ1, δ2, δ3 ∈ {1, . . . , 2^6 − 1} corresponding to all 64 S-box inputs z ∈ {0, . . . , 2^6 − 1}. These eight tables can be resorted in order to list the inputs z ∈ {0, . . . , 2^6 − 1} corresponding to all occurring differentials δi ∈ {1, . . . , 2^6 − 1}. In the remainder of this publication these latter tables will be referred to as the δ-tables (as an example we included the δ-table of S-box 1 in the appendix). In order to exploit the six key bits k, an adversary chooses a particular δ and varies the input x until he/she detects a collision S(x ⊕ k) = S(x ⊕ k ⊕ δ). The two most and least significant bits of the inputs x and x ⊕ δ will also enter the adjacent S-boxes due to the bit spreading of the expansion box. As shown in Figure 2, the inputs of the adjacent S-boxes only remain unchanged if the two most and least significant bits of the differential δ are zero. However, such a differential δ does not exist, which is a known S-box criterion [Cop94]. Therefore a collision attack targeting a single S-box while preserving the inputs of the two adjacent S-boxes is not possible.

5 I.e., the two most and least significant bits of x will be x-ored with particular bits of the round key and then enter the adjacent S-boxes.
Fig. 2. Required Bit Mask of δ for a Single S-Box Collision.
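These properties can be verified directly from the standard DES S-box table. A small Python sketch for S-box 1 (the `differentials` function computes exactly the per-input entries of the paper's δ-tables; only S-box 1 is included here):

```python
# Standard DES S-box 1: four rows of 16 entries.
S1_ROWS = [
    [14,  4, 13,  1,  2, 15, 11,  8,  3, 10,  6, 12,  5,  9,  0,  7],
    [ 0, 15,  7,  4, 14,  2, 13,  1, 10,  6, 12, 11,  9,  5,  3,  8],
    [ 4,  1, 14,  8, 13,  6,  2, 11, 15, 12,  9,  7,  3, 10,  5,  0],
    [15, 12,  8,  2,  4,  9,  1,  7,  5, 11,  3, 14, 10,  0,  6, 13],
]

def s1(z):
    """6-bit input z: row = bits 5 and 0, column = bits 4..1."""
    row = ((z >> 4) & 2) | (z & 1)
    return S1_ROWS[row][(z >> 1) & 0xF]

def differentials(z):
    """All x-or differentials delta with S1(z) = S1(z ^ delta)."""
    return [d for d in range(1, 64) if s1(z) == s1(z ^ d)]
```

Since every row of an S-box is a permutation of {0, . . . , 15}, each output value has exactly four preimages over the 64 inputs, so `differentials(z)` always returns exactly three values; for z = 000000 they are 001001, 100100 and 110111, matching the example above.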
4.2 Collisions in Three S-Boxes
As stated in [DDQ84], it is possible to cause collisions within three adjacent S-boxes simultaneously. In this case the inputs x and x ⊕ ∆ have a length of 18 bits6. The differential ∆ = δ1|δ2|δ3 denotes the concatenation of three S-box differentials δ1, δ2, δ3 corresponding to each S-box of the triple. In order not to alter the inputs of the two neighboring S-boxes to the left and right of the S-box triple, the two most and least significant bits of ∆ must be zero:

∆[0] = ∆[1] = ∆[16] = ∆[17] = 0

Moreover, in order to propagate through the expansion box, ∆ must fulfil the conditions:

∆[4] = ∆[6], ∆[5] = ∆[7], ∆[10] = ∆[12], ∆[11] = ∆[13]

Thus ∆ = δ1|δ2|δ3 must comply with the bit mask ∆ = 00 x1x2 vw vw x3x4 yz yz x5x6 00 with xi, v, w, y, z ∈ {0, 1}, which is shown in Figure 3.
Fig. 3. Required S-Box triple ∆ Bit Mask.
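The mask conditions can be expressed as a simple predicate (an illustrative sketch; we index the 18-bit ∆ with ∆[0] as the most significant bit, which is an assumption consistent with Figure 3):

```python
def bit(delta, i):
    """Bit Delta[i], with Delta[0] the most significant of 18 bits."""
    return (delta >> (17 - i)) & 1

def valid_triple_differential(delta):
    """Outer bits must be zero, and the expansion-box duplication
    constraints must hold for the shared middle bits."""
    if any(bit(delta, i) for i in (0, 1, 16, 17)):
        return False
    return (bit(delta, 4) == bit(delta, 6) and bit(delta, 5) == bit(delta, 7)
            and bit(delta, 10) == bit(delta, 12) and bit(delta, 11) == bit(delta, 13))
```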
Analysis of the δ-tables reveals that there exist many differentials ∆ which comply with the properties stated above. As a result, it is possible to cause collisions in an S-box triple while preserving the inputs of the two neighboring S-boxes. This means that there exist inputs x and x ⊕ ∆ which cause a collision f(x) = f(x ⊕ ∆) in the f-function. As an example, we assume that an adversary randomly varies exactly those 14 input bits of function f in the first round which enter the targeted S-box triple. All 50 remaining bits of the plaintext are not changed. Within function f these bits are expanded to the 18-bit input x and x-ored with the 18 corresponding key bits k of the 48-bit round key. The result z = x ⊕ k enters the targeted S-box

6 We refer to x and x ⊕ ∆ as the inputs of function f after having propagated through the expansion box, i.e., they have a length of 18 bits, but x, x ⊕ ∆ ∈ {0, . . . , 2^14 − 1}.
triple. The adversary uses power analysis to record the power consumption of the cryptographic device during round two. Next, he sets the input to x ⊕ ∆ and again records the power consumption during round two. A high correlation of the two recorded power traces reveals that the same data was processed in function f in round two, i.e., a collision occurred. Once he detects a collision, analysis of the three corresponding δ-tables will reveal possible key candidates k = z ⊕ x. Let Z∆ denote the set of all possible 18-bit inputs zi causing a collision in a particular S-box triple for a particular differential ∆. For a fixed x, K is the set of all possible key candidates ki:

K = {x ⊕ zi} = {ki},   zi ∈ Z∆

Therefore, the number of key candidates ki is equal to the number of possible S-box triple inputs zi:

|K| = |Z∆|

However, for a particular 18-bit key k only those values of zi can cause collisions for which x = zi ⊕ k can propagate through the expansion box. Hence, we have to check whether all possible keys k ∈ {0, . . . , 2^18 − 1} can cause collisions for a particular z ∈ Z∆. In particular, the eight bits k[4], k[5], k[6], k[7] and k[10], k[11], k[12], k[13] of the key k determine whether zi ⊕ k yields a valid value of x. In general, we only use those differentials ∆ of an S-box triple for which there exist inputs zi which will yield a valid x = zi ⊕ k for any key k ∈ {0, . . . , 2^18 − 1}. Thus any 18-bit key k can be classified into one of 2^8 possible key sets Kj, j ∈ {0, . . . , 2^8 − 1}. The set Z∆,Kj of valid S-box triple inputs zi causing a collision for a given key k ∈ Kj is generally a subset of the set Z∆:

Z∆,Kj ⊆ Z∆,   j ∈ {0, . . . , 2^8 − 1}
For a fixed key k ∈ Kj and a random x ∈ {0, . . . , 2^14 − 1} the probability of a collision is

P(f(x) = f(x ⊕ ∆) | k ∈ Kj) = |Z∆,Kj| / 2^14

In general, two plaintexts x and x ⊕ ∆ have to be encrypted to check for a collision f(x) = f(x ⊕ ∆). The average number of encryptions #M until a collision occurs for a fixed key k is

#M = 2 / P(f(x) = f(x ⊕ ∆) | k ∈ Kj) = 2 · 2^14 / |Z∆,Kj| = 2^15 / |Z∆,Kj|

The total probability of a collision for an arbitrary key k ∈ Kj is

P(f(x) = f(x ⊕ ∆)) = Σ_{j=0}^{255} P(f(x) = f(x ⊕ ∆) | k ∈ Kj) · P(k ∈ Kj) = 2^(−22) · Σ_{j=0}^{255} |Z∆,Kj|
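As a numeric sanity check, these relations can be written out directly (a sketch; the |Z∆,Kj| value used below is hypothetical, chosen only to show that a set size around 234 reproduces the order of 140 encryptions reported in the paper):

```python
def collision_probability(z_size):
    """P(f(x) = f(x xor Delta) | k in Kj) = |Z| / 2^14 for a fixed key class."""
    return z_size / 2**14

def expected_encryptions(z_size):
    """#M = 2 / P = 2^15 / |Z| (two encryptions per collision test)."""
    return 2 / collision_probability(z_size)
```

For example, `expected_encryptions(234)` is roughly 140.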
The average number of encryptions #M until a collision occurs for an arbitrary key k ∈ Kj is

#M = 2 · (1/256) · Σ_{j=0}^{255} 1 / P(f(x) = f(x ⊕ ∆) | k ∈ Kj) = 2^7 · Σ_{j=0}^{255} 1 / |Z∆,Kj|

5 Optimization of the Collision Attack

5.1 Multiple Differentials
In order to decrease the number of encryptions until a collision occurs, the attack can be extended to n differentials ∆1, . . . , ∆n, yielding a set of 2^n possible encryptions f(x), f(x ⊕ ∆1), f(x ⊕ ∆2), f(x ⊕ ∆2 ⊕ ∆1), . . . , f(x ⊕ ∆n ⊕ . . . ⊕ ∆1) for a fixed x. We are now looking for collisions between any two encryptions, which has the potential to dramatically increase the likelihood of a collision due to the birthday paradox. A collision f(x′) = f(x″) can only occur if x′ ⊕ x″ equals a differential ∆j, with j ∈ {1, . . . , n}. In Table 1 the costs of the attacks using a single differential ∆ and using n differentials ∆1, . . . , ∆n are compared.

Table 1. Comparison of the collision attacks using a single and multiple differentials.

                   single ∆   multiple ∆'s
#x                 m          m
#∆                 1          n
#M                 2·m        m·2^n
#collision tests   m          m·n·2^(n−1)
For example, using a single ∆, the random generation of m = 64 inputs x will result in #M = 128 encryptions and will only yield m = 64 collision tests f(x) = f(x ⊕ ∆). Using n = 4 differentials ∆1, . . . , ∆4, the random generation of m = 8 inputs x will also result in #M = 8 · 2^4 = 128 encryptions, but will yield 8 · 4 · 2^3 = 256 collision tests. In this example, with the same number of encryptions we are able to perform four times as many collision tests, which results in a higher probability of a collision. As an example, Figure 4 shows a set of 2^n = 2^3 = 8 encryptions for n = 3 differentials ∆1, ∆2 and ∆3. In this case n · 2^(n−1) = 3 · 2^2 = 12 possible collisions A1, A2, . . . , C4 can occur, with the following probabilities:

P1 = P(A1) = P(A2) = P(A3) = P(A4) = P(f(x) = f(x ⊕ ∆1))
P2 = P(B1) = P(B2) = P(B3) = P(B4) = P(f(x) = f(x ⊕ ∆2))
P3 = P(C1) = P(C2) = P(C3) = P(C4) = P(f(x) = f(x ⊕ ∆3))
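The cost comparison of Table 1 and the example counts above can be reproduced with two small helpers (an illustrative sketch):

```python
def single_delta_costs(m):
    """m random inputs x, one differential: 2m encryptions, m tests."""
    return {"encryptions": 2 * m, "tests": m}

def multi_delta_costs(m, n):
    """m random inputs x, n differentials: m * 2^n encryptions and
    m * n * 2^(n-1) collision tests between pairs differing by some Delta_j."""
    return {"encryptions": m * 2**n, "tests": m * n * 2**(n - 1)}
```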
Fig. 4. Possible collision tests for n = 3 differentials.
If the collision tests A1, A2, . . . , C4 are stochastically independent7, the overall probability can also be expressed as:

P((A1 ∪ A2 ∪ A3 ∪ A4) ∪ (B1 ∪ B2 ∪ B3 ∪ B4) ∪ (C1 ∪ C2 ∪ C3 ∪ C4))
= 1 − [(1 − P(A1)) · (1 − P(A2)) · (1 − P(A3)) · (1 − P(A4)) · (1 − P(B1)) · (1 − P(B2)) · (1 − P(B3)) · (1 − P(B4)) · (1 − P(C1)) · (1 − P(C2)) · (1 − P(C3)) · (1 − P(C4))]
≈ P(A1) + P(A2) + . . . + P(C4)

In general, if n differentials are being used and there exist no stochastic dependencies among collision tests, the overall probability that at least one collision will occur within a set of 2^n encryptions is

P(collision) = 1 − (∏_{i=1}^{n} (1 − Pi))^(2^(n−1)) ≈ 2^(n−1) · Σ_{i=1}^{n} Pi,   with Pi = P(f(x) = f(x ⊕ ∆i))
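The quality of the first-order approximation is easy to check numerically for small Pi (a sketch under the independence assumption stated above):

```python
def collision_prob_exact(p_list):
    """1 - (prod(1 - Pi))^(2^(n-1)): each Delta_i contributes 2^(n-1)
    independent collision tests with success probability Pi."""
    n = len(p_list)
    prod = 1.0
    for p in p_list:
        prod *= 1 - p
    return 1 - prod ** (2 ** (n - 1))

def collision_prob_approx(p_list):
    """First-order approximation 2^(n-1) * sum(Pi), valid for small Pi."""
    return 2 ** (len(p_list) - 1) * sum(p_list)
```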
So far we have assumed that collision tests are stochastically independent, i.e., the occurrence of a particular collision does not condition any other collision within a set of encryptions. Surprisingly, analysis of the collision sets Z∆ revealed that stochastic dependencies among collision tests do exist for certain differentials. In general, stochastically dependent collision tests are not desired, because they decrease the overall probability of a collision within a set of encryptions.

5.2 Linear Dependencies
By analysis we discovered that there exist many linear combinations among the differentials ∆ of all eight S-box triples. In an attack based on multiple differentials ∆1, . . . , ∆n, linear combinations of these will eventually yield additional differentials ∆j. As a result, further collision tests can be performed without increasing the number of encryptions. Thus the probability of a collision within a set of 2^n encryptions is increased:

∆j = a1 · ∆1 ⊕ . . . ⊕ an · ∆n,   ai ∈ {0, 1},   ∆j ≠ ∆1, . . . , ∆j ≠ ∆n

7 I.e., the occurrence of a collision does not depend on any other collision test within a set of 2^n encryptions.

Fig. 5. Further collisions in single S-Boxes.
The improvement achieved by exploiting linear combinations among differentials is shown in the next example. An adversary tries to cause a collision in S-Boxes 2,3,4 using n = 5 differentials ∆3 , ∆13 , ∆15 , ∆16 and ∆21 . Analysis of the δ-tables of S-Boxes 2,3 and 4 reveals that there exist the following linear combinations: ∆1 = ∆3 ⊕ ∆13 ⊕ ∆15 ∆2 = ∆3 ⊕ ∆13 ⊕ ∆16 ∆4 = ∆3 ⊕ ∆15 ⊕ ∆16 ∆14 = ∆13 ⊕ ∆15 ⊕ ∆16 ∆22 = ∆15 ⊕ ∆16 ⊕ ∆21 ∆23 = ∆13 ⊕ ∆15 ⊕ ∆21 ∆24 = ∆13 ⊕ ∆16 ⊕ ∆21 These seven linear combinations will allow the adversary to check 7 · 2n−1 = 112 additional collision tests for each set of 2n = 32 encryptions. The total number of collision tests for a set of 32 encryptions is thus (n + 7) · 2n−1 = 192. 5.3
5.3 Key Candidate Reduction
Once a first collision has occurred, further collisions will provide additional key sets Ki . The intersection Kint of these sets narrows down the number of key candidates:

Kint = K1 ∩ K2 ∩ . . . ∩ Kj

Additional collisions can be found efficiently by fixing the input of two S-Boxes and only varying the input of the third S-Box. Due to the bit spreading in the expansion box, not all input bits of the third S-Box can be varied. Only bits 2–5 of the S-Box to the left, bits 2 and 3 of the middle S-Box, and bits 0–3 of the S-Box to the right can be varied without altering the inputs of the other two S-Boxes. Analysis of the collision set Z∆ provides all existing XOR differences ε = z' ⊕ z'' with z', z'' ∈ Z∆ . The theoretical maximum of differentials ε which only alter
disregarding the S-Box design criteria.
A New Class of Collision Attacks and Its Application to DES
Fig. 6. Measurement setup for power analysis of a microcontroller: a PC is connected to the microcontroller board via RS-232 and to a digital oscilloscope via GPIB; scope channel 1 measures the current over the shunt resistor Rs, channel 2 the I/O trigger; the board is driven by an external power supply.
the input of a single S-Box is 15 + 3 + 15 = 33. For any existing ε, further collisions f(x ⊕ ε) = f(x ⊕ ε ⊕ ∆) might be detected. For example, an adversary tries to cause collisions in S-Boxes 1,2,3 using differential ∆3 . A first collision f(x) = f(x ⊕ ∆3 ) yields |Z∆3 | = 1120 possible key candidates. Analysis of the collision set Z∆3 reveals that 18 of the 33 differentials εi comply with the conditions stated above. The adversary tries to find further collisions f(x ⊕ εi ) = f(x ⊕ εi ⊕ ∆3 ) and is able to detect eight additional collisions, narrowing the number of key candidates from 1120 down to 16.
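The key-candidate reduction above is, at its core, a set intersection; a minimal sketch (the candidate sets below are small hypothetical stand-ins for the Ki recovered from real collisions):

```python
def reduce_key_candidates(key_sets):
    """Intersect the key-candidate sets K1, K2, ... obtained from
    successive collisions; the intersection K_int is what remains."""
    key_sets = iter(key_sets)
    k_int = set(next(key_sets))
    for k in key_sets:
        k_int &= set(k)
    return k_int

# toy candidate sets standing in for the Ki of three observed collisions
k1 = {0x03, 0x11, 0x2A, 0x3F}
k2 = {0x11, 0x2A, 0x30}
k3 = {0x07, 0x11, 0x2A}
print(sorted(reduce_key_candidates([k1, k2, k3])))  # -> [17, 42]
```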
6 Practical Attack
In order to verify the DES collision attack, we simulated it on a PC. In addition, an 8051-compatible microcontroller running a software implementation of DES was successfully compromised using the proposed collision attack. The measurement setup used in this practical attack is shown in Figure 6. In this setup a PC sends chosen plaintexts to the microcontroller and triggers new encryptions. In order to measure the power consumption of the microcontroller, a small shunt resistor (here Rs = 10 Ω) is put in series between the ground pad and the ground of the power supply. We also replaced the original voltage source of the microcontroller with a low-noise voltage source to minimize noise superimposed by the source. The digital oscilloscope HP1662AS was used to sample the voltage over the shunt resistor at 1 GHz. Collisions were caused in the first round of DES. Power traces of round two were transferred to the PC using the GPIB interface. The PC was used to correlate power traces of different encryptions in order to detect collisions. In our experiments we discovered that a correlation coefficient greater than 95% generally indicated a collision. If no collision occurred, the correlation coefficient was always well below 95%, typically ranging from 50% to 80%. In general, uncorrelated noise such as voltage source noise, quantization noise of the oscilloscope, or intrinsic noise within the microcontroller can be decreased by averaging power traces of equal encryptions. In
we assume that no countermeasures such as random dummy cycles are present.
Fig. 7. Power consumption of the microcontroller encrypting x.
Fig. 8. Power consumption of the microcontroller encrypting x ⊕ ∆.
our experiments we found that averaging N = 10 power traces was clearly sufficient to achieve the significant correlation results stated above. Averaging may not even be necessary at all if additional measurement circuitry is used to decouple the external voltage source from the target hardware, or if data is acquired at higher sampling rates. For example, Figures 7 and 8 show the averaged power traces of two different plaintext encryptions x and x ⊕ ∆ during the S-Box look-up in round two. The power traces in Figures 7 and 8 clearly differ in their peaks. This indicates a low correlation, i.e., no collision occurred.
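The collision test described above — averaging traces and thresholding the correlation coefficient — can be sketched in pure Python. The function and parameter names are ours, not the authors'; real input would come from the oscilloscope, and 0.95 is the empirical threshold reported above.

```python
def average_traces(traces):
    """Average N traces sample-wise to suppress uncorrelated noise."""
    n = len(traces)
    return [sum(samples) / n for samples in zip(*traces)]

def correlation(x, y):
    """Pearson correlation coefficient of two equal-length traces."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def is_collision(trace_a, trace_b, threshold=0.95):
    """Declare a collision when the averaged traces correlate strongly."""
    return correlation(trace_a, trace_b) > threshold
```

Two encryptions of colliding inputs produce near-identical traces (correlation close to 1), while non-colliding inputs fall well below the threshold.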
7 Results and Conclusions
We proposed a new kind of attack, which uses side channel analysis to detect internal collisions. In this paper the well-known block cipher DES was attacked. However, the attack can be applied to any cryptographic function in which internal collisions are possible. We showed that internal collisions can be caused within three adjacent S-Boxes of DES, yielding secret key information. Furthermore, we presented different methods to minimize the cost of finding such collisions. In our computer simulations we heuristically searched for the optimum combination of differentials ∆i for all eight S-Box triples in order to minimize the number of required encryptions until a collision occurred. The results of this exhaustive search are listed in Table 2, where #M denotes the average number of encryptions until a collision occurs, and #K denotes the average number of key candidates corresponding to 18 key bits found after applying the key reduction method. As a result, we were able to cause a collision in S-Box triple 2,3,4 with a minimum average of 140 encryptions. Using the key reduction method we were able to narrow 18 key bits down to an average of 220 key candidates, which is equivalent to log2(220) ≈ 7.8 key bits, i.e., 10.2 key bits were broken. Moreover, we were able to cause collisions in S-Box triple 7,8,1 with an average of 165 encryptions yielding on average 19 key candidates, thus breaking 18 − log2(19) ≈ 13.8 key bits. Finally, we successfully validated our attack by compromising an 8051-compatible microcontroller running DES in software.

Table 2. Results of the exhaustive search for the S-Box triple/∆ set optimum.

S-Boxes  #∆  ∆ set                           #M    #K
1,2,3    3   ∆3, ∆15, ∆18                    227    20
2,3,4    5   ∆3, ∆13, ∆15, ∆16, ∆21          140   220
3,4,5    3   ∆3, ∆10, ∆12                    190   110
4,5,6    3   ∆2, ∆10, ∆11                    690    71
5,6,7    5   ∆2, ∆5, ∆8, ∆23, ∆29            290    24
6,7,8    5   ∆7, ∆10, ∆19, ∆20, ∆32          186    52
7,8,1    5   ∆1, ∆2, ∆7, ∆17, ∆19            165    19
8,1,2    4   ∆1, ∆2, ∆8, ∆38                 208   158
averaged over 10,000 random keys.
A  S-Box 1 δ-Table
As an example, the δ-table of S-Box 1 lists all inputs z corresponding to occurring differentials δ which fulfil the condition S1(z) = S1(z ⊕ δ). The inputs z are listed in pairs (z, z ⊕ δ), because both values fulfil the condition: Si(z) = Si(z ⊕ δ) ⇔ Si(z ⊕ δ) = Si((z ⊕ δ) ⊕ δ). For convenience, the column and row position of each input z within the S-Box matrix is also given in parentheses.
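The δ-table construction generalises to any S-Box given as a lookup table; a sketch (the 4-entry toy S-box at the end is purely illustrative, not a DES S-Box):

```python
from collections import defaultdict

def delta_table(sbox):
    """For every non-zero differential delta, collect the input pairs
    (z, z ^ delta) with sbox[z] == sbox[z ^ delta].  Each pair is
    recorded once, since z and z ^ delta satisfy the same condition."""
    size = len(sbox)
    table = defaultdict(list)
    for delta in range(1, size):
        for z in range(size):
            zd = z ^ delta
            if z < zd and sbox[z] == sbox[zd]:
                table[delta].append((z, zd))
    return dict(table)

# toy 2-bit S-box: only delta = 3 produces colliding pairs
print(delta_table([0, 1, 1, 0]))  # -> {3: [(0, 3), (1, 2)]}
```

Note that the #z column of Table 3 counts both members of each pair, so a δ with 7 listed pairs has #z = 14.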
Table 3. S-Box 1: S1 (z) = S1 (z ⊕ δ).
δ #z (z1 ,z1 ⊕ δ), (z2 ,z2 ⊕ δ), ... 000011 14 ((001000(04,0),001011(05,1)), ((010001(08,1),010010(09,0)), ((010101(10,1),010110(11,0)), ((011000(12,0),011011(13,1)), ((011001(12,1),011010(13,0)), ((100101(02,3),100110(03,2)), ((111001(12,3),111010(13,2)) 000101 4 ((000010(01,0),000111(03,1)), ((111011(13,3),111110(15,2)) 000111 2 ((010011(09,1),010100(10,0)) 001001 10 ((000000(00,0),001001(04,1)), ((000011(01,1),001010(05,0)), ((000100(02,0),001101(06,1)), ((000110(03,0),001111(07,1)), ((100000(00,2),101001(04,3)) 001011 2 ((100111(03,3),101100(06,2)) 001101 6 ((010000(08,0),011101(14,1)), ((110001(08,3),111100(14,2)), ((110101(10,3),111000(12,2)) 001111 2 ((100010(01,2),101101(06,3)) 010001 6 ((001110(07,0),011111(15,1)), ((100001(00,3),110000(08,2)), ((100011(01,3),110010(09,2)) 010011 2 ((100100(02,2),110111(11,3)) 010111 4 ((101000(04,2),111111(15,3)), ((101010(05,2),111101(14,3)) 011001 2 ((101111(07,3),110110(11,2)) 011011 4 ((000101(02,1),011110(15,0)), ((001100(06,0),010111(11,1)) 011101 4 ((000001(00,1),011100(14,0)), ((101110(07,2),110011(09,3)) 011111 2 ((101011(05,3),110100(10,2)) 100010 10 ((000010(01,0),100000(00,2)), ((000011(01,1),100001(00,3)), ((001100(06,0),101110(07,2)), ((001111(07,1),101101(06,3)), ((011100(14,0),111110(15,2)) 100100 12 ((000000(00,0),100100(02,2)), ((000110(03,0),100010(01,2)), ((001000(04,0),101100(06,2)), ((010110(11,0),110010(09,2)), ((010111(11,1),110011( 9,3)), ((011000(12,0),111100(14,2)) 100101 6 ((001101(06,1),101000(04,2)), ((010000(08,0),110101(10,3)), ((011101(14,1),111000(12,2)) 100111 10 ((000111(03,1),100000(00,2)), ((001011(05,1),101100(06,2)), ((010101(10,1),110010(09,2)), ((011011(13,1),111100(14,2)), ((011100(14,0),111011(13,3)) 101000 12 ((001110(07,0),100110(03,2)), ((010000(08,0),111000(12,2)), ((010001(08,1),111001(12,3)), ((010010(09,0),111010(13,2)), ((011101(14,1),110101(10,3)), ((011110(15,0),110110(11,2)) 101001 4 ((010100(10,0),111101(14,3)), ((011000(12,0),110001(08,3)) 101010 4 
((000101(02,1),101111(07,3)), ((011011(13,1),110001(08,3)) 101011 12 ((000010(01,0),101001(04,3)), ((000110(03,0),101101(06,3)), ((001010(05,0),100001(00,3)), ((001110(07,0),100101(02,3)), ((010001( 8,1),111010(13,2)), ((010010(09,0),111001(12,3)) 101100 4 ((000100(02,0),101000(04,2)), ((001011(05,1),100111(03,3)) 101101 6 ((001001(04,1),100100(02,2)), ((001111(07,1),100010(01,2)), ((011001(12,1),110100(10,2)) 101110 6 ((000111(03,1),101001(04,3)), ((010011(09,1),111101(14,3)), ((011010(13,0),110100(10,2)) 101111 2 ((001000(04,0),100111(03,3)) 110001 4 ((011010(13,0),101011(05,3)), ((011110(15,0),101111(07,3)) 110010 4 ((001101(06,1),111111(15,3)), ((011001(12,1),101011(05,3)) 110011 4 ((000011(01,1),110000(08,2)), ((000101(02,1),110110(11,2)) 110101 2 ((010110(11,0),100011(01,3)) 110110 2 ((010101(10,1),100011(01,3)) 110111 2 ((000000(00,0),110111(11,3)) 111001 6 ((010011(09,1),101010(05,2)), ((010111(11,1),101110(07,2)), ((011111(15,1),100110(03,2)) 111010 6 ((000001(00,1),111011(13,3)), ((001010(05,0),110000(08,2)), ((011111(15,1),100101(02,3)) 111011 2 ((000100(02,0),111111(15,3)) 111110 4 ((001001(04,1),110111(11,3)), ((010100(10,0),101010(05,2)) 111111 4 ((000001(00,1),111110(15,2)), ((001100(06,0),110011(09,3))
Further Observations on the Structure of the AES Algorithm

Beomsik Song and Jennifer Seberry

Centre for Computer Security Research
School of Information Technology and Computer Science
University of Wollongong, Wollongong 2522, Australia
{bs81,jennifer_seberry}@uow.edu.au
Abstract. We present our further observations on the structure of the AES algorithm, relating to the cyclic properties of the functions used in this cipher. We note that the maximal period of the linear layer of the AES algorithm is short, as previously observed by S. Murphy and M.J.B. Robshaw. However, we also note that when the non-linear and the linear layer are combined, the maximal period is dramatically increased, so as not to allow algebraic clues for cryptanalysis. At the end of this paper we describe the impact of our observations on the security of the AES algorithm. We conclude that although the AES algorithm consists of simple functions, this cipher is much more complicated than might have been expected.

Keywords: Cyclic Properties, SubBytes transformation, ShiftRows transformation, MixColumns transformation, Maximal period.
1 Introduction
A well-designed SPN (Substitution Permutation Network) block cipher, Rijndael [4], was recently (26 November 2001) selected as the AES (Advanced Encryption Standard) algorithm [11]. This cipher has been reputed to be secure against conventional cryptanalytic methods [4, 8], such as DC (Differential Cryptanalysis) [1] and LC (Linear Cryptanalysis) [7], and throughout the AES process the security of the AES algorithm was examined with numerous cryptanalytic methods [2–4, 13, 14]. But despite the novelty of the AES algorithm [5], the fact that it uses mathematically simple functions [6, 12, 15, 16] has led to some commentators' concern about the security of this cipher. In particular, S. Murphy and M.J.B. Robshaw [15, 16] have modified the original structure of the AES algorithm so that the affine transformation used for generating the S-box (non-linear layer) is included in the linear layer, and have shown that any input to the modified linear layer of the AES algorithm is mapped to itself after 16 iterations of the linear transformation (the maximal period of the modified linear layer is 16) [15, 16]. Based on this observation, they have remarked that the linear layer of the AES algorithm may not be so effective at mixing data. At this stage, to make the concept of "mixing data" clear, we briefly define the

T. Johansson (Ed.): FSE 2003, LNCS 2887, pp. 223–234, 2003.
© International Association for Cryptologic Research 2003
effect of mixing data which Murphy and Robshaw considered. We define that in a set K consisting of n elements, if an input of a function F is mapped to itself after p iterations of the function, then the effect of mixing data is e = n/p. In this paper, we present our further observations on the AES algorithm in terms of its cyclic properties. We examine the cyclic properties of the AES algorithm via each function in the original structure. We note that the maximal period of each function used in the AES algorithm is short, and that the maximal period of the composition of the functions used in the linear layer is also short. However, we note that the composition of the non-linear layer and the linear layer dramatically increases the maximal period of the basic structure, strongly guaranteeing the effect of mixing data. Specifically, we have found that:

• any input data block of the SubBytes transformation (non-linear layer) returns to the initial state after 277182 (≈ 2^18) repeated applications (the maximal period of the SubBytes transformation is 277182).
• any input data block of the ShiftRows transformation (in the linear layer) returns to the initial state after 4 repeated applications (the maximal period of the ShiftRows transformation is 4).
• any input data block of the MixColumns transformation (in the linear layer) returns to the initial state after 4 repeated applications as well (the maximal period of the MixColumns transformation is 4).
• when the ShiftRows transformation and the MixColumns transformation in the linear layer are considered together, the maximal period is 8.
• when the SubBytes transformation (non-linear layer) and the ShiftRows transformation (in the linear layer) are considered together, the maximal period is 554364 (≈ 2^19).
More importantly, we have found that the maximal period of the composition of the SubBytes transformation (non-linear layer) and the MixColumns transformation (in the linear layer) is 1,440,607,416,177,321,097,705,832,170,004,940 (≈ 2^110). Our observations indicate that the structure of the AES algorithm is good enough to produce substantial synergy effects in mixing data when the linear and the non-linear layers are combined. In the last part of this paper we discuss the relevance of our observations to the security of the AES algorithm. This paper is organised as follows: the description of the AES algorithm is presented in Section 2; the cyclic properties of the functions are described in Section 3; the impact of our observations on the security of the AES algorithm is discussed in Section 4; and the conclusion is given in Section 5.
2 Description of the AES Algorithm
The AES algorithm is an SPN block cipher, which processes variable-length blocks with variable-length keys (128, 192, and 256 bits). In the standard case, it processes data blocks of 128 bits with a 128-bit Cipher Key [4, 11]. In this paper we discuss the standard case, because the results of our observations will be similar in the other cases.
Fig. 1. Basic structure of the AES algorithm: the input block (bytes i00 . . . i33) passes through SubBytes (non-linear layer), then ShiftRows and MixColumns (linear layer), and is finally XORed with the Round Key to give the output block (bytes O00 . . . O33).
As Figure 1 shows, the AES algorithm consists of a non-linear layer (SubBytes transformation) and a linear layer (ShiftRows transformation and MixColumns transformation). Each byte in the block is bytewise substituted by the SubBytes transformation using a 256-byte S-box, and then every byte in each row is cyclically shifted by a certain value (row #0: 0, row #1: 1, row #2: 2, row #3: 3) by the ShiftRows transformation. After this, all four bytes in each column are mixed through the MixColumns transformation by the matrix formula in Figure 2. Here, each column is considered as a polynomial over GF(2^8) and multiplied with a fixed polynomial 03 · x^3 + 01 · x^2 + 01 · x + 02 (modulo x^4 + 1). After these operations, a 128-bit round key extended from the Cipher Key is XORed in the last part of the round. The MixColumns transformation is omitted in the last round (10th round), but before the first round a 128-bit initial round key is XORed through the initial round key addition routine. The round keys are derived from the Cipher Key in the following manner: let us denote the columns of the Cipher Key by CK0 , CK1 , CK2 , CK3 , the columns of the round keys by K0 , K1 , K2 , . . ., K43 , and the round constants by Rcon. Then the columns of the round keys are

K0 = CK0 , K1 = CK1 , K2 = CK2 , K3 = CK3 ,
Kn = Kn−4 ⊕ SubBytes(RotBytes(Kn−1 )) ⊕ Rcon   if 4 | n,
Kn = Kn−4 ⊕ Kn−1   otherwise.
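This key schedule can be sketched directly in Python (a sketch, not the authors' code); rather than typing in the 256-byte table, the S-box is generated from its FIPS-197 definition, i.e. inversion in GF(2^8) followed by the affine transformation:

```python
def gf_mul(a, b):
    """Multiplication in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return p

def sbox_entry(x):
    y = x
    for _ in range(253):          # y = x^254 = x^(-1); 0 stays 0
        y = gf_mul(y, x)
    b = 0
    for i in range(8):            # FIPS-197 affine transformation, c = 0x63
        bit = ((y >> i) ^ (y >> ((i + 4) % 8)) ^ (y >> ((i + 5) % 8))
               ^ (y >> ((i + 6) % 8)) ^ (y >> ((i + 7) % 8)) ^ (0x63 >> i)) & 1
        b |= bit << i
    return b

SBOX = [sbox_entry(x) for x in range(256)]

def expand_key(cipher_key):
    """AES-128 key schedule: 16-byte Cipher Key -> 44 four-byte words."""
    words = [list(cipher_key[4 * i:4 * i + 4]) for i in range(4)]
    rcon = 0x01
    for n in range(4, 44):
        t = list(words[n - 1])
        if n % 4 == 0:
            t = t[1:] + t[:1]                 # RotBytes
            t = [SBOX[v] for v in t]          # SubBytes
            t[0] ^= rcon                      # round constant
            rcon = gf_mul(rcon, 0x02)
        words.append([a ^ b for a, b in zip(words[n - 4], t)])
    return words
```

For the FIPS-197 example key 2b7e1516 28aed2a6 abf71588 09cf4f3c, the first expanded word K4 comes out as a0fafe17, matching the standard's test vector.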
3 Cyclic Properties of the Functions
In this section, we refer to the cyclic properties of the functions used in the AES algorithm. The cyclic property of each function is examined first, and then the cyclic properties of the combined functions are obtained. For future reference, we define f^n(I) = f ◦ f ◦ f ◦ · · · ◦ f(I) (n-fold composition).
[O0c]   [02 03 01 01] [i0c]
[O1c] = [01 02 03 01] [i1c]
[O2c]   [01 01 02 03] [i2c]
[O3c]   [03 01 01 02] [i3c]

Fig. 2. Mixing of four bytes in a column.
3.1 Cyclic Property of Each Function
Cyclic Property of the SubBytes Transformation. From the analysis of the 256 substitution values in the S-box, we have found the maximal period of the SubBytes transformation (non-linear layer).

Property 1 Every input byte of the S-box returns to the initial value after some t repeated applications of the substitution. In other words, for any input i of the S-box S, S^t(i) = i.

The 256 values of the input byte can be classified into five small groups as in Table 1 according to the values of t. The number of values in each group (the period of each group) is 87, 81, 59, 27, and 2 respectively. In Table 1, each value in each group is mapped to the value next to it. For example 'f2' → '89' → 'a7' → · · · → '04' → 'f2', and '73' → '8f' → '73'. From Property 1, we can see that although the S-box is a non-linear function, every input block of the SubBytes transformation is mapped to itself after some repeated applications of the SubBytes transformation. Indeed, we see that if each byte in an input block (16 bytes) is '8f' or '73' (in group 5), then this block returns to the initial state after just two applications of the SubBytes transformation. From Property 1, if we consider the L.C.M (Least Common Multiple) of 87, 81, 59, 27, and 2, then we find the following cyclic property of the SubBytes transformation.

Property 2 For any input block I of the SubBytes transformation, SubBytes^277182(I) = I. That is, the maximal period of the SubBytes transformation is 277182. The minimal period of the SubBytes transformation is 2, when each byte in the input block I is '8f' or '73'.

Cyclic Property of the ShiftRows Transformation. The cyclic property of the ShiftRows transformation is immediately found from the shift values (row #0: 0, row #1: 1, row #2: 2, row #3: 3) in each row.
Table 1. Classifying the substitution values in the S-box. Group #1 (maximal period: 87) f2, 89, a7, 5c, 4a, d6, f6, 42, 2c, 71, a3, 0a, 67, 85, 97, 88, c4, 1c, 9c, de, 1d, a4, 49, 3b, e2, 98, 46, 5a, be, ae, e4, 69, f9, 99, ee, 28, 34, 18, ad, 95, 2a, e5, d9, 35, 96, 90, 60, d0, 70, 51, d1, 3e, b2, 37, 9a, b8, 6c, 50, 53, ed, 55, fc, b0, e7, 94, 22, 93, dc, 86, 44, 1b, af, 79, b6, 4e, 2f, 15, 59, cb, 1f, c0, ba, f4, bf, 08, 30, 04 Group #2 (maximal period: 81) 7c, 10, ca, 74, 92, 4f, 84, 5f, cf, 8a, 7e, f3, 0d, d7, 0e, ab, 62, aa, ac, 91, 81, 0c, fe, bb, ea, 87, 17, f0, 8c, 64, 43, 1a, a2, 3a, 80, cd, bd, 7a, da, 57, 5b, 39, 12, c9, dd, c1, 78, bc, 65, 4d, e3, 11, 82, 13, 7d, ff, 16, 47, a0, e0, e1, f8, 41, 83, ec, ce, 8b, 3d, 27, cc, 4b, b3, 6d, 3c, eb, e9, 1e, 72, 40, 09, 01 Group #3 (maximal period: 59) 00, 63, fb, 0f, 76, 38, 07, c5, a6, 24, 36, 05, 6b, 7f, d2, b5, d5, 03, 7b, 21, fd, 54, 20, b7, a9, d3, 66, 33, c3, 2e, 31, c7, c6, b4, 8d, 5d, 4c, 29, a5, 06, 6f, a8, c2, 25, 3f, 75, 9d, 5e, 58, 6a, 02, 77, f5, e6, 8e, 19, d4, 48, 52 Group #4 (maximal period: 27) ef, df, 9e, 0b, 2b, f1, a1, 32, 23, 26, f7, 68, 45, 6e, 9f, db, b9, 56, b1, c8, e8, 9b, 14, fa, 2d, d8, 61 Group #5 (maximal period: 2) 73, 8f
* Each value in each group is followed by its substitution value.

Property 3 For any input block I of the ShiftRows transformation, ShiftRows(ShiftRows(ShiftRows(ShiftRows(I)))) = I. In other words, the maximal period of the ShiftRows transformation is 4. The minimal period of the ShiftRows transformation is 1, when all bytes in the input block I are the same.
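The group sizes behind Properties 1 and 2 (Table 1) can be reproduced programmatically by decomposing the S-box permutation into its disjoint cycles. The sketch below (ours, not the paper's) regenerates the S-box from its GF(2^8) definition rather than typing in the table:

```python
from math import gcd

def gf_mul(a, b):
    """Multiplication in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return p

def sbox_entry(x):
    y = x
    for _ in range(253):          # y = x^254 = x^(-1); 0 stays 0
        y = gf_mul(y, x)
    b = 0
    for i in range(8):            # FIPS-197 affine transformation, c = 0x63
        bit = ((y >> i) ^ (y >> ((i + 4) % 8)) ^ (y >> ((i + 5) % 8))
               ^ (y >> ((i + 6) % 8)) ^ (y >> ((i + 7) % 8)) ^ (0x63 >> i)) & 1
        b |= bit << i
    return b

SBOX = [sbox_entry(x) for x in range(256)]

def cycle_lengths(perm):
    """Lengths of the disjoint cycles of a permutation given as a table."""
    seen = [False] * len(perm)
    lengths = []
    for start in range(len(perm)):
        length, x = 0, start
        while not seen[x]:
            seen[x] = True
            x = perm[x]
            length += 1
        if length:
            lengths.append(length)
    return sorted(lengths)

lengths = cycle_lengths(SBOX)
period = 1
for n in lengths:
    period = period * n // gcd(period, n)   # L.C.M. of the cycle lengths
print(lengths, period)  # -> [2, 27, 59, 81, 87] 277182
```

The five cycle lengths are exactly the group sizes of Table 1, and their least common multiple is the maximal period 277182 of Property 2.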
the corresponding output column (Oc). Hence, we can find that for any input column Ic (four bytes), M(M(M(M(Ic)))) = Ic, where M is the matrix multiplication described in Figure 2. When all four bytes of Ic are the same, M(Ic) = Ic. If we now consider one input block (four columns) of the MixColumns transformation described in Figure 1, then we find the following property.

Property 4 For any input block I (16 bytes) of the MixColumns transformation, MixColumns(MixColumns(MixColumns(MixColumns(I)))) = I. In other words, the maximal period of the MixColumns transformation is 4. The minimal period of the MixColumns transformation is 1, when the bytes are the same in each column.
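Property 4 can be checked mechanically: the identity M^4 = I is equivalent to b(x)^4 ≡ 01 (mod x^4 + 1) over GF(2^8). A sketch (ours) that implements the matrix multiplication of Figure 2 and applies it four times:

```python
def gf_mul(a, b):
    """Multiplication in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return p

def mix_column(col):
    """One application of the MixColumns matrix of Figure 2 to a column."""
    a0, a1, a2, a3 = col
    return [
        gf_mul(0x02, a0) ^ gf_mul(0x03, a1) ^ a2 ^ a3,
        a0 ^ gf_mul(0x02, a1) ^ gf_mul(0x03, a2) ^ a3,
        a0 ^ a1 ^ gf_mul(0x02, a2) ^ gf_mul(0x03, a3),
        gf_mul(0x03, a0) ^ a1 ^ a2 ^ gf_mul(0x02, a3),
    ]

col = [0x01, 0x23, 0x45, 0x67]   # arbitrary test column
once = mix_column(col)
four = col
for _ in range(4):
    four = mix_column(four)
print(four == col, once != col)  # -> True True
```

A column with four equal bytes is a fixed point after a single application, which is the minimal period of 1 stated in Property 4.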
3.2 Cyclic Properties of Combined Functions
We now refer to the cyclic properties of the cases when the above functions are combined. We first consider the maximal period of the linear layer, i.e., of the composition of the ShiftRows transformation and the MixColumns transformation.

Property 5 Any input block I of the linear layer is mapped to itself after 8 repeated applications of the linear layer. In other words, the maximal period of the linear layer is 8.

From the two minimal periods referred to in Property 3 and Property 4 we obtain the following property.

Property 6 Any input block I of the linear layer in which all bytes are the same is mapped to itself after one application of the linear layer. That is, the minimal period of the linear layer is 1.

When the SubBytes transformation (non-linear layer) and the ShiftRows transformation (in the linear layer) are combined, we obtain the following cyclic property from the L.C.M of the two maximal periods referred to in Property 2 and Property 3.

Property 7 Any input block I of the composition of the SubBytes transformation and the ShiftRows transformation is mapped to itself after 554364 repeated applications of the composition. In other words, the maximal period of the composition of the SubBytes transformation and the ShiftRows transformation is 554364.
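Property 5 can likewise be checked mechanically, and Property 7 is just lcm(277182, 4) = 554364. The sketch below (hypothetical helper names, ours) composes ShiftRows and MixColumns; by GF(2^8)-linearity of the layer it suffices to verify the period on the 16 single-byte basis states:

```python
from math import gcd

def gf_mul(a, b):  # GF(2^8) multiplication modulo x^8 + x^4 + x^3 + x + 1
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return p

def mix_column(col):
    a0, a1, a2, a3 = col
    return [
        gf_mul(0x02, a0) ^ gf_mul(0x03, a1) ^ a2 ^ a3,
        a0 ^ gf_mul(0x02, a1) ^ gf_mul(0x03, a2) ^ a3,
        a0 ^ a1 ^ gf_mul(0x02, a2) ^ gf_mul(0x03, a3),
        gf_mul(0x03, a0) ^ a1 ^ a2 ^ gf_mul(0x02, a3),
    ]

def shift_rows(state):                      # state: 4 rows of 4 bytes
    return [row[r:] + row[:r] for r, row in enumerate(state)]

def mix_columns(state):
    cols = [mix_column([state[r][c] for r in range(4)]) for c in range(4)]
    return [[cols[c][r] for c in range(4)] for r in range(4)]

def linear_layer(state):                    # one round of the linear layer
    return mix_columns(shift_rows(state))

def iterate(f, state, n):
    for _ in range(n):
        state = f(state)
    return state

def basis(r, c):                            # single-byte basis state
    return [[1 if (i, j) == (r, c) else 0 for j in range(4)] for i in range(4)]

# Property 7: L.C.M. of the SubBytes and ShiftRows maximal periods
print(277182 * 4 // gcd(277182, 4))  # -> 554364
```

Eight iterations of `linear_layer` fix every basis state (hence every state), while four iterations move at least one of them, so the maximal period of the linear layer is exactly 8, as Property 5 states.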
Property 8 In Property 7, if all bytes in the input block I are the same and are either '73' or '8f', then this block is mapped to itself after two repeated applications of the composition. That is, the minimal period of the composition of the SubBytes transformation and the ShiftRows transformation is 2.

More importantly, we show that although the maximal periods of both the non-linear layer and the linear layer are short, the maximal period is surprisingly increased in the composition of the non-linear layer and the MixColumns transformation. We first interchange the order of the SubBytes transformation and the ShiftRows transformation, as shown in Figure 3 (b) (the order of these two functions is interchangeable).
Fig. 3. Re-ordering of SubBytes and ShiftRows: (a) the original round order (S-box, ShiftRows, MixColumns, round-key ⊕); (b) S-box and ShiftRows interchanged (ShiftRows, S-box, MixColumns, round-key ⊕); (c) S-box and MixColumns merged into the ES-box (ShiftRows, ES-box, round-key ⊕).
We then consider the S-box and the MixColumns transformation together. As a result, we obtain an extended S-box, the ES-box, which consists of 2^32 non-linear substitution paths, as shown in Figure 3 (c) and Table 2. Now, using the same idea used to obtain Property 1, we classify the 2^32 four-byte input values of the ES-box into 52 small groups according to their periods. The numbers of values in each group (the periods of each group) are 1,088,297,796 (≈ 2^30), 637,481,159 (≈ 2^29), 129,021,490 (≈ 2^27), 64,376,666 (≈ 2^26), and so on. Table 3 shows the classification of all substitution values in the ES-box, which has been obtained from our analysis (see the appendix for more details). From these values of the periods we finally find that the maximal period of the composition of the SubBytes transformation (non-linear layer) and
Beomsik Song and Jennifer Seberry

Table 2. ES-box.

I             ES(I)
0x00000000 →  0x63636363
0x00000001 →  0x7c7c425d
• • •         • • •
0xabcdef12 →  0x0eb03a4d
• • •         • • •
0xffffffff →  0x16161616
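Table 2 can be reproduced from the public AES components: the ES-box is the bytewise S-box followed by one MixColumns column multiplication. A minimal sketch (our own reconstruction, not code from the paper), checked against the first, second and last entries of Table 2:

```python
# Sketch: the ES-box of Table 2 as bytewise S-box followed by one
# MixColumns column transform (our own reconstruction, not the paper's code).

def xtime(a):                        # multiply by 02 in GF(2^8), modulus 0x11b
    a <<= 1
    return a ^ 0x11b if a & 0x100 else a

def gf_mul(a, b):                    # multiplication in GF(2^8)
    r = 0
    while b:
        if b & 1:
            r ^= a
        a, b = xtime(a), b >> 1
    return r

def sbox(a):                         # GF(2^8) inversion, then the affine map
    inv = next((x for x in range(256) if gf_mul(a, x) == 1), 0)  # 0 -> 0
    s = 0x63
    for i in range(5):               # inv ^ rotl(inv,1) ^ ... ^ rotl(inv,4)
        s ^= ((inv << i) | (inv >> (8 - i))) & 0xff
    return s

def mix_column(a):                   # MixColumns coefficients 02 03 01 01
    return [gf_mul(a[i], 2) ^ gf_mul(a[(i + 1) % 4], 3)
            ^ a[(i + 2) % 4] ^ a[(i + 3) % 4] for i in range(4)]

def es_box(x):                       # 32-bit column in, 32-bit column out
    col = [(x >> (24 - 8 * i)) & 0xff for i in range(4)]
    return int.from_bytes(bytes(mix_column([sbox(b) for b in col])), "big")
```

With this construction, es_box(0x00000000), es_box(0x00000001) and es_box(0xffffffff) reproduce the Table 2 entries 0x63636363, 0x7c7c425d and 0x16161616.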
Table 3. Classifying the substitution values in the ES-box.
1088297796, 637481159, 637481159, 637481159, 637481159, 129021490, 129021490, 129021490, 129021490, 64376666, 64376666, 11782972, 39488, 16934, 13548, 13548, 10756, 7582, 5640, 5640, 3560, 1902, 1902, 548, 548, 136, 90, 90, 87, 81, 59, 47, 47, 47, 47, 40, 36, 36, 27, 24, 21, 21, 15, 15, 12, 8, 4, 4, 4, 2, 2, 2

e.g. Period of group #1: 1088297796, Period of group #2: 637481159, Period of group #6: 129021490, Period of group #12: 11782972.
the MixColumns transformation (in the linear layer) is 1,440,607,416,177,321,097,705,832,170,004,940 (≈ 2^110). Here, we note that the maximal period of this composition is the largest LCM of any four of the values above. This is because one input block consists of four columns. We now discuss shorter periods of the composition of the SubBytes transformation and the MixColumns transformation, which cryptanalysts may be concerned about. We first refer to the minimal period. In the very rare cases where each column in an input block I is ‘73737373’, ‘8f8f8f8f’, ‘5da35da3’, ‘c086c086’, ‘a35da35d’ or ‘86c086c0’ (each of these values is mapped to itself after 2 iterations of the ES-box: see the appendix), for example I = 8f8f8f8f c086c086 73737373 5da35da3, the period of the composition of the SubBytes transformation and the MixColumns transformation is 2 (this is the minimal period of this composition). We next refer to the periods of the composition of the SubBytes transformation and the MixColumns transformation for input blocks in which all bytes are the same. If all bytes in an input block I of this composition are the same, then the block leads to an output block in which all bytes are the same. In this case, the period of the composition is the same as the period of the S-box referred to in Table 1. For example, if the
Further Observations on the Structure of the AES Algorithm
bytes in an input block I of the combined function of the SubBytes transformation and the MixColumns transformation are all ‘f2’, then this block is mapped to itself after 87 iterations of this combined function (see Group #1 in Table 1 and Period 87 in the appendix). In the next section, we show that input blocks having short periods could provide some algebraic clues for cryptanalysis, as some previous works have anticipated [15, 16]. We show that input blocks having short periods, when compared with others, could have relatively simple hidden algebraic relations with the corresponding output blocks. However, we also note that although in some cases the composition of the non-linear layer and the linear layer has short periods which could provide some algebraic clues for cryptanalysis, the key schedule of the AES algorithm does not allow the short periods to go on.
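The single-byte periods quoted from Table 1 (87 for ‘f2’, 27 for ‘ef’, 2 for ‘73’ and ‘8f’) are cycle lengths of the S-box viewed as a permutation of the 256 byte values. A minimal sketch recomputing them (our own check; the S-box is generated from GF(2^8) inversion plus the affine map rather than copied as a table):

```python
# Sketch: recompute the single-byte S-box periods cited from Table 1.
# The S-box is generated (GF(2^8) inversion + affine map), not hard-coded.

def gf_mul(a, b):                    # multiplication in GF(2^8), modulus 0x11b
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11b
        b >>= 1
    return r

def sbox_byte(a):
    inv = next((x for x in range(256) if gf_mul(a, x) == 1), 0)  # 0 -> 0
    s = 0x63
    for i in range(5):               # inv ^ rotl(inv,1) ^ ... ^ rotl(inv,4)
        s ^= ((inv << i) | (inv >> (8 - i))) & 0xff
    return s

SBOX = [sbox_byte(a) for a in range(256)]

def period(b):                       # cycle length of byte b under the S-box
    x, p = SBOX[b], 1
    while x != b:
        x, p = SBOX[x], p + 1
    return p
```

Here period(0x73) == 2, period(0xf2) == 87 and period(0xef) == 27 match the values quoted from Table 1; ‘73’ and ‘8f’ form the 2-cycle since SBOX[0x73] == 0x8f and SBOX[0x8f] == 0x73.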
4 Impact on the Security of the AES Algorithm
In this section, we discuss the impact of our observations on the security of the AES algorithm. We show that input blocks having short periods (for which the effect of mixing data, e = p/2^128 with p the period, is very small) are apt to give hidden algebraic clues for cryptanalysis when compared with others. To do this, we first find some input blocks having the shortest periods in the composition of the non-linear layer and the linear layer (the SubBytes transformation + the ShiftRows transformation + the MixColumns transformation).

Property 9 For any input block I of the composition of the non-linear layer and the linear layer (the SubBytes transformation, the ShiftRows transformation, and the MixColumns transformation), if all bytes in I are the same, then all bytes in the output block are also the same. In this case, the composition of the non-linear layer and the linear layer is equivalent to the S-box because the ShiftRows transformation and the MixColumns transformation do not affect the data transformation.

Property 10 For any input block I of the composition of the non-linear layer and the linear layer, if all bytes in I are equal to i (any value), then the period of the composition of the non-linear layer and the linear layer for this input block is the same as the period of the S-box for i.

For example, if the bytes in an input block I of the composition of the non-linear layer and the linear layer are all ‘ef’, then this input block is mapped to itself after 27 iterations (the period of the S-box for ‘ef’ is 27, as given in Table 1). This means that the effect of mixing data of the composition of the non-linear layer and the linear layer is e = 27/2^128 for this input block (2^128 is the number of all possible blocks represented by 128 bits).

Property 11 In Property 10, if all bytes in I are the same and are either ‘73’ or ‘8f ’, then I is mapped to itself after 2 iterations of the composition of the non-linear layer and the linear layer.
In other words, the minimal period of the composition of the non-linear layer and the linear layer is 2 (the minimal effect of mixing data of the non-linear layer and the linear layer is e = 2/2^128).
We now show that input blocks having short periods could provide some algebraic clues for cryptanalysis if the key schedule of the AES algorithm were not well designed. Let us assume that, contrary to the original key schedule of the AES algorithm, for any Cipher Key in which all bytes are the same, a certain key schedule generates round keys in which each round key has all its bytes the same. (This does not actually happen in the original key schedule.) For example, suppose that the initial round key consists of all ‘78’, that the first round key consists of all ‘6f’, . . ., and that the tenth round key consists of all ‘63’. Then, considering the encryption procedure, we see from Property 9 that any plaintext in which all bytes are the same leads to a ciphertext in which all bytes are the same. This means that if anyone uses, for encryption, a Cipher Key in which all bytes are the same, then attackers will easily become aware of this fact with a chosen plaintext in which all bytes are the same. Once the attackers realise this fact, it is easy to find the Cipher Key: they will find it within 256 key searches. However, we note that this scenario does not occur with the original key schedule of the AES algorithm, because plaintexts having short periods are not able to keep up the short periods under the original key schedule. For example, we consider the simplest case, where a plaintext in which all bytes are ‘73’ is encrypted with a Cipher Key in which all bytes are ‘00’. In this case, by Property 11, the period of the composition of the non-linear layer and the linear layer is 2 for the intermediate text I0 = 73737373 73737373 73737373 73737373 after the initial round key addition.
However, we have found that the period of the composition of the SubBytes transformation (non-linear layer) and the MixColumns transformation (in the linear layer) becomes 1,088,297,796 (≈ 2^30) for the intermediate text I1 = edececec edececec edececec edececec after the first round key addition. We emphasise here once again that although the combined function of the non-linear layer and the linear layer of the AES algorithm has some short periods in rare cases, the key schedule does not allow these short periods to go on, thus denying algebraic clues for its cryptanalysis.
5 Conclusions
We have summarised our further observations on the AES algorithm relating to the cyclic properties of this cipher. Specifically, we have shown that the maximal period of each function used in the AES algorithm is short, and that the maximal period of the composition of the functions used in the linear layer is short as well. More importantly, however, we have also shown that the well-designed structure produces a remarkable synergy in the cyclic properties of this cipher when the linear layer and the non-linear layer are combined. We note that the structure of the AES algorithm is good enough to guarantee high data mixing effects. We also note that although the composition of the non-linear layer and the linear layer of the AES algorithm has, in some cases, short periods which could
provide some algebraic clues for its cryptanalysis, the well-designed key schedule does not allow these short periods to go on. We believe that the combination of simple functions in a well-designed structure is one of the advantages of the AES algorithm, although some research studies have recently made considerable progress [9, 10] in the cryptanalysis of AES-like block ciphers.
References

1. E. Biham and A. Shamir, “Differential Cryptanalysis of DES-like Cryptosystems”, Journal of Cryptology, Vol. 4, pp. 3–72, 1991.
2. E. Biham and N. Keller, “Cryptanalysis of Reduced Variants of Rijndael”, http://csrc.nist.gov/encryption/aes/round2/conf3/aes3papers.html, 2000.
3. H. Gilbert and M. Minier, “A Collision Attack on 7 Rounds of Rijndael”, Proceedings of the Third Advanced Encryption Standard Candidate Conference, NIST, pp. 230–241, 2000.
4. J. Daemen and V. Rijmen, “AES Proposal: Rijndael”, http://csrc.nist.gov/encryption/aes/rijndael/Rijndael.pdf, 1999.
5. J. Daemen and V. Rijmen, “Answer to New Observations on Rijndael”, AES forum comment, August 2000, http://www.esat.kuleuven.ac.be/~rijmen/rijndael/.
6. L. Knudsen and H. Raddum, “Recommendation to NIST for the AES”, second round comments to NIST, May 2000, http://csrc.nist.gov/encryption/aes/round2/comments/.
7. M. Matsui, “Linear Cryptanalysis Method for DES Cipher”, Advances in Cryptology – EUROCRYPT ’93, Lecture Notes in Computer Science, Springer-Verlag, pp. 386–397, 1993.
8. M. Sugita, K. Kobara, K. Uehara, S. Kubota, and H. Imai, “Relationships among Differential, Truncated Differential, Impossible Differential Cryptanalyses against Word-oriented Block Ciphers like Rijndael, E2”, Proceedings of the Third AES Candidate Conference, 2000.
9. N. Courtois and J. Pieprzyk, “Cryptanalysis of Block Ciphers with Overdefined Systems of Equations”, IACR ePrint, April 2002, http://www.iacr.org/complete/.
10. N. Courtois and J. Pieprzyk, “Cryptanalysis of Block Ciphers with Overdefined Systems of Equations”, Proceedings of ASIACRYPT 2002, Lecture Notes in Computer Science Vol. 2501, pp. 267–287, 2002.
11. National Institute of Standards and Technology, “Advanced Encryption Standard (AES)”, FIPS 197, 2001.
12. N. Ferguson, R. Schroeppel, and D. Whiting, “A Simple Algebraic Representation of Rijndael”, Proceedings of SAC 2001, Lecture Notes in Computer Science Vol. 2259, pp. 103–111, 2001.
13. N. Ferguson, J. Kelsey, S. Lucks, B. Schneier, M. Stay, D. Wagner, and D. Whiting, “Improved Cryptanalysis of Rijndael”, Fast Software Encryption Workshop 2000, preproceedings, 2000.
14. S. Lucks, “Attacking Seven Rounds of Rijndael under 192-Bit and 256-Bit Keys”, Proceedings of the Third Advanced Encryption Standard Candidate Conference, NIST, pp. 215–229, 2000.
15. S. Murphy and M.J.B. Robshaw, “New Observations on Rijndael”, AES forum comment, August 2000, http://www.isg.rhul.ac.uk/~sean/.
16. S. Murphy and M.J.B. Robshaw, “Further Comments on the Structure of Rijndael”, AES forum comment, August 2000, http://www.isg.rhul.ac.uk/~sean/.
Appendix: Grouping in the ES-Box

Period        Elements in each group
1088297796    00000003, 7b7b4b53, ..., 4487de39
637481159     00000002, 77775f4b, ..., 3943ffc4
637481159     00000004, f2f2cb5a, ..., a6284276
637481159     00000006, 6f6f777b, ..., 24c3a2a6
637481159     00000008, 303096c5, ..., d4f75ed0
129021490     00000001, 7c7c425d, ..., 40f39ed7
129021490     00000007, c5c59234, ..., 25322e95
129021490     00000009, 0101c5a7, ..., f8bc508a
129021490     00000010, caca832a, ..., 9660fca0
64376666      00000016, 47470f2b, ..., c50ccf88
64376666      00000142, 330d8ce2, ..., e401999a
11782972      000000ea, 878754b0, ..., 638a2857
39488         00020002, 4b5f4b5f, ..., 30a530a5
16934         00010001, 5d425d42, ..., 6ad56ad5
13548         00023af9, 468fbf7b, ..., 6b5493f6
13548         0005fde6, a1c7299d, ..., 8bf1558a
10756         001004ad, e474f2ac, ..., 245557ee
7582          00070007, 34923492, ..., d740d740
5640          00022db0, 60198ddf, ..., feb74bd1
5640          0015e186, 91861d8c, ..., 5d50a4a6
3560          00094090, ac1ad06d, ..., f6110e3e
1902          0000c22b, b73b421a, ..., 07a9ec2e
1902          0021e4f9, 2aa0fc18, ..., 76a21d37
548           00b800b8, 7d727d72, ..., 05a905a9
548           00c600c6, d601d601, ..., 85708570
136           01d266c5, a9fe5e55, ..., f554d80d
90            02338d7f, 3fdf63b8, ..., 3c0c694e
90            0304c1ca, f778e5ef, ..., 8683dfa2
87            f2f2f2f2, 89898989, ..., 04040404
81            7c7c7c7c, 10101010, ..., 01010101
59            00000000, 63636363, ..., 52525252
47            0112dc34, 267c8afb, ..., c406421d
47            018b9ded, b4b1024d, ..., 32926cc7
47            024db4b1, 95eed67c, ..., 9ded018b
47            03c975a2, 2d5cc9b9, ..., c0c8d6db
40            0aff4adf, bcb47f4e, ..., 1864fa71
36            03d603d6, 7af77af7, ..., 3e0a3e0a
36            07f107f1, 0d690d69, ..., 17a517a5
27            efefefef, dfdfdfdf, ..., 61616161
24            03d503d5, 8bf38bf3, ..., c6abc6ab
21            050f050f, 514c514c, ..., e344e344
21            0f050f05, 4c514c51, ..., 44e344e3
15            0e6e0e6e, c3f7c3f7, ..., ecbeecbe
15            6e0e6e0e, f7c3f7c3, ..., beecbeec
12            0327266c, 1eaab216, ..., 837b2f79
8             cac4cac4, a4cca4cc, ..., 4a2d4a2d
4             01828fc8, 5627aa2f, 8fc80182, aa2f5627
4             27aa2f56, c801828f, 2f5627aa, 828fc801
4             a37dadf5, 7dadf5a3, adf5a37d, f5a37dad
2             73737373, 8f8f8f8f
2             5da35da3, c086c086
2             a35da35d, 86c086c0
Optimal Key Ranking Procedures in a Statistical Cryptanalysis Pascal Junod and Serge Vaudenay Security and Cryptography Laboratory Swiss Federal Institute of Technology CH-1015 Lausanne, Switzerland {pascal.junod,serge.vaudenay}@epfl.ch
Abstract. Hypothesis tests have been used in the past as a tool in a cryptanalytic context. In this paper, we propose to use this paradigm and define a precise and sound statistical framework in order to optimally mix information on independent attacked subkey bits obtained from any kind of statistical cryptanalysis. In the context of linear cryptanalysis, we prove that the best mixing paradigm consists of sorting key candidates by decreasing weighted Euclidean norm of the bias vector. Keywords: Key ranking, statistical cryptanalysis, Neyman-Pearson lemma, linear cryptanalysis.
1 Introduction
Historically, statistical hypothesis tests, although well known in many engineering fields, have not been an explicitly and widely used tool in the cryptanalysis of block ciphers. Often, distinguishing procedures between two statistical distributions are proposed, but without much attention to their optimality. To the best of our knowledge, an unpublished report of Murphy, Piper, Walker and Wild [MPWW95] is the first work in which the concept of statistical hypothesis tests is discussed in the context of “modern” cryptanalysis. More recently, a paper of Fluhrer and McGrew [FM01] discussed the performance of an optimal statistical distinguisher in the cryptanalysis of a stream cipher. These tools were again used in the same context by Mironov [Mir02], by Coppersmith et al. [CHJ02], and by Golić and Menicocci [GM], for instance, while Junod [Jun03] makes use of them to derive the asymptotic behaviour of some optimal distinguishers.

1.1 Contributions of This Paper
In this paper, we propose a sound and precise statistical cryptanalytic framework which extends Vaudenay’s [Vau96]; furthermore, we describe an optimal distinguishing procedure that can be employed during any statistical cryptanalysis involving the ranking of subkey candidates. As an illustration, we apply this distinguishing procedure to the linear cryptanalysis of DES [DES77] as proposed by Matsui in [Mat94]. In the first version of
linear cryptanalysis of DES [Mat93], Matsui’s attack returns a subkey which is the correct one with high probability, while a refined version of the attack [Mat94] returns a list of subkeys sorted by maximum likelihood. This approach, which is very similar to the list-decoding paradigm in coding theory, makes it possible to decrease the number of known plaintext-ciphertext pairs needed. Although very simple to implement, Matsui’s key ranking heuristic is however not optimal. We show that by sorting the subkey candidates by decreasing sum of squares of the experimental biases, we obtain a ranking procedure which minimizes the cost of the attack’s exhaustive search part. At first sight, optimising the exhaustive search complexity of a cryptanalysis does not seem very interesting, since exhaustive search is a “cheap” operation for a cryptanalyst compared to the cost, or the difficulty, of obtaining the required amount of known plaintext-ciphertext pairs. However, we show in this paper that by optimising the exhaustive search part of a linear cryptanalysis of DES, it is possible to decrease the number of pairs needed appreciably while keeping the computational complexity within a reasonable range. In [Jun01], Junod carried out a complexity analysis and proved that Matsui’s attack against DES performs better than expected, which had already been conjectured. He further confirmed this fact experimentally with 21 linear cryptanalyses: given 2^43 known plaintext-ciphertext pairs, and a success probability equal to 85%, the computational complexity had an upper bound of 2^40.75 DES evaluations. In this paper, the power of this technique is illustrated by experimentally demonstrating that one can decrease the computational complexity of Matsui’s attack against DES by an average factor of two, or, equivalently, decrease the number of known plaintext-ciphertext pairs needed by a non-trivial factor (i.e. 31%) without an explosion of the computational complexity (i.e.
less than 2^45 DES evaluations); one can also divide the number of known pairs by two (i.e. to 2^42) while keeping the computational complexity within 2^47 DES evaluations. Other examples of potential direct applications of our optimal ranking rule are Shimoyama and Kaneko’s attack [SK98] on DES, which uses quadratic boolean relations, and Knudsen and Mathiassen’s chosen-plaintext version [KM01] of linear cryptanalysis against DES. However, the ideas behind our ranking method are not restricted to those attacks and may be applied in any statistical cryptanalysis. The rest of this paper is organized as follows: in §2, we recall Vaudenay’s statistical cryptanalysis model and Matsui’s ranking procedures; in §3, we introduce the necessary statistical tools and we propose the Neyman-Pearson ranking procedure. In §4, we apply it to a linear cryptanalysis of DES, we present some experimental results on the improvement, and we discuss potential applications to other known attacks. Finally, we give some concluding remarks in §5.

1.2 Notation
The following notation will be used throughout this paper. Random variables X, Y, . . . are denoted by capital letters, while realizations x ∈ X, y ∈ Y, . . . of random variables are denoted by small letters. That a random variable X follows a distribution D is denoted X ← D, while its probability density and distribution functions are denoted by f_D(x) and F_D(x) = Pr_{X←D}[X ≤ x] = ∫_{−∞}^{x} f_D(t) dt, respectively. When the context is clear, we will simply write Pr[X ≤ x]. Finally, as usual, “iid” means “independent and identically distributed”.

1. Counting Phase: Collect several random samples s_j = f_2(P_j, C_j), for j = 1, . . . , n, and count the occurrences of all the possible values of the s_j’s in |S| counters.
2. Analysis Phase: For each of the subkey candidates ℓ_i, 1 ≤ i ≤ |L|, compute all the x_i = f_3(ℓ_i, s_j) and give ℓ_i a mark µ_i using the statistic Σ(x_1, . . . , x_n).
3. Sorting Phase: Sort all the candidates ℓ_i using their marks µ_i. This list of sorted candidates is denoted U.
4. Searching Phase: Exhaustively try all keys following the sorted list of all the subkey candidates.

Fig. 1. Structure of a statistical cryptanalysis.
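The four-phase structure of Fig. 1 can be sketched generically as follows; f2, f3 and the mark are toy stand-ins (hypothetical), not the functions of any concrete attack:

```python
# Sketch of the four-phase statistical cryptanalysis of Fig. 1.
# f2, f3 and the mark are toy stand-ins (hypothetical), not a real attack.
from collections import Counter

def f2(p, c):                     # sample extraction (toy)
    return (p ^ c) & 0x0f

def f3(l, s):                     # intermediate information (toy)
    return (l ^ s) & 1

def attack(pairs, candidates):
    counts = Counter(f2(p, c) for p, c in pairs)          # 1. Counting Phase
    marks = {}
    for l in candidates:                                  # 2. Analysis Phase
        psi = sum(cnt for s, cnt in counts.items() if f3(l, s) == 0)
        marks[l] = abs(psi - len(pairs) / 2)              # bias-like mark
    # 3. Sorting Phase; 4. Searching Phase then follows this order.
    return sorted(candidates, key=lambda l: -marks[l])

pairs = [(p, (p * 5 + 1) & 0xff) for p in range(256)]     # toy "cipher"
ranking = attack(pairs, candidates=range(16))
```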
2 Statistical Cryptanalysis and Key Ranking Procedures
In this paper, we will assume that a given cryptanalysis can be seen as a statistical cryptanalysis in the sense of Vaudenay’s model [Vau96], and that it uses a key ranking procedure.

2.1 Statistical Cryptanalysis
We now briefly recall the principles of a statistical cryptanalysis. Let P, C and K be the plaintext, ciphertext and key spaces, respectively. A statistical cryptanalysis uses three functions, denoted f_1, f_2 and f_3, which have the following roles:

- f_1 : K → L is a function which eliminates information of the key unrelated to the cryptanalysis.
- f_2 : P × C → S, where S is called the sample space, eliminates information about the plaintext and ciphertext spaces unrelated to the attack.
- f_3 : L × S → Q, where Q is a space summarizing information depending on intermediate results in the encryption.

In order to be efficient, a statistical cryptanalysis should fulfil the following conditions: the information x = f_3(ℓ, s), where ℓ ∈ L, s ∈ S and x ∈ Q, should be computable from small pieces of information on (p, c) ∈ P × C and k ∈ K (namely, s and ℓ); furthermore, the information x = f_3(ℓ_r, s) should be statistically distinguishable from x = f_3(ℓ_w, s), where ℓ_r and ℓ_w denote the information given by the right key and a wrong key, respectively. The main idea of the attack consists in assuming that we can distinguish the right key from a wrong key with the help of a statistical measurement Σ on the observed distribution of the x_i’s. The attack is described in Fig. 1. The data complexity is then defined to be the number n of known plaintext-ciphertext pairs needed in step 1, while the computational complexity is defined to be the number of operations in the last phase
of the attack. We note that usually the complexity of steps 2 and 3 is negligible, but this may not be the case in all situations. Key ranking is a technique introduced by Matsui in [Mat94] in order to increase the success probability of a linear cryptanalysis against DES; it corresponds to step 4 in Fig. 1: instead of returning the subkey ℓ_max possessing the highest mark µ_max = max_i µ_i out of the |L| subkey candidates, the idea is to return a sorted list U containing key candidates ranked by likelihood and to search for the remaining unattacked bits in this order. Obviously, two central points in a statistical cryptanalysis are the definition of the statistic Σ and of the mark µ which has to be assigned to a subkey candidate. The first issue is the essence of the attack: the cryptanalyst must find a “statistical weakness” in the cipher. In §3, we will address the second issue in a general way using concepts of statistical hypothesis testing, and we will consider known techniques in this light; first, we recall some generic facts about linear cryptanalysis and the related ranking procedures proposed by Matsui.

2.2 Linear Cryptanalysis and Related Ranking Procedures
We briefly recall the principles of a linear cryptanalysis. The attack’s core is unbalanced linear expressions, i.e. equations involving a modulo-two sum of plaintext and ciphertext bits on the left and a modulo-two sum of key bits on the right. Such an expression is unbalanced if it is satisfied with probability¹

p = 1/2 + ε,  0 < |ε| ≤ 1/2,   (1)

when the plaintexts and the key are independent and chosen uniformly at random. Given some plaintext bits P_{i1}, . . . , P_{ir}, ciphertext bits C_{j1}, . . . , C_{js} and key bits K_{k1}, . . . , K_{kt}, and using the notation X_{[l1,...,lu]} := X_{l1} ⊕ X_{l2} ⊕ . . . ⊕ X_{lu}, we can write a linear expression as

P_{[i1,...,ir]} ⊕ C_{[j1,...,js]} = K_{[k1,...,kt]}   (2)
As this equation yields only one bit of information about the key, one usually uses a linear expression spanning all the rounds but one; it is then possible to identify the subkey involved in the last round. One can rewrite (2) as

P_{[i1,...,ir]} ⊕ C_{[j1,...,js]} ⊕ F^{(r)}_{[m1,...,mv]}(C, K^{(r)}) = K_{[k1,...,kt]}   (3)

Now, one can easily identify the abstract spaces defined in the generic model of Fig. 1: the (sub)key space L is the set of all possible values of the involved subkey

¹ In the literature, this non-linearity measure is often called the linear probability, expressed as LP^f(a, b) := (2 Pr[a · x = b · f(x)] − 1)² = 4ε², where a and b are the masks selecting the plaintext and ciphertext bits, respectively. In this paper, we will refer to the bias for simplicity reasons.
(i.e. the “interesting” bits of K^{(r)} and those of K_{[k1,...,kt]}); the sample space S is the set of all possible P_{i1}, . . . , P_{ir} and C_{j1}, . . . , C_{js}; and finally, Q consists in the binary set {0, 1} (i.e. the two possible hyperplanes). The first linear cryptanalysis phase consists in evaluating the bias, or more precisely the absolute bias (as the cryptanalyst ignores the right part of (3)), of the linear expression for all possible subkey candidates and for all known plaintext-ciphertext pairs:

Σ_ℓ := |Ψ_ℓ − n/2|   (4)

where Ψ_ℓ is the number of times where (3) is equal to 0 (for a given subkey candidate ℓ) and n is the number of known plaintext-ciphertext pairs. In a second phase, the list of subkey candidates is sorted, and the missing key bits are finally searched exhaustively for each subkey candidate until the correct full key is found. The computational complexity of the attack is then related to the number of encryptions needed in the exhaustive search part. The (implicit) mark used to sort the subkey candidates is the following:

Definition 1 (Single-List Ranking Procedure). The mark µ_ℓ given to a subkey candidate ℓ is defined to be equal to the bias

µ_ℓ := Σ_ℓ = |Ψ_ℓ − n/2|   (5)

produced by this subkey ℓ.

Interestingly, the refined version of linear cryptanalysis described in [Mat94] uses two biased linear expressions involving different² key bit subsets. The heuristic proposed by Matsui (which was based on intuition³) is the following:

Definition 2 (Double-List Ranking Procedure). Let U_1 and U_2 be two lists of subkey candidates involving disjoint key bit subsets. Sort them independently using the Single-List Ranking Procedure described in Def. 1. Let ρ_{(U)}(ℓ) be a function returning the rank of the candidate ℓ in the list U. The Double-List Ranking Procedure is then defined as follows:

1. To each candidate ℓ = (ℓ_1, ℓ_2) ∈ U_1 × U_2, assign the mark

µ_{(ℓ1,ℓ2)} := ρ_{(U1)}(ℓ_1) · ρ_{(U2)}(ℓ_2)   (6)

2. Sort the “composed” candidates by increasing marks µ_{(ℓ1,ℓ2)}.
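Def. 2 can be sketched as follows (our own illustration; the candidate names are hypothetical). Note that pairs with rank products 1·4 and 2·2 receive the same mark, which is precisely the ambiguity discussed in §3:

```python
# Sketch of Matsui's Double-List Ranking Procedure (Def. 2):
# mark = product of the two single-list ranks, sorted increasingly.
from itertools import product

def double_list_rank(u1, u2):
    # u1, u2: candidate lists already sorted by the single-list procedure
    rank1 = {l: i + 1 for i, l in enumerate(u1)}   # rho_(U1)
    rank2 = {l: i + 1 for i, l in enumerate(u2)}   # rho_(U2)
    pairs = product(u1, u2)
    return sorted(pairs, key=lambda p: rank1[p[0]] * rank2[p[1]])

u1 = ["a1", "a2", "a3"]          # hypothetical sorted candidate lists
u2 = ["b1", "b2", "b3"]
order = double_list_rank(u1, u2)
```

Here ("a1", "b1") comes first (mark 1), and ("a1", "b2") and ("a2", "b1") tie at mark 2; the procedure does not say in which order the tied pairs should be searched.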
3 An Alternative View on Ranking Procedures
In this section, we recall some well-known statistical hypothesis testing concepts, and we discuss the optimality of the two ranking procedures described above.

² The different problem of dealing with multiple linear approximations has been studied by Kaliski and Robshaw in [KR94]. However, their setting is different from ours: they handle the case where one disposes of several linear approximations acting on the same key bits, and they compute the cumulated (resulting) bias.
³ Private communication.
3.1 Hypothesis Tests
Let D_0 and D_1 be two different probability distributions defined on the same finite set X. In a binary hypothesis testing problem, one is given an element x ∈ X which was drawn according either to D_0 or to D_1, and one has to decide which is the case. For this purpose, one defines a so-called decision rule, which is a function δ : X → {0, 1} taking a sample of X as input and defining what the guess should be for each possible x ∈ X. Associated with this decision rule are two types of error probabilities: α := Pr_{X←D_0}[δ(x) = 1] and β := Pr_{X←D_1}[δ(x) = 0]. The decision rule δ defines a partition of X into two subsets, which we denote by A and A̅, i.e. A ∪ A̅ = X; A is called the acceptance region of δ. We now recall the Neyman-Pearson lemma, which derives the shape of the optimum statistical test δ between two simple hypotheses, i.e. which gives the optimal decision region A.

Lemma 1 (Neyman-Pearson). Let X be a random variable drawn according to a probability distribution D, and let the decision problem correspond to the hypotheses X ← D_0 and X ← D_1. For τ ≥ 0, let A be defined by

A := { x ∈ X : Pr_{X←D_0}[x] / Pr_{X←D_1}[x] ≥ τ }   (7)

Let α* := Pr_{X←D_0}[A̅] and β* := Pr_{X←D_1}[A]. Let B be any other decision region with associated error probabilities α and β. If α ≤ α*, then β ≥ β*.

Hence, the Neyman-Pearson lemma indicates that the optimum test (regarding error probabilities) in the case of a binary decision problem is the likelihood-ratio test. All these considerations are summarized in Def. 3.

Definition 3 (Optimal Binary Hypothesis Test). To test X ← D_0 against X ← D_1, choose a constant τ > 0 depending on α and β and define the likelihood ratio

lr(x) := Pr_{X←D_0}[x] / Pr_{X←D_1}[x]   (8)

The optimal decision function is then defined by

δ_opt = 0 (i.e. accept X ← D_0) if lr(x) ≥ τ;  δ_opt = 1 (i.e. accept X ← D_1) if lr(x) < τ   (9)

3.2 The Neyman-Pearson Ranking Procedure
We now apply the Neyman-Pearson paradigm to the ranking procedure. One defines the two hypotheses as follows: H_0 is the hypothesis that the random variable modeling the statistic Σ_ℓ (we make here a slight abuse of notation by assigning the same name to both entities) produced by a given subkey candidate ℓ is distributed according to D_R, i.e. it is distributed as the right subkey candidate, while H_1 is the hypothesis that Σ_ℓ follows the distribution D_W, i.e. it is distributed as a false subkey candidate (note that we assume here that
the “wrong-key randomization hypothesis” [HKM95] holds, i.e. that all wrong keys follow the same distribution):

H_0 : Σ_ℓ ← D_R
H_1 : Σ_ℓ ← D_W

In this scenario, a type I error (occurring with probability α) means that the correct subkey candidate ℓ_R, with Σ_{ℓ_R} ← D_R, is decided to be a wrong one; a type II error (occurring with probability β) means that one accepts a wrong candidate ℓ_W as being the right one. When performing binary hypothesis tests, one usually proceeds as follows: one chooses a fixed α that one is willing to accept, one computes the threshold τ corresponding to α, and one defines the following decision rule, given the statistic Σ_ℓ produced by the candidate ℓ:

H_0 is accepted if f_{D_R}(Σ_ℓ) / f_{D_W}(Σ_ℓ) ≥ τ;  H_1 is accepted if f_{D_R}(Σ_ℓ) / f_{D_W}(Σ_ℓ) < τ   (10)

for some τ > 0. Approximations of the Σ_ℓ distributions are known (we refer to [Jun01] for more details about the derivation of these expressions):
f_{D_W}(x) = √(8/(nπ)) · e^{−2x²/n},  for x ≥ 0   (11)

and

f_{D_R}(x) = √(2/(nπ)) · ( e^{−2(x−εn)²/n} + e^{−2(x+εn)²/n} ),  for x ≥ 0   (12)

The likelihood ratio is then given by a straightforward calculation.

Lemma 2. In the case of a linear cryptanalysis, the likelihood ratio is given by

lr(Σ_ℓ) = e^{−2nε²} · cosh(4εΣ_ℓ),  Σ_ℓ ≥ 0   (13)
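Lemma 2 can be checked numerically against the densities (11) and (12); the following sketch (with arbitrary parameter values) verifies that f_DR(x)/f_DW(x) matches (13):

```python
# Sketch: numerically check Lemma 2 against the densities (11) and (12):
# f_DR(x) / f_DW(x) should equal exp(-2*n*eps^2) * cosh(4*eps*x).
import math

def f_dw(x, n):                                  # equation (11)
    return math.sqrt(8 / (n * math.pi)) * math.exp(-2 * x * x / n)

def f_dr(x, n, eps):                             # equation (12)
    return math.sqrt(2 / (n * math.pi)) * (
        math.exp(-2 * (x - eps * n) ** 2 / n)
        + math.exp(-2 * (x + eps * n) ** 2 / n))

def lr(x, n, eps):                               # equation (13), Lemma 2
    return math.exp(-2 * n * eps * eps) * math.cosh(4 * eps * x)

ratio = f_dr(50.0, 10000, 0.01) / f_dw(50.0, 10000)   # agrees with lr(50.0, ...)
```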
We can now state the following result.

Theorem 1. Matsui’s single-list ranking procedure (as defined in Def. 1) is equivalent to a Neyman-Pearson ranking procedure and is furthermore optimal in terms of the number of key tests.

Proof: This follows easily from the fact that (13) is a monotone increasing function of Σ_ℓ ≥ 0, and that the type II error probability is monotonically increasing as the likelihood ratio is decreasing. ♦

Furthermore, one can easily observe that Matsui’s double-list ranking procedure, although very simple, is not a Neyman-Pearson ranking procedure, since it is not a total ordering procedure and it does not make use of the whole information given by each subkey candidate (i.e. it does not use the experimental bias associated with each candidate, but only their ranks). The first observation leads to some ambiguity in the implementation of Def. 2. For instance, should the combination of two candidates having respective ranks 1 and 4 be searched for the unknown key bits before or after the combination consisting of two candidates both having rank 2? In the next section, we illustrate the use of a Neyman-Pearson ranking procedure in the case of a linear cryptanalysis of DES.
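Theorem 1 can be illustrated numerically: since (13) is monotone increasing in Σ_ℓ, sorting candidates by likelihood ratio and sorting them by bias give the same order. A sketch with hypothetical counter values:

```python
# Sketch: Theorem 1 -- ranking by the likelihood ratio (13) equals ranking
# by the experimental bias |Psi - n/2| (counter values are hypothetical).
import math

n, eps = 1000, 0.01
psi = {"k1": 512, "k2": 478, "k3": 503, "k4": 530}   # hypothetical counters

def sigma(k):
    return abs(psi[k] - n / 2)

def lr(k):                                           # equation (13)
    return math.exp(-2 * n * eps * eps) * math.cosh(4 * eps * sigma(k))

by_bias = sorted(psi, key=sigma, reverse=True)
by_lr = sorted(psi, key=lr, reverse=True)            # same order, by monotonicity
```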
4 A Practical Application
Matsui's refined attack against DES [Mat94] makes use of two linear expressions involving disjoint subsets of key bits; one is the best linear expression on 14 rounds of DES, and it is used for deriving the second one using a "reversing trick". Each of them gives information about 13 key bits, the remaining 30 unknown key bits having to be searched exhaustively. We refer to [Mat94] for a detailed description of both linear approximations. In order to derive a Neyman-Pearson ranking procedure, one has to compute the joint probability distribution of the statistics Σ_{k_1} and Σ_{k_2} furnished by the two linear expressions. As these statistics depend on disjoint subsets of the key bits, one can reasonably make the following assumption.
Assumption 1. For each k_1 and k_2, Σ_{k_1} and Σ_{k_2} are statistically independent, where k_1 and k_2 denote subkey candidates involving disjoint key subsets.

A second assumption neglects the effects of semi-wrong keys, i.e. keys which behave as the right one according to one list only. This is motivated by the fact that, in the case of a linear cryptanalysis of DES, the number of such keys is small, and thus their effect on the joint probability distribution is negligible.

Assumption 2. For each k_1 and k_2, Σ_k = (Σ_{k_1}, Σ_{k_2}) is distributed according either to $\mathcal{D}_R = \mathcal{D}_R^{(1)} \times \mathcal{D}_R^{(2)}$ or to $\mathcal{D}_W = \mathcal{D}_W^{(1)} \times \mathcal{D}_W^{(2)}$, where $\mathcal{D}_R^{(1)}$ and $\mathcal{D}_R^{(2)}$ are the distributions of the right subkey for both key subsets, and $\mathcal{D}_W^{(1)}$ and $\mathcal{D}_W^{(2)}$ are the distributions of a wrong subkey for both key subsets, respectively.

Using these two assumptions, the probability density functions defined in (11) and (12), and the fact that the bias of both linear expressions is the same and equal to ε, one can derive the likelihood-ratio:

$$\mu(k_1, k_2) = e^{-4n\epsilon^2} \cdot \cosh(4\epsilon\Sigma_{k_1}) \cdot \cosh(4\epsilon\Sigma_{k_2}) \qquad (14)$$

As (14) is not "numerically" convenient to use, we may approximate it using a Taylor development in terms of ε, which gives a very intuitive definition of the Neyman-Pearson ranking procedure:

$$\mu(k_1, k_2) \approx 1 + \left(8\Sigma_{k_1}^2 + 8\Sigma_{k_2}^2 - 4n\right)\epsilon^2 + O(\epsilon^4) \qquad (15)$$

Hence, we can note that it is sufficient to rank the subkey candidates by decreasing values of $\Sigma_{k_1}^2 + \Sigma_{k_2}^2$, i.e. the final mark is just the Euclidean distance between an unbiased result and a given sample. We may generalize this result to the case where the biases, which we denote ε_1 and ε_2, are different in the two equations; in this case, the likelihood-ratio is given by

$$\mu(k_1, k_2) = e^{-2n(\epsilon_1^2 + \epsilon_2^2)} \cosh(4\epsilon_1\Sigma_{k_1}) \cosh(4\epsilon_2\Sigma_{k_2}) \qquad (16)$$

A first-order approximation is then given by

$$\mu(k_1, k_2) \approx 1 + 8\Sigma_{k_1}^2\epsilon_1^2 + 8\Sigma_{k_2}^2\epsilon_2^2 - 2n(\epsilon_1^2 + \epsilon_2^2) \qquad (17)$$

which is equivalent to assigning a grade equal to $\mu(k_1, k_2) = \Sigma_{k_1}^2\epsilon_1^2 + \Sigma_{k_2}^2\epsilon_2^2$. We summarize these facts in the following theorem.
Theorem 2. Under Assumptions 1 and 2, in a linear cryptanalysis using t approximations on disjoint key bit subsets, each having a bias equal to ε_i, 1 ≤ i ≤ t, a procedure ranking the subkey candidates by decreasing

$$\mu(k_1, \ldots, k_t) = \sum_{i=1}^{t} (\Sigma_{k_i}\epsilon_i)^2 \qquad (18)$$

is a Neyman-Pearson ranking procedure, and furthermore, it is optimal in terms of the number of key tests.

Sketch of the proof: The proof is similar to the one of Theorem 1 and follows from the fact that β is a monotone increasing function when μ(k_1, ..., k_t) is decreasing. ♦
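The grade of (18) is cheap to compute for combined candidates. A small sketch with hypothetical biases and statistics (illustrative values, not the DES data of the next section), grading every combination of candidates from two lists:

```python
# Grade of eq. (18): mu = sum_i (Sigma_i * eps_i)^2, evaluated on the
# cross-product of two candidate lists and sorted in decreasing order.
eps1, eps2 = 2**-10, 2**-11        # hypothetical biases of the two expressions

list1 = {0xA: 520.0, 0xB: 300.0}   # candidate -> experimental statistic Sigma
list2 = {0xC: 410.0, 0xD: 650.0}

def grade(s1, s2):
    return (s1 * eps1) ** 2 + (s2 * eps2) ** 2

ranking = sorted(((k1, k2) for k1 in list1 for k2 in list2),
                 key=lambda p: grade(list1[p[0]], list2[p[1]]),
                 reverse=True)
# The best-graded pair is tried first in the exhaustive-search phase.
```

Unlike Matsui's double-list heuristic, this total ordering uses the experimental biases themselves, not just the candidates' ranks, so no ambiguity arises between, e.g., a (1, 4) combination and a (2, 2) combination.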
4.1 Experimental Results
The Neyman-Pearson ranking procedure described in the previous section has been simulated in the context of 21 linear cryptanalyses of DES, using the data of [Jun01]. The following table summarises our experimental results on the complexity of the exhaustive search part of the attack given 2^43 known plaintext-ciphertext pairs; we use the following notation: µC denotes the average experimental complexity, C85% the maximal complexity given a success probability of 85%, which is the success probability defined by Matsui in [Mat94], Cmed the median, and Cmin and Cmax the extremal values.

             Matsui's Ranking   Optimal Ranking   ∆
log2 µC      41.4144            40.8723           -31.32 %
log2 C85%    40.7503            40.6022           -9.75 %
log2 Cmed    38.1267            36.7748           -60.71 %
log2 Cmin    32.1699            31.3219           -40.00 %
log2 Cmax    45.4059            44.6236           -41.86 %
These results lead to the following observations:

– The average complexity is decreased by about 30%. Actually, the average complexity is not a good statistical indicator of the average behavior of the linear cryptanalysis, because most cases have a far lower complexity and only 3 cases have a complexity greater than the average. Thus, those three cases have a considerable influence on the average complexity and it is worth examining the median behavior.

– A perhaps more significant result is that the median complexity is decreased by about 60%. Although one has to be careful with this result because of the small number of statistical samples, this value seems to reflect the real impact of the improved rule more accurately than the average does.

– Although the optimal rule decreases the complexity of the exhaustive search part on average, "pathological" cases where Matsui's heuristic is better than the Neyman-Pearson ranking procedure can occur. One can explain this by the fact that the Σ densities are sometimes bad approximations of the real ones, several heuristic assumptions being involved.

As the data complexity and the computational complexity of a linear cryptanalysis are closely related, it is possible (and desirable in the context of a known-plaintext attack) to convert a gain in the first category into a gain in the second one: even if we decrease appreciably the number of known plaintext-ciphertext pairs, the complexity will remain within reasonable bounds: for instance, given 2^42.46 known plaintext-ciphertext pairs, Ĉ85% = 2^44.46 DES evaluations, and with only 2^42 pairs, Ĉ85% = 2^46.86. These experimental values are summarized in the following table:
Data complexity   Time complexity   Success probability
2^42.00           2^46.86           85 %
2^42.46           2^44.46           85 %
2^43.00           2^40.60           85 %

4.2 Other Attacks
Several published attacks (to the best of our knowledge, all are derived from Matsui's paper) use key ranking procedures or suggest them as a potential improvement. In [SK98], Shimoyama and Kaneko use quadratic Boolean approximations of DES' S-boxes possessing a larger bias. The first part of their attack consists of a traditional linear cryptanalysis, and thus we can apply our optimal ranking procedure; furthermore, another part of their attack also consists of a sorting procedure using Matsui's heuristic. In [KM01], Knudsen and Mathiassen show how to modify Matsui's attack into a chosen-plaintext attack in order to reduce the number of pairs needed. Their attack can also use the "reversing trick", i.e. one can apply the same linear characteristic to both the encryption and decryption functions, in order to derive twice as many key bits. Once again, one could use a key-ranking procedure and our optimal rule to define the order of the subkey candidates during the exhaustive search part.
5 Conclusion

In this paper, we show that considering a statistical cryptanalysis in a hypothesis testing framework allows one to define the shape of an optimal distinguisher. We note that one can apply such a distinguisher to various published attacks, all of them being more or less related to Matsui's linear cryptanalysis as applied against DES. We demonstrate experimentally that our distinguisher, in the case of a classical linear cryptanalysis of DES, allows a non-trivial decrease in computational complexity. Simulations on 21 real attacks suggest an average complexity of 2^40.87 DES evaluations instead of 2^41.41, as stated in [Jun01]. If one accepts a 15% failure probability, which is the usual setting, the complexity has upper bound 2^40.61. Equivalently, as exhaustive search operations are typically less costly than the collection of known plaintext-ciphertext pairs, this technique makes it possible to decrease the number of needed pairs and to keep the computational complexity of the attack within cryptanalyst-friendly bounds. Our experiments led, with a success probability of 85%, to 2^44.85 DES evaluations given 2^42.46 pairs, or to 2^46.86 DES evaluations given only 2^42 pairs. Finally, we would like to point out that statistical hypothesis testing concepts seem to be very useful when considering distinguishing procedures in both theoretical and experimental settings. This seems to be confirmed by the increasing interest of the cryptology community in this kind of mathematical tool.
Acknowledgments

We would like to thank Thomas Baignères and the anonymous reviewers for useful and interesting comments.
References

[CHJ02] D. Coppersmith, S. Halevi, and C. Jutla. Cryptanalysis of stream ciphers with linear masking. In Advances in Cryptology – CRYPTO'02, volume 2442 of LNCS, pages 515–532. Springer-Verlag, 2002.
[DES77] National Bureau of Standards. Data Encryption Standard. U.S. Department of Commerce, 1977.
[FM01] S.R. Fluhrer and D.A. McGrew. Statistical analysis of the alleged RC4 keystream generator. In FSE'00, volume 1978 of LNCS, pages 19–30. Springer-Verlag, 2001.
[GM] J.D. Golić and R. Menicocci. Edit probability correlation attacks on stop/go clocked keystream generators. To appear in the Journal of Cryptology.
[HKM95] C. Harpes, G. Kramer, and J.L. Massey. A generalization of linear cryptanalysis and the applicability of Matsui's piling-up lemma. In Advances in Cryptology – EUROCRYPT'95, volume 921 of LNCS, pages 24–38. Springer-Verlag, 1995.
[Jun01] P. Junod. On the complexity of Matsui's attack. In Selected Areas in Cryptography, SAC'01, volume 2259 of LNCS, pages 199–211. Springer-Verlag, 2001.
[Jun03] P. Junod. On the optimality of linear, differential and sequential distinguishers. To appear in Advances in Cryptology – EUROCRYPT'03, LNCS. Springer-Verlag, 2003.
[KM01] L.R. Knudsen and J.E. Mathiassen. A chosen-plaintext linear attack on DES. In FSE'00, volume 1978 of LNCS, pages 262–272. Springer-Verlag, 2001.
[KR94] B.S. Kaliski and M.J.B. Robshaw. Linear cryptanalysis using multiple approximations. In Advances in Cryptology – CRYPTO'94, volume 839 of LNCS, pages 26–39. Springer-Verlag, 1994.
[Mat93] M. Matsui. Linear cryptanalysis method for DES cipher. In Advances in Cryptology – EUROCRYPT'93, volume 765 of LNCS, pages 386–397. Springer-Verlag, 1993.
[Mat94] M. Matsui. The first experimental cryptanalysis of the Data Encryption Standard. In Advances in Cryptology – CRYPTO'94, volume 839 of LNCS, pages 1–11. Springer-Verlag, 1994.
[Mir02] I. Mironov. (Not so) random shuffles of RC4. In Advances in Cryptology – CRYPTO'02, volume 2442 of LNCS, pages 304–319. Springer-Verlag, 2002.
[MPWW95] S. Murphy, F. Piper, M. Walker, and P. Wild. Likelihood estimation for block cipher keys. Technical report, Information Security Group, University of London, England, 1995.
[SK98] T. Shimoyama and T. Kaneko. Quadratic relation of S-box and its application to the linear attack of full round DES. In Advances in Cryptology – CRYPTO'98, volume 1462 of LNCS, pages 200–211. Springer-Verlag, 1998.
[Vau96] S. Vaudenay. An experiment on DES statistical cryptanalysis. In 3rd ACM Conference on Computer and Communications Security, pages 139–147. ACM Press, 1996.
Improving the Upper Bound on the Maximum Differential and the Maximum Linear Hull Probability for SPN Structures and AES

Sangwoo Park¹, Soo Hak Sung², Sangjin Lee³, and Jongin Lim³

¹ National Security Research Institute, Korea, [email protected]
² Department of Applied Mathematics, Pai Chai University, Korea, [email protected]
³ Center for Information Security Technologies (CIST), Korea University, Korea, {sangjin,jilim}@cist.korea.ac.kr
Abstract. We present a new method for upper bounding the maximum differential probability and the maximum linear hull probability for 2 rounds of SPN structures. Our upper bound can be computed for any value of the branch number of the linear transformation and incorporates the distribution of the differential probability values and linear probability values of the S-box. On application to AES, we obtain that the maximum differential probability and the maximum linear hull probability for 4 rounds of AES are bounded by 1.144 × 2^-111 and 1.075 × 2^-106, respectively.
1 Introduction
Differential cryptanalysis [2] and linear cryptanalysis [12] are the most well-known methods of analysing the security of block ciphers. Accordingly, the designer of block ciphers should evaluate the security of any proposed block cipher against differential cryptanalysis and linear cryptanalysis and prove that it is sufficiently resistant against them. The SPN (Substitution and Permutation Network) structure is one of the most commonly used structures in block ciphers. The SPN structure is based on Shannon's principles of confusion and diffusion [3], and these principles are implemented through the use of substitution and linear transformation, respectively. AES [6,14], Crypton [11], and Square [5] are block ciphers composed of SPN structures. The security of SPN structures against differential cryptanalysis and linear cryptanalysis depends on the maximum differential probability and the maximum linear hull probability. Hong et al. proved an upper bound on the maximum differential and the maximum linear hull probability for 2 rounds of SPN structures with a highly diffusive linear transformation [7]. Kang et al. generalized their result to any value of the branch number of the linear transformation [8]. In [10], Keliher et al. proposed a method for finding the upper bound on the maximum average linear hull probability for SPN structures. Application of their method to AES yields an upper bound of 2^-75 when 7 or more rounds are approximated. In [9], the upper bound on the maximum average linear hull probability for AES was improved to 2^-92 when 9 or more rounds are approximated. In [15], Park et al. proposed a method for upper bounding the maximum differential probability and the maximum linear hull probability for Rijndael-like structures. The Rijndael-like structure is a special case of SPN structures. By applying their method to AES, they obtained that the maximum differential probability and the maximum linear hull probability for 4 rounds of AES are bounded by 1.06 × 2^-96. In this paper, we present a new method for upper bounding the maximum differential probability and the maximum linear hull probability for 2 rounds of SPN structures. Our upper bound can be computed for any value of the branch number of the linear transformation and incorporates the distribution of differential probability values and linear probability values for the S-box. On application to AES, we obtain that the maximum differential probability and the maximum linear hull probability for 4 rounds of AES are bounded by 1.144 × 2^-111 and 1.075 × 2^-106, respectively.

T. Johansson (Ed.): FSE 2003, LNCS 2887, pp. 247–260, 2003. © International Association for Cryptologic Research 2003
2 Backgrounds

One round of an SPN structure generally consists of three layers: key addition, substitution, and linear transformation. In the key addition layer, round subkeys and round input values are exclusive-ored. The substitution layer is made up of n small nonlinear substitutions referred to as S-boxes, and the linear transformation layer is a linear transformation used to diffuse the cryptographic characteristics of the substitution layer. A typical example of one round of an SPN structure is given in Figure 1.
Fig. 1. One round of SPN structure.
In r rounds of an SPN structure, the linear transformation of the last round is generally omitted, because it has no cryptographic significance. Therefore, 2 rounds of an SPN structure are given in Figure 2. S-boxes and linear transformations should be invertible in order to decipher. Therefore we assume that all S-boxes are bijections from Z_2^m to itself. Moreover, throughout this paper, we assume that round subkeys are independent and uniformly distributed.
Fig. 2. 2 rounds of SPN structure.
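The layer structure just described can be sketched in a few lines of code. The 4-nibble state, the S-box and the XOR-based mixing layer below are toy choices for illustration, not taken from any of the ciphers discussed in this paper:

```python
# Toy one-round SPN: key addition, substitution, linear transformation.
SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
        0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]  # an arbitrary 4-bit bijection

def round_fn(state, subkey):
    # key addition layer: exclusive-or the round subkey into the state
    state = [s ^ k for s, k in zip(state, subkey)]
    # substitution layer: n = 4 parallel S-boxes
    state = [SBOX[s] for s in state]
    # linear transformation layer: an invertible XOR-mixing (binary matrix),
    # so the whole round remains a permutation of the state space
    return [state[0] ^ state[1],
            state[1] ^ state[2],
            state[2] ^ state[3],
            state[3]]
```

Iterating round_fn with independent subkeys, and omitting the final linear layer, gives exactly the r-round skeleton analysed in the rest of the paper.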
Let S be an S-box with m input and output bits. The differential and linear probability of S are defined as in the following definition:

Definition 1. For any given a, b, Γa, Γb ∈ Z_2^m, define the differential probability DP^S(a, b) and the linear probability LP^S(Γa, Γb) of S by

$$DP^S(a,b) = \frac{\#\{x \in Z_2^m \mid S(x) \oplus S(x \oplus a) = b\}}{2^m}$$

and

$$LP^S(\Gamma_a, \Gamma_b) = \left(\frac{\#\{x \in Z_2^m \mid \Gamma_a \cdot x = \Gamma_b \cdot S(x)\}}{2^{m-1}} - 1\right)^2,$$

respectively, where x · y denotes the parity (0 or 1) of the bitwise product of x and y. a and b are called the input and output differences, respectively. Also, Γa and Γb are called the input and output mask values, respectively. The strength of an S-box S against differential cryptanalysis is determined by the maximum differential probability, max_{a≠0,b} DP^S(a, b). The strength of an S-box S against linear cryptanalysis depends on the maximum linear probability, max_{Γa,Γb≠0} LP^S(Γa, Γb).

Definition 2. The maximum differential probability p and the maximum linear probability q of S are defined by

$$p = \max_{a \neq 0,\, b} DP^S(a,b) \quad \text{and} \quad q = \max_{\Gamma_a,\, \Gamma_b \neq 0} LP^S(\Gamma_a, \Gamma_b),$$

respectively. The maximum differential probability p and the maximum linear probability q of a strong S-box S should be small enough for any input difference a ≠ 0 and any output mask value Γb ≠ 0.
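Both probabilities can be computed by exhaustive counting over the S-box's domain. The sketch below does so for a toy 4-bit bijective S-box (an arbitrary illustrative permutation, not the AES S-box):

```python
SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]  # toy 4-bit bijection
m = 4

def dp(a, b):
    """DP^S(a,b) = #{x : S(x) XOR S(x XOR a) = b} / 2^m."""
    return sum(SBOX[x] ^ SBOX[x ^ a] == b for x in range(2**m)) / 2**m

def parity(v):
    return bin(v).count("1") & 1

def lp(ga, gb):
    """LP^S(Ga,Gb) = (#{x : Ga.x = Gb.S(x)} / 2^(m-1) - 1)^2."""
    match = sum(parity(ga & x) == parity(gb & SBOX[x]) for x in range(2**m))
    return (match / 2**(m - 1) - 1) ** 2

# Maximum differential and linear probability per Definition 2.
p = max(dp(a, b) for a in range(1, 2**m) for b in range(2**m))
q = max(lp(ga, gb) for ga in range(2**m) for gb in range(1, 2**m))
```

Since the toy S-box is a bijection, dp(a, 0) = 0 for every a ≠ 0, and the rows of the difference table each sum to 1.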
Definition 3. A differentially active S-box is defined as an S-box given a non-zero input difference, and a linearly active S-box is defined as an S-box given a non-zero output mask value.

Since all S-boxes in the substitution layer are bijective, if an S-box is differentially/linearly active, then it has a non-zero output difference/input mask value. For SPN structures, there is a close relationship between the differential probability and the number of differentially active S-boxes: when the number of differentially active S-boxes is large, the differential probability becomes small, and when the number of differentially active S-boxes is small, the differential probability becomes large. For this reason, the concept of branch number was proposed [5]. The branch number from the viewpoint of differential cryptanalysis is the minimum number of differentially active S-boxes in 2 rounds of the SPN structure. Likewise, the branch number from the viewpoint of linear cryptanalysis is the minimum number of linearly active S-boxes in 2 rounds of the SPN structure.

The linear transformation L : (Z_2^m)^n −→ (Z_2^m)^n can be represented by an n×n matrix M = (m_ij). We have L(x) = Mx, where x ∈ (Z_2^m)^n and the addition is done through bitwise exclusive-or. For the block ciphers E2 [13] and Camellia [1], m_ij ∈ Z_2 and the multiplication is trivial. For the block cipher Crypton [11], m_ij ∈ Z_2^m and the multiplication is the bitwise logical-and operation. For the block cipher Rijndael [6], m_ij ∈ GF(2^m) and the multiplication is defined as the multiplication over GF(2^m). It is easy to show that L(x) ⊕ L(x*) = L(x ⊕ x*) and DP^L(a, L(a)) = 1 [4].

Definition 4. Let L be the linear transformation over (Z_2^m)^n. The branch number of L from the viewpoint of differential cryptanalysis, βd, is defined by

$$\beta_d = \min_{x \neq 0}\{wt(x) + wt(L(x))\},$$

where wt(x) = wt(x_1, x_2, ..., x_n) = #{1 ≤ i ≤ n | x_i ≠ 0}.

Throughout this paper, we define wt(x) = wt(x_1, x_2, ..., x_n) = #{1 ≤ i ≤ n | x_i ≠ 0} when x = (x_1, x_2, ..., x_n). If x ∈ Z_2^m, then wt(x) is the Hamming weight of x. It can be proved that, if m_ij ∈ Z_2, then LP^L(M^t Γb, Γb) = 1; therefore, we know that LP^L(Γa, (M^{-1})^t Γa) = 1. Also, if m_ij ∈ GF(2^m), then it can be proved that LP^L(Γa, CΓa) = 1 for some n×n matrix C over GF(2^m) [8]. Therefore, we can define the branch number βl from the viewpoint of linear cryptanalysis as follows:

$$\beta_l = \begin{cases} \min_{\Gamma_a \neq 0}\{wt(\Gamma_a) + wt((M^{-1})^t\Gamma_a)\}, & \text{if } m_{ij} \in Z_2,\ 1 \leq i,j \leq n, \\ \min_{\Gamma_a \neq 0}\{wt(\Gamma_a) + wt(C\Gamma_a)\}, & \text{if } m_{ij} \in GF(2^m),\ 1 \leq i,j \leq n. \end{cases}$$
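For small parameters, Definition 4 can be evaluated by exhaustive search over all nonzero inputs. A sketch for a toy 2×2 matrix over GF(2^4); the reduction polynomial x^4 + x + 1 and the matrix entries are illustrative assumptions of this demo, not taken from any cipher above:

```python
def gf16_mul(a, b):
    """Multiplication in GF(2^4) with reduction polynomial x^4 + x + 1."""
    r = 0
    for _ in range(4):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x10:
            a ^= 0x13  # reduce by x^4 + x + 1
    return r

M = [[1, 1],
     [1, 2]]  # all entries and the determinant are nonzero -> MDS

def L(x):
    return [gf16_mul(M[i][0], x[0]) ^ gf16_mul(M[i][1], x[1]) for i in range(2)]

def wt(v):
    return sum(1 for c in v if c != 0)

beta_d = min(wt(x) + wt(L(x))
             for x in ([a, b] for a in range(16) for b in range(16))
             if x != [0, 0])
```

Since this toy matrix is MDS, the search attains the maximum possible value βd = n + 1 = 3; for the AES MixColumns matrix discussed in Section 4 the analogous search yields 5.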
3 Security of 2 Rounds of SPN Structures

In this section, we give an upper bound on the maximum differential probability for 2 rounds of an SPN structure. We also give an upper bound on the maximum linear hull probability.
The following lemma can be considered as a generalized Cauchy-Schwarz inequality.

Lemma 1. Let $\{x_i^{(j)}\}_{i=1}^n$, $1 \leq j \leq m$, be sequences of real numbers. Then the following inequality is satisfied:

$$\sum_{i=1}^n |x_i^{(1)} x_i^{(2)} \cdots x_i^{(m)}| \leq \left(\sum_{i=1}^n |x_i^{(1)}|^m\right)^{\frac{1}{m}} \left(\sum_{i=1}^n |x_i^{(2)}|^m\right)^{\frac{1}{m}} \cdots \left(\sum_{i=1}^n |x_i^{(m)}|^m\right)^{\frac{1}{m}}.$$

Proof. We will prove the result by using mathematical induction. For m = 2, the result is trivial. Assume that the result holds for m − 1. We have, by Hölder's inequality, that

$$\sum_{i=1}^n |x_i^{(1)} \cdots x_i^{(m-1)} x_i^{(m)}| \leq \left(\sum_{i=1}^n |x_i^{(1)} \cdots x_i^{(m-1)}|^{\frac{m}{m-1}}\right)^{\frac{m-1}{m}} \left(\sum_{i=1}^n |x_i^{(m)}|^m\right)^{\frac{1}{m}}.$$

By the induction hypothesis, the right hand side is bounded by

$$\left(\sum_{i=1}^n |x_i^{(1)}|^m\right)^{\frac{1}{m}} \cdots \left(\sum_{i=1}^n |x_i^{(m-1)}|^m\right)^{\frac{1}{m}} \left(\sum_{i=1}^n |x_i^{(m)}|^m\right)^{\frac{1}{m}}.$$

Thus, the result is proved.

From Lemma 1, we get the following lemma.

Lemma 2. Let $\{x_i^{(j)}\}_{i=1}^n$, $1 \leq j \leq m$, be sequences of real numbers. Then the following inequality is satisfied:

$$\sum_{i=1}^n |x_i^{(1)} \cdots x_i^{(m)}| \leq \max\left\{\sum_{i=1}^n |x_i^{(1)}|^m, \cdots, \sum_{i=1}^n |x_i^{(m)}|^m\right\}.$$
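As a quick numerical sanity check (illustration only, of course not part of the proof), both inequalities can be exercised on random sequences:

```python
import random

random.seed(2003)
n, m = 8, 3  # three sequences of length eight (arbitrary test sizes)

for _ in range(200):
    xs = [[random.uniform(-2, 2) for _ in range(n)] for _ in range(m)]
    lhs = sum(abs(xs[0][i] * xs[1][i] * xs[2][i]) for i in range(n))
    # Lemma 1 (generalized Hoelder): product of (sum |x|^m)^(1/m) factors.
    rhs1 = 1.0
    for j in range(m):
        rhs1 *= sum(abs(x) ** m for x in xs[j]) ** (1.0 / m)
    # Lemma 2: the geometric mean of the m factors is at most their maximum.
    rhs2 = max(sum(abs(x) ** m for x in xs[j]) for j in range(m))
    assert lhs <= rhs1 * (1 + 1e-9) <= rhs2 * (1 + 1e-9)
```

Lemma 2 follows from Lemma 1 because the right-hand side of Lemma 1 is a geometric mean of m sums, which never exceeds the largest of them.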
Theorem 1. Let βd be the branch number of the linear transformation L from the viewpoint of differential cryptanalysis. Then the maximum differential probability for 2 rounds of the SPN structure is bounded by

$$\max\left\{\max_{1 \leq i \leq n}\max_{1 \leq u \leq 2^m-1} \sum_{j=1}^{2^m-1} \{DP^{S_i}(u,j)\}^{\beta_d},\; \max_{1 \leq i \leq n}\max_{1 \leq u \leq 2^m-1} \sum_{j=1}^{2^m-1} \{DP^{S_i}(j,u)\}^{\beta_d}\right\}.$$

Proof. Let a = (a_1, ..., a_n), b = (b_1, ..., b_n) be the input difference and output difference, respectively, for 2 rounds of the SPN structure. Since DP^L(α, L(α)) = 1, the differential probability DP_2(a, b) is given as

$$DP_2(a,b) = \sum_{x}\left(\prod_{i=1}^n DP^{S_i}(a_i, x_i)\right)\left(\prod_{j=1}^n DP^{S_j}(y_j, b_j)\right),$$

where y = L(x), x = (x_1, ..., x_n), and y = (y_1, ..., y_n). Without loss of generality, we assume that a_1 ≠ 0, ..., a_k ≠ 0, a_{k+1} = 0, ..., a_n = 0, and b_1 ≠ 0, ..., b_l ≠ 0, b_{l+1} = 0, ..., b_n = 0. Note that if α ≠ 0, β = 0 or α = 0, β ≠ 0, then DP^{S_i}(α, β) = 0. Hence, it is enough to consider only the following x (and y = L(x)) in the above summation:

x_1 ≠ 0, ..., x_k ≠ 0, x_{k+1} = 0, ..., x_n = 0,
y_1 ≠ 0, ..., y_l ≠ 0, y_{l+1} = 0, ..., y_n = 0.

We let the solutions of the above system be as follows:

t    x_1 ... x_k                y_1 ... y_l
1    x_1^(1) ... x_1^(k)        y_1^(1) ... y_1^(l)
2    x_2^(1) ... x_2^(k)        y_2^(1) ... y_2^(l)
...  ...                        ...
δ    x_δ^(1) ... x_δ^(k)        y_δ^(1) ... y_δ^(l)

Then the differential probability DP_2(a, b) can be written as

$$DP_2(a,b) = \sum_{t=1}^{\delta}\left(\prod_{i=1}^k DP^{S_i}(a_i, x_t^{(i)})\right)\left(\prod_{j=1}^l DP^{S_j}(y_t^{(j)}, b_j)\right).$$

By the definition of the branch number, it follows that k + l ≥ βd. We divide the proof into two cases: k + l = βd and k + l > βd.

(Case 1: k + l = βd). In this case, we have that, for each i (1 ≤ i ≤ k), x_1^(i), ..., x_δ^(i) are distinct, because L is linear and k + l = βd. If, for some i (1 ≤ i ≤ k), x_1^(i), ..., x_δ^(i) were not distinct, then there would exist a pair of solutions (x, x') such that x^(i) = x'^(i), where x^(i) is the i-th component of x and x'^(i) is the i-th component of x', respectively. Therefore, the i-th component of x ⊕ x' is equal to zero. Since L(x) ⊕ L(x') = L(x ⊕ x'), this contradicts the definition of the branch number. We also have that, for each j (1 ≤ j ≤ l), y_1^(j), ..., y_δ^(j) are distinct. From Lemma 2, DP_2(a, b) is bounded by

$$\max\left\{\sum_{t=1}^{\delta}\{DP^{S_1}(a_1, x_t^{(1)})\}^{\beta_d}, \cdots, \sum_{t=1}^{\delta}\{DP^{S_k}(a_k, x_t^{(k)})\}^{\beta_d},\; \sum_{t=1}^{\delta}\{DP^{S_1}(y_t^{(1)}, b_1)\}^{\beta_d}, \cdots, \sum_{t=1}^{\delta}\{DP^{S_l}(y_t^{(l)}, b_l)\}^{\beta_d}\right\}$$
$$\leq \max\left\{\max_{1 \leq i \leq n}\max_{1 \leq u \leq 2^m-1}\sum_{j=1}^{2^m-1}\{DP^{S_i}(u,j)\}^{\beta_d},\; \max_{1 \leq i \leq n}\max_{1 \leq u \leq 2^m-1}\sum_{j=1}^{2^m-1}\{DP^{S_i}(j,u)\}^{\beta_d}\right\}.$$

(Case 2: k + l > βd). In this case, x_1^(i), ..., x_δ^(i) or y_1^(j), ..., y_δ^(j) are not necessarily distinct. However, when we consider the subset of solutions such that k + l − βd components are fixed (x_1 = i_1, ..., x_p = i_p, y_1 = j_1, ..., y_q = j_q), each of the other βd components takes distinct values, where 0 ≤ p ≤ k − 1, 0 ≤ q ≤ l − 1, and p + q = k + l − βd. We denote this subset of solutions by A_{i_1,...,i_p,j_1,...,j_q}. Note that A_{i_1,...,i_p,j_1,...,j_q} could be the empty set. As in Case 1 (or by Lemma 2), we obtain that

$$\sum_{(x,y) \in A_{i_1,...,i_p,j_1,...,j_q}}\left(\prod_{i=1}^k DP^{S_i}(a_i,x_i)\right)\left(\prod_{j=1}^l DP^{S_j}(y_j,b_j)\right)$$
$$= DP^{S_1}(a_1,i_1)\cdots DP^{S_p}(a_p,i_p)\, DP^{S_1}(j_1,b_1)\cdots DP^{S_q}(j_q,b_q) \times \sum_{(x,y) \in A_{i_1,...,i_p,j_1,...,j_q}}\left(\prod_{i=p+1}^k DP^{S_i}(a_i,x_i)\right)\left(\prod_{j=q+1}^l DP^{S_j}(y_j,b_j)\right)$$
$$\leq DP^{S_1}(a_1,i_1)\cdots DP^{S_p}(a_p,i_p)\, DP^{S_1}(j_1,b_1)\cdots DP^{S_q}(j_q,b_q) \times \max\left\{\max_{1 \leq i \leq n}\max_{1 \leq u \leq 2^m-1}\sum_{j=1}^{2^m-1}\{DP^{S_i}(u,j)\}^{\beta_d},\; \max_{1 \leq i \leq n}\max_{1 \leq u \leq 2^m-1}\sum_{j=1}^{2^m-1}\{DP^{S_i}(j,u)\}^{\beta_d}\right\}$$
$$=: p_{i_1,...,i_p,j_1,...,j_q}.$$

Thus DP_2(a, b) is bounded by

$$\sum_{i_1=1}^{2^m-1}\cdots\sum_{i_p=1}^{2^m-1}\sum_{j_1=1}^{2^m-1}\cdots\sum_{j_q=1}^{2^m-1} p_{i_1,...,i_p,j_1,...,j_q} = \max\left\{\max_{1 \leq i \leq n}\max_{1 \leq u \leq 2^m-1}\sum_{j=1}^{2^m-1}\{DP^{S_i}(u,j)\}^{\beta_d},\; \max_{1 \leq i \leq n}\max_{1 \leq u \leq 2^m-1}\sum_{j=1}^{2^m-1}\{DP^{S_i}(j,u)\}^{\beta_d}\right\}.$$

From Cases 1 and 2, the result is proved.

Corollary 1. Let βd be the branch number of the linear transformation L from the viewpoint of differential cryptanalysis. Then the maximum differential probability for 2 rounds of the SPN structure is bounded by p^{βd−1}, where p is the maximum differential probability of the S-boxes.
Proof. By Theorem 1, the maximum differential probability for 2 rounds of the SPN structure is bounded by

$$p^{\beta_d-1} \times \max\left\{\max_{1 \leq i \leq n}\max_{1 \leq u \leq 2^m-1}\sum_{j=1}^{2^m-1} DP^{S_i}(u,j),\; \max_{1 \leq i \leq n}\max_{1 \leq u \leq 2^m-1}\sum_{j=1}^{2^m-1} DP^{S_i}(j,u)\right\} = p^{\beta_d-1}.$$

Theorem 2. Let βl be the branch number of the linear transformation L from the viewpoint of linear cryptanalysis. The maximum linear hull probability for 2 rounds of the SPN structure is bounded by

$$\max\left\{\max_{1 \leq i \leq n}\max_{1 \leq u \leq 2^m-1}\sum_{j=1}^{2^m-1}\{LP^{S_i}(u,j)\}^{\beta_l},\; \max_{1 \leq i \leq n}\max_{1 \leq u \leq 2^m-1}\sum_{j=1}^{2^m-1}\{LP^{S_i}(j,u)\}^{\beta_l}\right\}.$$

Corollary 2. Let βl be the branch number of the linear transformation L from the viewpoint of linear cryptanalysis. Then the maximum linear hull probability for 2 rounds of the SPN structure is bounded by q^{βl−1}, where q is the maximum linear probability of the S-boxes.

Hong et al. proved Corollaries 1 and 2 when βl = n + 1 or n [7]. Kang et al. proved them for any value of the branch number of the linear transformation [8].
4 Security of AES

AES is a block cipher composed of SPN structures, and its linear transformation consists of the ShiftRows transformation and the MixColumns transformation. Let π : (Z_2^8)^16 −→ (Z_2^8)^16 be the ShiftRows transformation of AES. Let x = (x_1, x_2, x_3, x_4) = (x_11, x_12, x_13, x_14, x_21, ..., x_34, x_41, x_42, x_43, x_44) be the input of π. Figure 3 illustrates the ShiftRows transformation π of AES.
Fig. 3. ShiftRows transformation of AES.
Let y = (y_1, y_2, y_3, y_4) = (y_11, y_12, y_13, y_14, y_21, ..., y_34, y_41, y_42, y_43, y_44) be the output of π. It is easy to check that, for any i (i = 1, 2, 3, 4), each byte of y_i comes from a different x_i. For example, for y_1 = (y_11, y_12, y_13, y_14) = (x_11, x_22, x_33, x_44), x_11 is a byte coming from x_1. Furthermore, x_22, x_33 and x_44 are elements of x_2, x_3 and x_4, respectively.

The MixColumns transformation of AES operates on the state column by column, treating each column as a four-term polynomial. Let θ = (θ_1, θ_2, θ_3, θ_4) be the MixColumns transformation of AES. Let y = (y_1, y_2, y_3, y_4) = (y_11, y_12, y_13, y_14, y_21, ..., y_34, y_41, y_42, y_43, y_44) be the input of θ and z = (z_1, z_2, z_3, z_4) = (z_11, z_12, z_13, z_14, z_21, ..., z_34, z_41, z_42, z_43, z_44) be the output of θ, respectively. Each θ_i can be written as a matrix multiplication as follows:

$$\begin{pmatrix} z_{i1} \\ z_{i2} \\ z_{i3} \\ z_{i4} \end{pmatrix} = \begin{pmatrix} 02 & 03 & 01 & 01 \\ 01 & 02 & 03 & 01 \\ 01 & 01 & 02 & 03 \\ 03 & 01 & 01 & 02 \end{pmatrix} \cdot \begin{pmatrix} y_{i1} \\ y_{i2} \\ y_{i3} \\ y_{i4} \end{pmatrix}.$$

In the matrix multiplication, the addition is just bitwise exclusive-or and the multiplication is defined as the multiplication over GF(2^8). We can consider each θ_i as a linear transformation, and we know that the branch number of each θ_i is 5. In [15], the upper bound on the maximum differential probability for 2 rounds of the Rijndael-like structure is obtained as follows:

Definition 5. Rijndael-like structures are the block ciphers composed of SPN structures satisfying the following:
(i) Their linear transformation has the form (θ_1, θ_2, θ_3, θ_4) ∘ π.
(ii) (The condition on π) Each byte of y_i comes from a different x_i, where x = (x_1, x_2, x_3, x_4) is the input of π and y = (y_1, y_2, y_3, y_4) is the output of π, respectively.
(iii) (The condition on θ = (θ_1, θ_2, θ_3, θ_4)) When we consider each θ_i as a linear transformation, the following hold: β_d^{θ_1} = β_d^{θ_2} = β_d^{θ_3} = β_d^{θ_4} and β_l^{θ_1} = β_l^{θ_2} = β_l^{θ_3} = β_l^{θ_4}.

Definition 6. For x = (x_1, ..., x_n), the pattern of x, γ_x, is defined by γ_x = (γ_1, ..., γ_n) ∈ Z_2^n, where γ_i = 0 if x_i = 0, and γ_i = 1 if x_i ≠ 0.

Theorem 3 ([15]).

$$DP_2(a,b) \leq \begin{cases} p^{wt(\gamma_{\pi(a)})(\beta_d-1)}, & \text{if } \gamma_{\pi(a)} = \gamma_b, \\ 0, & \text{otherwise.} \end{cases}$$

By Theorem 3, the upper bound on the maximum differential probability for 2 rounds of Rijndael-like structures is p^{βd−1}. By applying Theorem 3 to AES, it is obtained that the maximum differential probability for 2 rounds of AES is bounded by 2^-24, because βd = 5 and p = 2^-6. Note that this result depends only on the maximum differential probability of the S-box.
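The column operation above is easy to exercise in code. The following sketch implements one MixColumns column over GF(2^8) (with the AES reduction polynomial x^8 + x^4 + x^3 + x + 1) and checks the linearity property L(x) ⊕ L(x*) = L(x ⊕ x*) mentioned in Section 2:

```python
def xtime(a):
    """Multiply by x in GF(2^8) with the AES polynomial 0x11B."""
    a <<= 1
    return (a ^ 0x1B) & 0xFF if a & 0x100 else a

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a, b = xtime(a), b >> 1
    return r

# The MixColumns matrix of theta_i given in the text.
M = [[2, 3, 1, 1], [1, 2, 3, 1], [1, 1, 2, 3], [3, 1, 1, 2]]

def mix_column(col):
    return [gf_mul(M[i][0], col[0]) ^ gf_mul(M[i][1], col[1]) ^
            gf_mul(M[i][2], col[2]) ^ gf_mul(M[i][3], col[3]) for i in range(4)]

x, xs = [0x01, 0x23, 0x45, 0x67], [0x89, 0xAB, 0xCD, 0xEF]
lxor = mix_column([u ^ v for u, v in zip(x, xs)])
assert [u ^ v for u, v in zip(mix_column(x), mix_column(xs))] == lxor
```

Linearity over XOR is exactly the property that makes DP^L(a, L(a)) = 1: a fixed input difference a always produces the output difference L(a).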
By applying our result in place of Theorem 3, a new upper bound on the maximum differential probability for 2 rounds of AES can be obtained. We apply Theorem 1 to 2 rounds of AES. Let S be the S-box of AES. If a nonzero a ∈ Z_2^8 is fixed and b varies over Z_2^8, then the distribution of the differential probability of the S-box, DP^S(a, b), is independent of a, and is given in Table 1. In Table 1, ρ_i is the differential probability and π_i is the number of occurrences of ρ_i. If a nonzero b ∈ Z_2^8 is fixed and a varies over Z_2^8, then the same distribution is obtained.

Table 1. The distribution of differential probability for the AES S-box.

i     1      2      3
ρ_i   2^-6   2^-7   0
π_i   1      126    129

From Theorem 1 and Table 1, we have

$$DP_2^{\theta_i}(a,b) \leq \sum_{j=1}^{255} \{DP^S(1,j)\}^5 \approx 1.23 \times 2^{-28}.$$
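The constant 1.23 × 2^-28 follows directly from the distribution in Table 1 with βd = 5; a small sketch reproducing it:

```python
# Sum of DP^S(1, j)^5 over the output differences of the AES S-box, using
# the distribution of Table 1: one output difference occurs with probability
# 2^-6, 126 with 2^-7, and the remaining 129 values (including b = 0, which
# contributes nothing to the sum over nonzero j) with probability 0.
beta_d = 5
distribution = [(2**-6, 1), (2**-7, 126), (0.0, 129)]

bound = sum(count * rho**beta_d for rho, count in distribution)

# bound = 2^-30 + 126 * 2^-35 = 158 * 2^-35, i.e. about 1.234 * 2^-28,
# matching the constant 1.23 * 2^-28 quoted in the text.
assert abs(bound / 2**-28 - 158 / 128) < 1e-12
```

Note that the bound exploits the full distribution of the difference table, not just the maximum p = 2^-6, which is why it improves on the p^{βd−1} = 2^-24 bound of Theorem 3.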
Theorem 4. When γ_{π(a)} = γ_b, the upper bound on the maximum differential probability of 2 rounds of AES is the following:

$$DP_2(a,b) \leq (1.23 \times 2^{-28})^{wt(\gamma_{\pi(a)})}.$$

Therefore, the maximum differential probability of 2 rounds of AES is bounded by 1.23 × 2^-28.

To compute the upper bound on the maximum differential probability for 4 rounds of AES, we need the following notation:

– x^(i) = (x_1^(i), ..., x_4^(i)) = (x_11^(i), x_12^(i), x_13^(i), x_14^(i), ..., x_41^(i), x_42^(i), x_43^(i), x_44^(i)): the input of π at the i-th round.
– y^(i) = (y_1^(i), ..., y_4^(i)) = (y_11^(i), y_12^(i), y_13^(i), y_14^(i), ..., y_41^(i), y_42^(i), y_43^(i), y_44^(i)): the output of π at the i-th round, i.e. the input of θ at the i-th round.
– z^(i) = (z_1^(i), ..., z_4^(i)) = (z_11^(i), z_12^(i), z_13^(i), z_14^(i), ..., z_41^(i), z_42^(i), z_43^(i), z_44^(i)): the output of θ at the i-th round.

Theorem 5. The differential probability for 4 rounds of AES is bounded by 1.144 × 2^-111.

Proof. We compute the upper bound on DP_4(a, b) according to the values of wt(γ_{π(a)}) and wt(b). Since βd = 5, if wt(γ_{π(a)}) + wt(b) ≤ 4, then DP_4(a, b) = 0. Therefore, it is sufficient to compute the upper bound on DP_4(a, b) when wt(γ_{π(a)}) + wt(b) ≥ 5.

(Case 1: wt(γ_{π(a)}) = 4). By Theorem 4,

$$DP_4(a,b) = \sum_{x^{(2)}} DP_2(a, x^{(2)})\, DP_2(z^{(2)}, b) \leq \max_{x^{(2)}} DP_2(a, x^{(2)}) \leq (1.23 \times 2^{-28})^4 \approx 1.144 \times 2^{-111}.$$
Improving the Upper Bound
257
(Case 2: wt(b) = 4). By Theorem 4, DP4 (a, b) =
DP2 (a, x(2) )DP2 (z (2) , b) ≤ max DP2 (z (2) , b) z (2)
x(2)
≤ (1.23 × 2−28 )4 ≈ 1.144 × 2−111 . (Case 3: wt(γπ(a) ) = 2 and wt(b) = 3). We assume that γπ(a) = (1, 1, 0, 0) and γb = (1, 1, 1, 0). Then we can represent DP4 (a, b) as follows: DP4 (a, b) =
DP2 (a, x(2) )DP2 (z (2) , b)
x(2)
=
4
DP2 (a, x(2) )DP1 (z (2) , b) =: I + II + III + IV.
i=1 x(2) ,wt(z (2) )=i (2)
(2)
(3)
We know that wt(yi ) ≤ wt(x(2) ) = wt(γπ(a) ) = 2 and wt(zi ) = wt(xi ) ≤ (2) (2) wt(b) = 3. Since βdθi = 5, we obtain that wt(yi ) = 2 and wt(zi ) = 3, where (2) (2) yi and zi are the nonzero components of y (2) and z (2) , respectively. Note that (2) (2) yi is the input mask of θi and zi is the output mask of θi . Now, we compute the value of I. We can represent I as follows:
    I = Σ_{x^{(2)}, γ_{y^{(2)}}=(1,0,0,0)} DP_2(a, x^{(2)}) DP_2(y^{(2)}, b)
      + Σ_{x^{(2)}, γ_{y^{(2)}}=(0,1,0,0)} DP_2(a, x^{(2)}) DP_2(y^{(2)}, b)
      + Σ_{x^{(2)}, γ_{y^{(2)}}=(0,0,1,0)} DP_2(a, x^{(2)}) DP_2(y^{(2)}, b)
      + Σ_{x^{(2)}, γ_{y^{(2)}}=(0,0,0,1)} DP_2(a, x^{(2)}) DP_2(y^{(2)}, b)
      =: I_1 + I_2 + I_3 + I_4.

At first, we compute the value of I_1. Since γ_{x^{(2)}} = γ_{π(a)} = (1, 1, 0, 0), γ_{z^{(2)}} = (1, 0, 0, 0), and wt(y_1^{(2)}) = 2, from the definition of π we obtain that x^{(2)} = (x_{11}^{(2)}, 0, 0, 0, 0, 0, 0, x_{24}^{(2)}, 0, 0, 0, 0, 0, 0, 0, 0). Furthermore, since γ_{z^{(2)}} = γ_{x^{(3)}}, γ_{z^{(3)}} = γ_{y^{(3)}} = γ_b = (1, 1, 1, 0), and wt(z_1^{(2)}) = 3, we obtain that z^{(2)} = (z_{11}^{(2)}, z_{12}^{(2)}, z_{13}^{(2)}, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0). Here (x_{11}^{(2)}, 0, 0, x_{24}^{(2)}) and (z_{11}^{(2)}, z_{12}^{(2)}, z_{13}^{(2)}, 0) are the nonzero input difference and output difference of θ_1, respectively. Since β_d(θ_1) = 5, each of x_{11}^{(2)}, x_{24}^{(2)}, z_{11}^{(2)}, z_{12}^{(2)}, z_{13}^{(2)} is nonzero. Therefore, we can establish the following:
    I_1 = Σ_{x^{(2)}, γ_{y^{(2)}}=(1,0,0,0)} DP_2^{θ_1}(a_1^*, (x_{11}^{(2)}, 0, 0, 0)) DP_2^{θ_2}(a_2^*, (0, 0, 0, x_{24}^{(2)})) DP_2(y^{(2)}, b)
        ≤ P^4 Σ_{x_{11}^{(2)}} DP_2^{θ_1}(a_1^*, (x_{11}^{(2)}, 0, 0, 0)),

where P = 1.23 × 2^{-28} is the upper bound of DP_2^{θ_i}(a, b). By applying the same method, the upper bounds of I_2, I_3 and I_4 can be determined, and we obtain

    I ≤ P^4 ( Σ_{x_{11}^{(2)}} DP_2^{θ_1}(a_1^*, (x_{11}^{(2)}, 0, 0, 0)) + Σ_{x_{12}^{(2)}} DP_2^{θ_1}(a_1^*, (0, x_{12}^{(2)}, 0, 0))
            + Σ_{x_{13}^{(2)}} DP_2^{θ_1}(a_1^*, (0, 0, x_{13}^{(2)}, 0)) + Σ_{x_{14}^{(2)}} DP_2^{θ_1}(a_1^*, (0, 0, 0, x_{14}^{(2)})) ).
Using the same method, we arrive at the following:

    II  ≤ P^4 Σ_{wt(x_1^{(2)})=2} DP_2^{θ_1}(a_1^*, x_1^{(2)}),
    III ≤ P^4 Σ_{wt(x_1^{(2)})=3} DP_2^{θ_1}(a_1^*, x_1^{(2)}),
    IV  ≤ P^4 Σ_{wt(x_1^{(2)})=4} DP_2^{θ_1}(a_1^*, x_1^{(2)}).

Therefore,

    DP_4(a, b) ≤ I + II + III + IV ≤ P^4 Σ_{x_1^{(2)}} DP_2^{θ_1}(a_1^*, x_1^{(2)}) = P^4 = (1.23 × 2^{-28})^4 ≈ 1.144 × 2^{-111}.
(Case 4: wt(γ_{π(a)}) = 3 and wt(b) = 2). The proof is similar to that of Case 3, and we arrive at DP_4(a, b) ≤ (1.23 × 2^{-28})^4 ≈ 1.144 × 2^{-111}.

(Case 5: wt(γ_{π(a)}) = 3 and wt(b) = 3). The proof is again similar to that of Case 3, and we arrive at DP_4(a, b) ≤ (1.23 × 2^{-28})^4 ≈ 1.144 × 2^{-111}.

The distribution of the linear probability values LP^S(a, b) for the AES S-box is given in Table 2. In the table, ρ_i is the linear probability value and φ_i is the number of occurrences of ρ_i. From Theorem 2 and Table 2, we have LP_2^{θ_i}(a, b) ≤ Σ_{j=1}^{255} {LP^S(1, j)}^5 ≈ 1.44 × 2^{-27}. Using a method similar to that of Theorem 5, we can compute the upper bound on the linear hull probability for 4 rounds of AES.
Table 2. The distribution of linear probability values for the AES S-box.

    i    |    1     |    2     |    3     |    4     |    5     |    6     |    7     |    8     |  9
    ρ_i  | (8/64)^2 | (7/64)^2 | (6/64)^2 | (5/64)^2 | (4/64)^2 | (3/64)^2 | (2/64)^2 | (1/64)^2 |  0
    φ_i  |    5     |    16    |    36    |    24    |    34    |    40    |    36    |    48    | 17
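As a quick sanity check, the numerical constants above can be reproduced directly. The sketch below (an editorial addition, not part of the paper) derives the 2-round bound 1.23 × 2^{-28} from the standard differential profile of the AES S-box (for each nonzero input difference, one output difference has probability 4/256 and 126 have probability 2/256), and the bound 1.44 × 2^{-27} from Table 2:

```python
# Consistency check of the numerical constants (editorial sketch).

# DP bound: sum of DP^S(a, b)^5 over b, using the standard AES S-box
# differential profile (one 4/256 entry and 126 entries of 2/256 per row).
dp_bound = (4 / 256) ** 5 + 126 * (2 / 256) ** 5
assert 1.22 < dp_bound / 2 ** -28 < 1.24       # ~ 1.23 x 2^-28

# 4-round bound of Theorem 5: (1.23 x 2^-28)^4 ~ 1.144 x 2^-111.
assert 1.14 < (1.23 * 2 ** -28) ** 4 / 2 ** -111 < 1.15

# LP bound: sum over Table 2 of phi_i * rho_i^5 ~ 1.44 x 2^-27.
rho = [(c / 64) ** 2 for c in (8, 7, 6, 5, 4, 3, 2, 1)] + [0.0]
phi = [5, 16, 36, 24, 34, 40, 36, 48, 17]
assert sum(phi) == 256                         # the counts cover all output masks
lp_bound = sum(p * r ** 5 for p, r in zip(phi, rho))
assert 1.43 < lp_bound / 2 ** -27 < 1.45       # ~ 1.44 x 2^-27
```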
Theorem 6. The linear probability of 4 rounds of AES is bounded by (1.44 × 2^{-27})^4 ≈ 1.075 × 2^{-106}.

We know that the differential probabilities for r (r ≥ 5) rounds of AES are smaller than or equal to the maximum differential probability for 4 rounds of AES:

    DP_5(a, b) = Σ_{x^{(4)}} DP_4(a, x^{(4)}) DP_1(z^{(4)}, b) ≤ max_{x^{(4)}} DP_4(a, x^{(4)}).

Therefore, the upper bound on the maximum differential probability in Theorem 5 is also an upper bound for r (r ≥ 5) rounds of AES. Similarly, the maximum linear hull probability for 4 rounds of AES in Theorem 6 is an upper bound for r (r ≥ 5) rounds of AES.
5 Conclusion
In this paper, we have obtained a new upper bound on the maximum differential probability and the maximum linear hull probability for 2 rounds of the SPN structure. Our upper bound can be computed for any value of the branch number of the linear transformation. By applying this result, we have proved that the maximum differential probability for 4 rounds of AES is bounded by 1.144 × 2^{-111}. Also, we have proved that the maximum linear hull probability for 4 rounds of AES is bounded by 1.075 × 2^{-106}.
References

1. Kazumaro Aoki, Tetsuya Ichikawa, Masayuki Kanda, Mitsuru Matsui, Shiho Moriai, Junko Nakajima, and Toshio Tokita. Camellia: A 128-bit block cipher suitable for multiple platforms - design and analysis. In Douglas R. Stinson and Stafford Tavares, editors, Selected Areas in Cryptography, volume 2012 of Lecture Notes in Computer Science, pages 39–56. Springer, 2000.
2. Eli Biham and Adi Shamir. Differential cryptanalysis of DES-like cryptosystems. Journal of Cryptology, 4(1):3–72, 1991.
3. C. E. Shannon. Communication theory of secrecy systems. Bell System Technical Journal, 28:656–715, October 1949.
4. Joan Daemen, René Govaerts, and Joos Vandewalle. Correlation matrices. In Bart Preneel, editor, Fast Software Encryption, Second International Workshop, volume 1008 of Lecture Notes in Computer Science, pages 275–285. Springer, 1994.
5. Joan Daemen, Lars R. Knudsen, and Vincent Rijmen. The block cipher Square. In Eli Biham, editor, Fast Software Encryption, 4th International Workshop, volume 1267 of Lecture Notes in Computer Science, pages 149–165. Springer, 1997.
6. Joan Daemen and Vincent Rijmen. Rijndael, AES Proposal. http://www.nist.gov/aes, 1998.
7. Seokhie Hong, Sangjin Lee, Jongin Lim, Jaechul Sung, Donghyeon Cheon, and Inho Cho. Provable security against differential and linear cryptanalysis for the SPN structure. In Bruce Schneier, editor, Fast Software Encryption, 7th International Workshop, volume 1978 of Lecture Notes in Computer Science, pages 273–283. Springer, 2000.
8. Ju-Sung Kang, Seokhie Hong, Sangjin Lee, Okyeon Yi, Choonsik Park, and Jongin Lim. Practical and provable security against differential and linear cryptanalysis for substitution-permutation networks. ETRI Journal, 23(4):158–167, 2001.
9. Liam Keliher, Henk Meijer, and Stafford Tavares. Improving the upper bound on the maximum average linear hull probability for Rijndael. In Serge Vaudenay and Amr M. Youssef, editors, Selected Areas in Cryptography, 8th Annual International Workshop, volume 2259 of Lecture Notes in Computer Science, pages 112–128. Springer, 2001.
10. Liam Keliher, Henk Meijer, and Stafford Tavares. New method for upper bounding the maximum average linear hull probability for SPNs. In Birgit Pfitzmann, editor, Advances in Cryptology - Eurocrypt 2001, volume 2045 of Lecture Notes in Computer Science, pages 420–436. Springer-Verlag, Berlin, 2001.
11. Chae Hoon Lim. CRYPTON, AES Proposal. http://www.nist.gov/aes, 1998.
12. Mitsuru Matsui. Linear cryptanalysis method for DES cipher. In Tor Helleseth, editor, Advances in Cryptology - Eurocrypt '93, volume 765 of Lecture Notes in Computer Science, pages 386–397. Springer-Verlag, Berlin, 1994.
13. NTT-Nippon Telegraph and Telephone Corporation. E2: Efficient Encryption algorithm, AES Proposal. http://www.nist.gov/aes, 1998.
14. National Institute of Standards and Technology. FIPS PUB 197: Advanced Encryption Standard (AES), November 2001.
15. Sangwoo Park, Soo Hak Sung, Seongtaek Chee, E-Joong Yoon, and Jongin Lim. On the security of Rijndael-like structures against differential and linear cryptanalysis. In Yuliang Zheng, editor, Advances in Cryptology - Asiacrypt 2002, volume 2501 of Lecture Notes in Computer Science, pages 176–191. Springer, 2002.
Linear Approximations of Addition Modulo 2^n

Johan Wallén

Laboratory for Theoretical Computer Science, Helsinki University of Technology
P.O. Box 5400, FIN-02015 HUT, Espoo, Finland
[email protected]

Abstract. We present an in-depth algorithmic study of the linear approximations of addition modulo 2^n. Our results are based on a fairly simple classification of the linear approximations of the carry function. Using this classification, we derive a Θ(log n)-time algorithm for computing the correlation of linear approximations of addition modulo 2^n, an optimal algorithm for generating all linear approximations with a given non-zero correlation coefficient, and determine the distribution of the correlation coefficients. In the generation algorithms, one or two of the selection vectors can optionally be fixed. The algorithms are practical and easy to implement.

Keywords: Linear approximations, correlation, modular addition, linear cryptanalysis.
1 Introduction
Linear cryptanalysis [8] is one of the most powerful general cryptanalytic methods for block ciphers proposed to date. Since its introduction, resistance against this attack has been a standard design goal for block ciphers. Although some design methodologies to achieve this goal have been proposed (for example [12, 10, 4, 13]), many block ciphers are still designed in a rather ad hoc manner, or are dictated by other primary design goals. For these ciphers, it is important to have efficient methods for evaluating their resistance against linear cryptanalysis.

At the heart of linear cryptanalysis lies the study of the correlation of linear approximate relations between the input and output of functions. Good linear approximations of ciphers are usually found heuristically by forming trails consisting of linear approximations of the components of the cipher. In order to search the space of linear trails, e.g. using a branch-and-bound algorithm (see e.g. [5, 9, 1]), we need efficient methods for computing the correlation of linear approximations of the simplest components of the cipher, as well as methods for generating the relevant approximations of the components. Towards this goal, we study a few basic functions often used in block ciphers.

Currently, block ciphers are usually built from local nonlinear mappings, global linear mappings, and arithmetic operations. The mixture of linear mappings and arithmetic operations seems fruitful, since they are suitable for software implementation, and their mixture is difficult to analyse mathematically.

T. Johansson (Ed.): FSE 2003, LNCS 2887, pp. 261–273, 2003. © International Association for Cryptologic Research 2003
While the latter property intuitively should make standard cryptanalysis intractable, it also makes it difficult to say something concrete about the security of the cipher. Perhaps the simplest arithmetic operations in wide use are addition and subtraction modulo 2^n. Interestingly, good tools for studying linear approximations of even these simple mappings have not appeared in the literature to date. In this paper, we consider algorithms for two important problems for linear approximations of these operations: computing the correlation of any given linear approximation, and generating all approximations with a correlation coefficient of a given absolute value.

Our results are based on a fairly simple classification of the linear approximations of the carry function. Using this classification, we derive Θ(log n)-time algorithms for computing the correlation of linear approximations of addition and subtraction modulo 2^n in a standard RAM model of computation. The classification also gives optimal (that is, linear in the size of the output) algorithms for generating all linear approximations of addition or subtraction with a given non-zero correlation. In the generation algorithms, one or two of the selection vectors may optionally be fixed. As a simple corollary, we determine closed-form expressions for the distribution of the correlation coefficients. We hope that our results will facilitate advanced linear cryptanalysis of ciphers using modular arithmetic. Similar results with respect to differential cryptanalysis [2] are discussed in [7, 6]. The simpler case with one addend fixed is considered in [11] with respect to both linear and differential cryptanalysis.

In the next section, we discuss linear approximations and some preliminary results. In Sect. 3, we derive our classification of linear approximations of the carry function, and the corresponding results for addition and subtraction. Using this classification, we then present the Θ(log n)-time algorithm for computing the correlation of linear approximations in Sect. 4, and the generation algorithms in Sect. 5.
2 Preliminaries

2.1 Linear Approximations
Linear cryptanalysis [8] views (a part of) the cipher as a relation between the plaintext, the ciphertext and the key, and tries to approximate this relation using linear relations. The following standard terminology is convenient for discussing these linear approximations. Let f, g: F_2^n → F_2 be Boolean functions. The correlation between f and g is defined by

    c(f, g) = 2^{1-n} |{x ∈ F_2^n | f(x) = g(x)}| - 1.

This is simply the probability, taken over x, that f(x) = g(x), scaled to a value in [-1, 1]. Let u = (u_{m-1}, . . . , u_0)^t ∈ F_2^m and w = (w_{n-1}, . . . , w_0)^t ∈ F_2^n be binary column vectors, and let h: F_2^n → F_2^m. Let w · x = w_{n-1}x_{n-1} + ··· + w_1x_1 + w_0x_0 denote the standard dot product. Define the linear function l_w: F_2^n → F_2 by l_w(x) = w · x for all w ∈ F_2^n. A linear approximation of h is an approximate
relation of the form u · h(x) = w · x. Such a linear approximation will be denoted by the formal expression u ←h− w (an arrow labeled by h), or simply u ← w when h is clear from context. Its efficiency is measured by its correlation C(u ←h− w), defined by

    C(u ←h− w) = c(l_u ∘ h, l_w).

Here, u and w are the output and input selection vectors, respectively.
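As a small illustration (an editorial sketch, with an arbitrary, hypothetical choice of f and g), the correlation can be computed directly from its definition:

```python
# c(f, g) = 2^{1-n} * #{x : f(x) = g(x)} - 1, by direct enumeration.
def correlation(f, g, n):
    agree = sum(f(x) == g(x) for x in range(1 << n))
    return 2.0 ** (1 - n) * agree - 1

# Example with n = 3: f = majority of the three input bits, g = x_0
# (both chosen arbitrarily for illustration).
f = lambda x: (x & 1) + ((x >> 1) & 1) + ((x >> 2) & 1) >= 2
g = lambda x: bool(x & 1)
# majority agrees with x_0 on 6 of the 8 inputs, so c(f, g) = 1/2
assert correlation(f, g, 3) == 0.5
```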
2.2 Fourier Analysis
There is a well-known Fourier-based framework for studying linear approximations [3]. Let f: F_2^n → F_2 be a Boolean function. The corresponding real-valued function f̂: F_2^n → R is defined by f̂(x) = (-1)^{f(x)}. With this notation, c(f, g) = 2^{-n} Σ_{x ∈ F_2^n} f̂(x)ĝ(x). Note also that f + g ↔ f̂ĝ. Recall that an algebra A over a field F is a ring such that A is a vector space over F, and a(xy) = (ax)y = x(ay) for all a ∈ F and x, y ∈ A.

Definition 1. Let B_n = ⟨f̂ | f: F_2^n → F_2⟩ be the real algebra generated by the n-variable Boolean functions.

As usual, the addition, multiplication, and multiplication by scalars are given by (ξ + η)(x) = ξ(x) + η(x), (ξη)(x) = ξ(x)η(x) and (aξ)(x) = a(ξ(x)) for all ξ, η ∈ B_n and a ∈ R. The algebra B_n is of course unital and commutative. The vector space B_n is turned into an inner product space by adopting the standard inner product for real-valued discrete functions:

    ⟨ξ, η⟩ = 2^{-n} Σ_{x ∈ F_2^n} (ξη)(x),   for all ξ, η ∈ B_n.
For Boolean functions f, g: F_2^n → F_2, we have ⟨f̂, ĝ⟩ = c(f, g). Since the set of linear functions {l̂_w | w ∈ F_2^n} forms an orthonormal basis for B_n, every ξ ∈ B_n has a unique representation

    ξ = Σ_{w ∈ F_2^n} α_w l̂_w,   where α_w = ⟨ξ, l̂_w⟩ ∈ R.

The corresponding Fourier transform F: B_n → B_n is given by F(ξ) = Ξ, where Ξ is the mapping w ↦ ⟨ξ, l̂_w⟩. This is usually called the Walsh-Hadamard transform of ξ. For a Boolean function f: F_2^n → F_2, the Fourier transform F̂ = F(f̂) simply gives the correlation between f and the linear functions: F̂(w) = c(f, l_w). For ξ, η ∈ B_n, their convolution ξ ⊗ η ∈ B_n is given by

    (ξ ⊗ η)(x) = Σ_{t ∈ F_2^n} ξ(x + t)η(t).
Clearly, B_n is a commutative, unital real algebra also under convolution as multiplication. The unity is the function δ such that δ(0) = 1 and δ(x) = 0 for x ≠ 0. As usual, the Fourier transform is an algebra isomorphism between the commutative, unital real algebras ⟨B_n, +, ·⟩ and ⟨B_n, +, ⊗⟩.

Let f: F_2^n → F_2^m be a Boolean function. Since the correlation of a linear approximation of f is given by C(u ←f− w) = F(l_u ∘ f)(w), the correlation of linear approximations can conveniently be studied using the Fourier transform. Since l_u ∘ f can be expressed as Σ_{i: u_i = 1} f_i, where f_i denotes the ith component of f, we have the convolutional representation

    C(u ←f− w) = ( ⊗_{i: u_i = 1} F̂_i )(w),

where F̂_i = F(f̂_i). Especially when using the convolutional representation, it will be convenient to consider C(u ←f− w) as a function of w with u fixed.
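To make the convolutional representation concrete, the following sketch (an editorial addition, using an arbitrary, hypothetical 3-bit to 2-bit map f) checks that the convolution of the component transforms F̂_0 ⊗ F̂_1 equals the directly computed correlation C(u ←f− w) for u = (1, 1):

```python
n = 3
F_MAP = [0, 3, 1, 2, 3, 0, 2, 1]   # hypothetical f: F_2^3 -> F_2^2

def wH(x):
    return bin(x).count("1")

def transform(fbits):              # F(f^)(w) = 2^-n sum_x (-1)^(f(x) + w.x)
    return [sum((-1) ** ((fbits[x] + wH(w & x)) & 1) for x in range(1 << n)) / 2 ** n
            for w in range(1 << n)]

def conv(xi, eta):                 # (xi (x) eta)(w) = sum_t xi(w + t) eta(t)
    return [sum(xi[w ^ t] * eta[t] for t in range(1 << n)) for w in range(1 << n)]

F0 = transform([F_MAP[x] & 1 for x in range(1 << n)])         # component f_0
F1 = transform([(F_MAP[x] >> 1) & 1 for x in range(1 << n)])  # component f_1

# correlation of u . f(x) = w . x for u = (1, 1), computed directly ...
direct = transform([wH(F_MAP[x]) & 1 for x in range(1 << n)])
# ... and via the convolutional representation
assert all(abs(a - b) < 1e-12 for a, b in zip(direct, conv(F0, F1)))
```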
3 Linear Approximations of Addition Modulo 2^n

3.1 k-Independent Recurrences
We will take a slightly abstract approach to deriving algorithms for studying linear approximations of addition modulo 2^n, since this approach might turn out to be useful also for some related mappings. The key to the algorithms is a certain class of k-independent recurrences. The name comes from the fact that they will be used to express the correlation of linear approximations of functions whose ith output bit is independent of the (i + k)th input bit and higher.

We let e_i ∈ F_2^n denote the vector whose ith component is 1 and the others 0. If x ∈ F_2^n, x̄ denotes the component-wise complement of x: x̄_i = x_i + 1. Let eq: F_2^n × F_2^n → F_2^n be defined by eq(x, y)_i = 1 if and only if x_i = y_i. That is, eq(x, y) is the complement of x + y. For x, y ∈ F_2^n, we let xy = (x_{n-1}y_{n-1}, . . . , x_1y_1, x_0y_0)^t denote their component-wise product.

Definition 2. A function f: F_2^n × F_2^n → R is k-independent if f(x, y) = 0 whenever x_j = 1 or y_j = 1 for some j ≥ k. Let r_0, r: F_2^n × F_2^n → R be k-independent functions. A recurrence R_i = R_i^{r_0, r} is k-independent if it has the form

    R_0(x, y) = r_0(x, y), and
    R_{i+1}(x, y) = (1/2) ( r(x^{i+k}, y) + r(x, y^{i+k}) + R_i(x, y) - R_i(x^{i+k}, y^{i+k}) )

for i ≥ 0, where we for compactness have denoted z^{i+k} = z + e_{i+k}. Note that R_j is a (k + j)-independent function for all j.

Note that k-independent recurrences can be computed efficiently, provided that we can compute the base cases r and r_0 efficiently. The crucial observation is
that at most one of the terms in the expression for R_{i+1} is non-zero, and that we can determine which of the four terms might be non-zero by looking only at x_{i+k} and y_{i+k}. The four terms correspond to the cases (x_{i+k}, y_{i+k}) = (1, 0), (0, 1), (0, 0), and (1, 1), respectively. This observation yields the following lemma.

Lemma 1. Let R_i = R_i^{r_0, r} be a k-independent recurrence. Then R_0(x, y) = r_0(x, y), and

    R_{i+1}(x, y) = (1/2) r(x ē_{i+k}, y ē_{i+k}),                   if x_{i+k} ≠ y_{i+k}, and
    R_{i+1}(x, y) = (1/2) (-1)^{x_{i+k}} R_i(x ē_{i+k}, y ē_{i+k}),  if x_{i+k} = y_{i+k},

where ē_{i+k} denotes the complement of e_{i+k}.

It turns out that the k-independent recurrences of interest can be solved by finding a certain type of common prefix of the arguments. Towards this end, we define the common prefix mask of a vector.

Definition 3. The common prefix mask cpm_i^k: F_2^n → F_2^n is defined by cpm_i^k(x)_j = 1 if and only if k ≤ j < k + i and x_ℓ = 1 for all j < ℓ < k + i.

Let wH(x) = |{i | x_i ≠ 0}| denote the Hamming weight of x ∈ F_2^n.

Lemma 2. Let R_i = R_i^{r_0, r} be a k-independent recurrence. Denote r_1 = r, and let z = cpm_i^k(eq(x, y)), ℓ = wH(z) and s = (-1)^{wH(zxy)}. Let b = 0 if xz = yz, and let b = 1 otherwise. Then

    R_i(x, y) = s · 2^{-ℓ} r_b(x z̄, y z̄).

Proof. For i = 0, cpm_0^k(eq(x, y)) = 0, ℓ = 0, s = 1, and b = 0. Thus the lemma holds for i = 0, so consider i + 1. Let x' = x ē_{i+k}, y' = y ē_{i+k}, z' = cpm_i^k(eq(x', y')), ℓ' = wH(z'), s' = (-1)^{wH(z'x'y')}, and let b' = 0 if x'z' = y'z', and b' = 1 otherwise. By Lemma 1, there are two cases to consider. If x_{i+k} ≠ y_{i+k}, then z = e_{i+k}, ℓ = 1, s = 1, and b = 1. In this case s · 2^{-ℓ} r_b(x z̄, y z̄) = (1/2) r(x ē_{i+k}, y ē_{i+k}) = R_{i+1}(x, y). If x_{i+k} = y_{i+k}, then z = e_{i+k} + z', ℓ = ℓ' + 1, s = s'(-1)^{x_{i+k}}, and b = b'. In this case, s · 2^{-ℓ} r_b(x z̄, y z̄) = (1/2)(-1)^{x_{i+k}} · s' 2^{-ℓ'} r_{b'}(x' z̄', y' z̄') = (1/2)(-1)^{x_{i+k}} R_i(x', y') = R_{i+1}(x, y).

We will next consider the convolution of k-independent recurrences.

Lemma 3. Let R_i = R_i^{δ, δ} be a 0-independent recurrence, and let f: F_2^n × F_2^n → R be k-independent. Define S_i = R_{i+k} ⊗ f, s = f, and s_0 = R_k ⊗ f. Then S_i = S_i^{s_0, s} is a k-independent recurrence.

Proof. Clearly, s_0 and s are k-independent. Furthermore, S_0 = R_k ⊗ f = s_0 by definition. Finally, 2S_{i+1}(x, y) = 2(R_{(i+k)+1} ⊗ f)(x, y) = ((δ(x^{i+k}, y) + δ(x, y^{i+k}) + R_{i+k}(x, y) - R_{i+k}(x^{i+k}, y^{i+k})) ⊗ f)(x, y) = f(x^{i+k}, y) + f(x, y^{i+k}) + (R_{i+k} ⊗ f)(x, y) - (R_{i+k} ⊗ f)(x^{i+k}, y^{i+k}), where we have used the notation z^{i+k} = z + e_{i+k}.
3.2 Linear Approximations of the Carry Function
In this subsection, we derive a classification of the linear approximations of the carry function modulo 2^n. It will turn out that the correlation of an arbitrary linear approximation of the carry function can be expressed as a recurrence of the type studied in the previous subsection. We will identify the vectors in F_2^n and the elements of Z_{2^n} using the natural correspondence (x_{n-1}, . . . , x_1, x_0)^t ∈ F_2^n ↔ x_{n-1}2^{n-1} + ··· + x_1·2^1 + x_0·2^0 ∈ Z_{2^n}. To avoid confusion, we sometimes use ⊕ and ⊞ to denote addition in F_2^n and Z_{2^n}, respectively.

Definition 4. Let carry: F_2^n × F_2^n → F_2^n be the carry function for addition modulo 2^n, defined by carry(x, y) = x ⊕ y ⊕ (x ⊞ y), and let c_i = carry_i denote the ith component of the carry function for i = 0, . . . , n - 1.

Note that the ith component of the carry function can be computed recursively as c_0(x, y) = 0, and c_{i+1}(x, y) = 1 if and only if at least two of x_i, y_i and c_i(x, y) are 1. By considering the 8 possible values of x_i, y_i and c_i(x, y), we see that ĉ_0(x, y) = 1 and

    ĉ_{i+1}(x, y) = (1/2) ( (-1)^{x_i} + (-1)^{y_i} + ĉ_i(x, y) - (-1)^{x_i + y_i} ĉ_i(x, y) ).

Thus we have

Lemma 4. The Fourier transform of the carry function ĉ_i is given by the recurrence Ĉ_0(v, w) = δ(v, w), and

    Ĉ_{i+1}(v, w) = (1/2) ( δ(v + e_i, w) + δ(v, w + e_i) + Ĉ_i(v, w) - Ĉ_i(v + e_i, w + e_i) ),

for i = 0, . . . , n - 1. Note that this indeed is a 0-independent recurrence.

In the sequel, we will need a convenient notation for stripping off ones from the high end of vectors.

Definition 5. Let x ∈ F_2^n and ℓ ∈ {0, . . . , n}. Define strip(x) to be the vector in F_2^n that results when the highest component that is 1 in x (if any) is set to 0. By convention, strip(0) = 0. Similarly, let strip(ℓ, x) denote the vector that results when all but the ℓ lowest ones in x have been set to zero. For example, strip(2, 1011101) = 0000101.

Let u ∈ F_2^n and let {i | u_i = 1} = {k_1, . . . , k_m} with k_ℓ < k_{ℓ+1}. Define j_0 = 0 and j_{ℓ+1} = k_{ℓ+1} - k_ℓ for ℓ = 0, . . . , m - 1. Then

    C(u ←carry− v, w) = ( ⊗_{i: u_i = 1} Ĉ_i )(v, w) = ( Ĉ_{k_1} ⊗ ··· ⊗ Ĉ_{k_m} )(v, w).

Define a sequence of recurrences S_{0,i}, . . . , S_{m,i} by

    S_{0,i} = δ, and S_{ℓ+1,i} = Ĉ_{i+k_ℓ} ⊗ S_{ℓ, j_ℓ},
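The recurrence of Lemma 4 is easy to check numerically for a small word size; the sketch below (an editorial addition) compares it against the Walsh-Hadamard transform of carry_i computed by direct enumeration:

```python
N = 3                                 # word size, kept small for enumeration

def wH(x):
    return bin(x).count("1")

def walsh(i, v, w):                   # C^_i(v, w) = 2^-2N sum (-1)^(c_i + v.x + w.y)
    s = sum((-1) ** ((((x ^ y ^ ((x + y) % (1 << N))) >> i & 1)
                      + wH(v & x) + wH(w & y)) & 1)
            for x in range(1 << N) for y in range(1 << N))
    return s / 4 ** N

# Lemma 4: C^_0 = delta, and
# C^_{i+1}(v, w) = (delta(v+e_i, w) + delta(v, w+e_i)
#                   + C^_i(v, w) - C^_i(v+e_i, w+e_i)) / 2.
C = {(v, w): float(v == 0 and w == 0)
     for v in range(1 << N) for w in range(1 << N)}
for i in range(N - 1):
    e = 1 << i
    C = {(v, w): ((v == e and w == 0) + (v == 0 and w == e)
                  + C[(v, w)] - C[(v ^ e, w ^ e)]) / 2
         for v in range(1 << N) for w in range(1 << N)}
    assert all(abs(C[(v, w)] - walsh(i + 1, v, w)) < 1e-12
               for v in range(1 << N) for w in range(1 << N))
```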
for ℓ = 0, . . . , m - 1. The crucial observation is that

    S_{ℓ, j_ℓ}(v, w) = C(strip(ℓ, u) ←carry− v, w)

for all ℓ. Thus, C(u ←carry− v, w) = S_{m, j_m}(v, w).

Lemma 5. Let S_{ℓ,i}, j_ℓ, and k_ℓ be as above. Define s_ℓ and s'_ℓ by s_1 = s'_1 = δ, and for ℓ > 0 by s_{ℓ+1} = S_{ℓ, j_ℓ} and s'_{ℓ+1} = s_ℓ. Then S_{ℓ,i} = S_{ℓ,i}^{s'_ℓ, s_ℓ} is a k_{ℓ-1}-independent recurrence for all ℓ > 0, where k_0 = 0.

Proof. For ℓ = 1, the result is clear. If S_{ℓ,i} = S_{ℓ,i}^{s'_ℓ, s_ℓ} is a k_{ℓ-1}-independent recurrence for some ℓ ≥ 1, then S_{ℓ, j_ℓ} is a (j_ℓ + k_{ℓ-1}) = k_ℓ-independent function. By Lemma 3, S_{ℓ+1,i} = S_{ℓ+1,i}^{f_0, f} is a k_ℓ-independent recurrence with f = S_{ℓ, j_ℓ} = s_{ℓ+1} and f_0 = Ĉ_{k_ℓ} ⊗ S_{ℓ, j_ℓ} = Ĉ_{k_ℓ} ⊗ (Ĉ_{j_ℓ + k_{ℓ-1}} ⊗ S_{ℓ-1, j_{ℓ-1}}) = Ĉ_{k_ℓ} ⊗ Ĉ_{k_ℓ} ⊗ S_{ℓ-1, j_{ℓ-1}} = S_{ℓ-1, j_{ℓ-1}} = s'_{ℓ+1} (since Ĉ_{k_ℓ} ⊗ Ĉ_{k_ℓ} = F(ĉ_{k_ℓ} ĉ_{k_ℓ}) = F(1) = δ).

For any function f, we let f^0 denote the identity function and f^{i+1} = f ∘ f^i. Lemmas 2 and 5 now give

Lemma 6. The correlation of any linear approximation of the carry function is given recursively as follows. First, C(0 ←carry− v, w) = δ(v, w). Second, if u ≠ 0, let j ∈ {0, . . . , n - 1} be maximal such that u_j = 1. If strip(u) ≠ 0, let k be maximal such that strip(u)_k = 1. Otherwise, let k = 0. Denote i = j - k. Let z = cpm_i^k(eq(v, w)), ℓ = wH(z), and s = (-1)^{wH(zvw)}. If vz = wz, set b = 2; set b = 1 otherwise. Then

    C(u ←carry− v, w) = s · 2^{-ℓ} C(strip^b(u) ←carry− v z̄, w z̄).

Our next goal is to extract all the common prefix masks computed in the previous lemma, and to combine them into a single common prefix mask depending on u. This gives a more convenient formulation of the previous lemma.

Definition 6. The common prefix mask cpm: F_2^n × F_2^n → F_2^n is defined recursively as follows. First, cpm(0, y) = 0. Second, if x ≠ 0, let j be maximal such that x_j = 1. If strip(x) ≠ 0, let k be maximal such that strip(x)_k = 1. Otherwise, let k = 0. Denote i = j - k and z = cpm_i^k(y). If zy = z, set b = 2; set b = 1 otherwise. Then

    cpm(x, y) = cpm_i^k(y) + cpm(strip^b(x), y).

Theorem 1. Let u, v, w ∈ F_2^n, and let z = cpm(u, eq(v, w)). Then

    C(u ←carry− v, w) = 0,                              if v z̄ ≠ 0 or w z̄ ≠ 0, and
    C(u ←carry− v, w) = (-1)^{wH(vw)} · 2^{-wH(z)},     otherwise.

Since the only nonlinear part of addition modulo 2^n is the carry function, it should be no surprise that the linear properties of addition reduce completely to those of the carry function. Subtraction is also straightforward: when we are approximating the relation x ⊟ y = z by u ←⊟− v, w, we are actually approximating the relation z ⊞ y = x by v ←⊞− u, w. With this observation, it is trivial to prove
Lemma 7. Let u, v, w ∈ F_2^n. The correlations of linear approximations of addition and subtraction modulo 2^n are given by

    C(u ←⊞− v, w) = C(u ←carry− v + u, w + u), and
    C(u ←⊟− v, w) = C(v ←carry− u + v, w + v).

Moreover, the mappings (u, v, w) ↦ (u, v + u, w + u) and (u, v, w) ↦ (v, u + v, w + v) are permutations of (F_2^n)^3.
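The classification can be exercised end-to-end for a small word size. The following reference sketch (an editorial addition; it follows the recursive Definitions 3, 5 and 6 rather than the O(log n) algorithm of Sect. 4) computes C(u ←carry− v, w) via Theorem 1 and checks it, together with the reductions of Lemma 7, against brute-force correlations:

```python
n = 3
MASK = (1 << n) - 1

def wH(x):
    return bin(x).count("1")

def strip(x):                        # clear the highest one of x (Definition 5)
    return x & ~(1 << (x.bit_length() - 1)) if x else 0

def cpm_ki(k, i, y):                 # Definition 3: z_j = 1 iff k <= j < k + i
    z = 0                            # and y_l = 1 for all j < l < k + i
    for j in range(k, k + i):
        if all((y >> l) & 1 for l in range(j + 1, k + i)):
            z |= 1 << j
    return z

def cpm(x, y):                       # Definition 6, recursively
    if x == 0:
        return 0
    j = x.bit_length() - 1
    k = strip(x).bit_length() - 1 if strip(x) else 0
    z = cpm_ki(k, j - k, y)
    b = 2 if z & y == z else 1
    return z | cpm(strip(strip(x)) if b == 2 else strip(x), y)

def corr_thm1(u, v, w):              # Theorem 1
    z = cpm(u, ~(v ^ w) & MASK)      # eq(v, w) is the complement of v + w
    if v & ~z or w & ~z:
        return 0.0
    return (-1) ** wH(v & w) * 2.0 ** -wH(z)

def corr_brute(h, u, v, w):          # C(u <-h- v, w) by enumeration
    t = sum((-1) ** ((wH(u & h(x, y)) + wH(v & x) + wH(w & y)) & 1)
            for x in range(1 << n) for y in range(1 << n))
    return t / 4 ** n

carry = lambda x, y: x ^ y ^ ((x + y) & MASK)
add = lambda x, y: (x + y) & MASK
sub = lambda x, y: (x - y) & MASK

for u in range(1 << n):
    for v in range(1 << n):
        for w in range(1 << n):
            assert corr_thm1(u, v, w) == corr_brute(carry, u, v, w)        # Theorem 1
            assert corr_brute(add, u, v, w) == corr_thm1(u, v ^ u, w ^ u)  # Lemma 7
            assert corr_brute(sub, u, v, w) == corr_thm1(v, u ^ v, w ^ v)  # Lemma 7
```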
4 The Common Prefix Mask

4.1 RAM Model
We will use a standard RAM model of computation consisting of n-bit memory cells, logical and arithmetic operations, and conditional branches. Specifically, we will use bitwise and (∧), or (∨), exclusive or (⊕) and complement (x ↦ x̄), logical shifts (≪ and ≫), and addition and subtraction modulo 2^n (⊞ and ⊟). As a notational convenience, we will allow our algorithms to return values of the form s2^{-k}, where s ∈ {0, 1, -1}. In our RAM model, this can be handled by returning s and k in two registers.
4.2 Computing cpm
To make the domain of cpm clear, we write cpm_n = cpm: F_2^n × F_2^n → F_2^n. We will extend the definition of cpm to a 3-parameter version.

Definition 7. Let cpm_n: {0, 1} × F_2^n × F_2^n → F_2^n be defined by cpm_n(b, x, y) = (z_{n-1}, . . . , z_0)^t, where z = cpm_{n+1}((b, x)^t, (0, y)^t).

Lemma 8 (Splitting lemma). Let n = k + ℓ with k, ℓ > 0. For any vector x ∈ F_2^n, let x^L ∈ F_2^k and x^R ∈ F_2^ℓ be such that x = (x^L, x^R)^t. Then

    cpm_n(x, y) = (cpm_k(x^L, y^L), cpm_ℓ(b, x^R, y^R))^t,

where b = x̄^L_0 if (y^L_0, cpm_k(x^L, y^L)_0) = (1, 1), and b = x^L_0 otherwise.

Proof. Let w = wH(x^L) and z^L = cpm_k(x^L, y^L). If w = 0, the result is trivial. If w = 1 and x^L_0 = 1, then b = 1 and the result holds. If w = 1 and x^L_0 = 0, then b = 1 if and only if z^L_0 = 1 and y^L_0 = 1. If w = 2 and x^L_0 = 1, then b = 0 if and only if z^L_0 = 1 and y^L_0 = 1. Finally, if w = 2 and x^L_0 = 0, or w > 2, the result follows by induction.

Using this lemma, we can easily come up with a Θ(log n)-time algorithm for computing cpm_n(x, y). For simplicity, we assume that n is a power of two (if not, the arguments can be padded with zeros). The basic idea is to compute both cpm_n(0, x, y) and cpm_n(1, x, y) by splitting the arguments into halves, recursively computing the masks for the halves in parallel in a bit-sliced manner, and then combining the correct upper halves with the correct lower halves using the splitting lemma. Applying this idea bottom-up gives the following algorithm.
Theorem 2. Let n be a power of 2, let α^{(i)} ∈ F_2^n consist of blocks of 2^i ones and 2^i zeros starting from the least significant end (e.g. α^{(1)} = 0011···0011), and let x, y ∈ F_2^n. The following algorithm computes cpm(x, y) using Θ(log n) time and constant space in addition to the Θ(log n) space used for the constants α^{(i)}.

1. Initialise β = 1010···1010, z_0 = 0, and z_1 = 11···11.
2. For i = 0, . . . , log_2 n - 1, do
   (a) Let γ_b = ((y ∧ z_b ∧ x̄) ∨ ((ȳ ∨ z̄_b) ∧ x)) ∧ β for b ∈ {0, 1}.
   (b) Set γ_b ← γ_b ⊟ (γ_b ≫ 2^i) for b ∈ {0, 1}.
   (c) Let t_b = (z_b ∧ ᾱ^{(i)}) ∨ (z_0 ∧ γ̄_b ∧ α^{(i)}) ∨ (z_1 ∧ γ_b) for b ∈ {0, 1}.
   (d) Set z_b ← t_b for b ∈ {0, 1}.
   (e) Set β ← (β ≪ 2^i) ∧ ᾱ^{(i+1)}.
3. Return z_0.

Note that α^{(i)} and the values of β used in the algorithm depend only on n. For convenience, we introduce the following notation. Let β^{(i)} ∈ F_2^n be such that β^{(i)}_ℓ = 1 iff ℓ - 2^i is a non-negative multiple of 2^{i+1} (e.g. β^{(1)} = 0100···01000100). For b ∈ {0, 1}, let

    z^{(i)}(b, x, y) = (cpm_{2^i}(b, x^{(n/2^i - 1)}, y^{(n/2^i - 1)}), . . . , cpm_{2^i}(b, x^{(0)}, y^{(0)}))^t,

where x = (x^{(n/2^i - 1)}, . . . , x^{(0)})^t and y = (y^{(n/2^i - 1)}, . . . , y^{(0)})^t are split into 2^i-bit blocks. We also let x → y, z denote the function "if x then y else z". That is, x → y, z = (x ∧ y) ∨ (x̄ ∧ z).

Proof (of Theorem 2). The algorithm clearly terminates in time Θ(log n) and uses constant space in addition to the masks α^{(i)}. The initial value of β can also be constructed in logarithmic time. We show by induction on i that β = β^{(i)} and z_b = z^{(i)}(b, x, y) at the start of the ith iteration of the for-loop. For i = 0, this clearly holds, so let i ≥ 0. Consider the vectors x, y and z_b split into 2^{i+1}-bit blocks, and let x', y', and z'_b denote one of these blocks. After step 2a, γ'_{b,ℓ} = (y'_ℓ ∧ z'_{b,ℓ}) → x̄'_ℓ, x'_ℓ when ℓ - 2^i is a multiple of 2^{i+1}, and γ'_{b,ℓ} = 0 otherwise. Let ξ denote the bit of γ_b corresponding to the middle bit of the block under consideration. By induction and the splitting lemma, cpm(b, x', y') = (cpm(b, x'^L, y'^L), cpm(ξ, x'^R, y'^R))^t. After step 2b, a block of the form χ0···0 in γ_b has been transformed into a block of the form 0χχ···χ. In step 2c, the upper half of each block z'_b is combined with the corresponding lower half of the block z'_ξ to give t'_b = cpm(b, x', y'). That is, t_b = z^{(i+1)}(b, x, y). Finally, β = β^{(i+1)} after step 2e.

Since the Hamming weight can be computed in time O(log n), we have the following corollary.
Corollary 1. Let u, v, w ∈ F_2^n. The correlation coefficients C(u ←⊞− v, w) and C(u ←⊟− v, w) can be computed in time Θ(log n) (using the algorithm in Theorem 2 and the expressions in Theorem 1 and Lemma 7).
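The O(log n)-time Hamming weight computation used here is standard; a textbook SWAR popcount for 32-bit words (an editorial sketch, not taken from the paper) looks as follows:

```python
def popcount32(x: int) -> int:
    x = x - ((x >> 1) & 0x55555555)                  # 2-bit partial sums
    x = (x & 0x33333333) + ((x >> 2) & 0x33333333)   # 4-bit partial sums
    x = (x + (x >> 4)) & 0x0F0F0F0F                  # 8-bit partial sums
    return ((x * 0x01010101) & 0xFFFFFFFF) >> 24     # accumulate the four bytes

assert all(popcount32(v) == bin(v).count("1")
           for v in (0, 1, 0x80000000, 0xFFFFFFFF, 0xDEADBEEF))
```

The mask-and-add steps halve the number of partial sums each time, giving the logarithmic operation count mentioned above.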
5 Generating Approximations
In this section, we derive a recursive description of the linear approximations u ←carry− v, w with a given non-zero correlation coefficient. For simplicity, we only consider the absolute values of the correlation coefficients. The recursive description immediately gives optimal generation algorithms for the linear approximations. By Theorem 1, the magnitude of C(u ←carry− v, w) is either zero or a power of 1/2. Thus, we start by considering the set of vectors (u, v, w) ∈ (F_2^n)^3 such that C(u ←carry− v, w) = ±2^{-k}.

We will use the splitting lemma to determine the approximations u ←carry− v, w with non-zero correlation and wH(cpm_n(u, eq(v, w))) = k. Note that

    cpm_n(x, y) = (cpm_{n-1}(x^L, y^L), cpm_1(b, x_0, y_0))^t,

where b = x̄^L_0 if (y^L_0, cpm_{n-1}(x^L, y^L)_0) = (1, 1), and b = x^L_0 otherwise. Now, cpm_1(b, x_0, y_0) = 1 iff b = 1 iff either x^L_0 = 0 and (y^L_0, cpm_{n-1}(x^L, y^L)_0) = (1, 1), or x^L_0 = 1 and (y^L_0, cpm_{n-1}(x^L, y^L)_0) ≠ (1, 1). Let the {0, 1}-valued function b_n be defined by b_n(x, y) = 1 iff either x_0 = 0 and (y_0, cpm_n(x, y)_0) = (1, 1), or x_0 = 1 and (y_0, cpm_n(x, y)_0) ≠ (1, 1). Let

    F(n, k) = {(u, v, w) ∈ (F_2^n)^3 | C(u ←carry− v, w) = ±2^{-k}, b_n(u, eq(v, w)) = 1}, and
    G(n, k) = {(u, v, w) ∈ (F_2^n)^3 | C(u ←carry− v, w) = ±2^{-k}, b_n(u, eq(v, w)) = 0}.

Let A(n, k) = {(u, v, w) ∈ (F_2^n)^3 | C(u ←carry− v, w) = ±2^{-k}}. Then A(n, k) is formed from F(n - 1, k - 1) and G(n - 1, k) by appending any three bits to the approximations in F(n - 1, k - 1) (since u_0 and eq(v, w)_0 are arbitrary, and cpm_n(u, eq(v, w))_0 = 1), and by appending {(0, 0, 0), (1, 0, 0)} to the approximations in G(n - 1, k) (since u_0 is arbitrary and cpm_n(u, eq(v, w))_0 = 0). Let S = {(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)} and T = {(0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1)}, and denote y = eq(v, w). We denote concatenation simply by juxtaposition. The set F(n, k) can be divided into two cases.
1. The vectors with wH(cpm_{n-1}(u^L, y^L)) = k, b_{n-1}(u^L, y^L) = 0, and b_n(u, y) = 1. Since (u_0, y_0) ∈ {(1, 0), (1, 1)} and cpm_n(u, y)_0 = 0, this set equals G(n - 1, k)(1, 0, 0).
2. The vectors with wH(cpm_{n-1}(u^L, y^L)) = k - 1, b_{n-1}(u^L, y^L) = 1, and b_n(u, y) = 1. Since (u_0, y_0) ∈ {(0, 1), (1, 0)} and cpm_n(u, y)_0 = 1, this set equals F(n - 1, k - 1)S.

That is, F(n, k) = G(n - 1, k)(1, 0, 0) ∪ F(n - 1, k - 1)S. Clearly, F(1, 0) = {(1, 0, 0)} and F(n, k) = ∅ when k < 0 or k ≥ n. Similarly, G(n, k) can be divided into two cases:

1. The vectors with wH(cpm_{n-1}(u^L, y^L)) = k, b_{n-1}(u^L, y^L) = 0, and b_n(u, y) = 0. Since (u_0, y_0) ∈ {(0, 0), (0, 1)} and cpm_n(u, y)_0 = 0, this set equals G(n - 1, k)(0, 0, 0).
2. The vectors with wH(cpm_{n-1}(u^L, y^L)) = k - 1, b_{n-1}(u^L, y^L) = 1, and b_n(u, y) = 0. Since (u_0, y_0) ∈ {(0, 0), (1, 1)} and cpm_n(u, y)_0 = 1, this set equals F(n - 1, k - 1)T.
Linear Approximations of Addition Modulo 2^n
That is, G(n, k) = G(n − 1, k)(0, 0, 0) ∪ F(n − 1, k − 1)T. Clearly, G(1, 0) = {(0, 0, 0)} and G(n, k) = ∅ when k < 0 or k ≥ n.

Theorem 3. Let A(n, k) = {(u, v, w) ∈ (F_2^n)^3 | C(u ←carry− v, w) = ±2^{−k}}. Then

  A(n, k) = F(n − 1, k − 1)(F_2 × F_2 × F_2) ∪ G(n − 1, k){(0, 0, 0), (1, 0, 0)} ,

where F and G are as follows. Let S = {(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)} and T = {(0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1)}. First, F(1, 0) = {(1, 0, 0)}, G(1, 0) = {(0, 0, 0)}, and F(n, k) = G(n, k) = ∅ when k < 0 or k ≥ n. Second, when 0 ≤ k < n,

  F(n, k) = G(n − 1, k)(1, 0, 0) ∪ F(n − 1, k − 1)S , and
  G(n, k) = G(n − 1, k)(0, 0, 0) ∪ F(n − 1, k − 1)T .

Here, juxtaposition denotes concatenation.

From this theorem, it can be seen that there are 8(n − 1) linear approximations u ←carry− v, w with correlation ±1/2. In the notation of formal languages, these are the 8 approximations of the form

  0^{n−2}1a ←carry− 0^{n−2}0b, 0^{n−2}0c

for arbitrary a, b, c ∈ {0, 1}, and the 8(n − 2) approximations of the form

  0^{n−i−3}1d0^i g ←carry− 0^{n−i−3}0e0^i 0, 0^{n−i−3}0f0^i 0

for (d, e, f) ∈ {(0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1)}, g ∈ {0, 1} and i ∈ {0, . . . , n − 3}.

The recursive description in Theorem 3 can easily be used to generate all linear approximations with a given correlation. The straightforward algorithm uses O(n) space and is linear-time in the number of generated approximations. Clearly, this immediately generalises to the case where one or two of the selection vectors are fixed. By Lemma 7, it also generalises to addition and subtraction modulo 2^n.

Corollary 2. The set of linear approximations with correlation ±2^{−k} of the carry function, addition, or subtraction modulo 2^n can be generated in optimal time (that is, linear in the size of the output) and O(n) space in the RAM model (by straightforward application of the recurrence in Theorem 3 and the expressions in Lemma 7). Moreover, one or two of the selection vectors can be optionally fixed.

Theorem 3 can also be used to determine the distribution of the correlation coefficients.
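The recursion of Theorem 3 is easy to implement and cross-check against a brute-force computation of the correlation coefficients. The Python sketch below is illustrative, not the author's reference implementation: approximations are represented as integer triples (u, v, w) with the recursion appending bits at the least significant end, and the carry word is assumed to be carry(x, y) = x ⊕ y ⊕ (x + y mod 2^n).

```python
from itertools import product
from math import comb

S = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
T = [(0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
ALL = list(product((0, 1), repeat=3))

def app(approx, triples):
    # juxtaposition: append (u0, v0, w0) as new least significant bits
    return {(2*u + a, 2*v + b, 2*w + c)
            for (u, v, w) in approx for (a, b, c) in triples}

def F(n, k):
    if k < 0 or k >= n:
        return set()
    if n == 1:
        return {(1, 0, 0)}
    return app(G(n - 1, k), [(1, 0, 0)]) | app(F(n - 1, k - 1), S)

def G(n, k):
    if k < 0 or k >= n:
        return set()
    if n == 1:
        return {(0, 0, 0)}
    return app(G(n - 1, k), [(0, 0, 0)]) | app(F(n - 1, k - 1), T)

def A(n, k):
    # Theorem 3: A(n, k) = F(n-1, k-1)(F2 x F2 x F2) u G(n-1, k){(0,0,0),(1,0,0)}
    return app(F(n - 1, k - 1), ALL) | app(G(n - 1, k), [(0, 0, 0), (1, 0, 0)])

def dot(a, b):
    return bin(a & b).count("1") & 1

def correlation(u, v, w, n):
    # C(u <-carry- v, w), with carry(x, y) = x XOR y XOR (x + y mod 2^n)
    mask = (1 << n) - 1
    s = 0
    for x in range(1 << n):
        for y in range(1 << n):
            c = (x ^ y ^ (x + y)) & mask
            s += (-1) ** (dot(u, c) ^ dot(v, x) ^ dot(w, y))
    return s / 4**n

for n in (2, 3):
    for k in range(n):
        brute = {(u, v, w) for u, v, w in product(range(1 << n), repeat=3)
                 if abs(correlation(u, v, w, n)) == 2.0**-k}
        assert A(n, k) == brute
        assert len(A(n, k)) == 2**(2*k + 1) * comb(n - 1, k)
```

For n = 3 the check confirms, among other things, the 8(n − 1) = 16 approximations with correlation ±1/2 described above.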
Johan Wallén
Corollary 3. Let N(n, k) = |{(u, v, w) ∈ (F_2^n)^3 | C(u ←carry− v, w) = ±2^{−k}}|. Then N(n, k) = 2^{2k+1} (n−1 choose k) for all 0 ≤ k < n, and N(n, k) = 0 otherwise. Thus, the number of linear approximations with non-zero correlation is 2 · 5^{n−1}.

Proof. Based on Theorem 3, it is easy to see that

  N(n, k) = 0                                if k < 0 or k ≥ n,
            2                                if n = 1 and k = 0, and
            4N(n − 1, k − 1) + N(n − 1, k)   otherwise.

The claim clearly holds for n = 1. By induction,

  N(n, k) = 4N(n − 1, k − 1) + N(n − 1, k) = 4 · 2^{2(k−1)+1} (n−2 choose k−1) + 2^{2k+1} (n−2 choose k) = 2^{2k+1} (n−1 choose k).

Finally, Σ_{k=0}^{n−1} N(n, k) = 2 Σ_{k=0}^{n−1} (n−1 choose k) 4^k = 2 · 5^{n−1}.

If we let X be a random variable with the distribution

  Pr[X = k] = Pr_{u,v,w}[ −log_2 |C(u ←carry− v, w)| = k | C(u ←carry− v, w) ≠ 0 ],

we see that

  Pr[X = k] = (n−1 choose k) (4/5)^k (1/5)^{n−1−k}

for all 0 ≤ k < n, since 2^{2k+1} (n−1 choose k) / (2 · 5^{n−1}) = (n−1 choose k) (4/5)^k (1/5)^{n−1−k}. Thus, X is binomially distributed with mean (4/5)(n − 1) and variance (4/25)(n − 1).
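A quick numerical check of Corollary 3 and the binomial distribution of X (a Python sketch; the recurrence and closed form are those stated in the text):

```python
from math import comb

def N(n, k):
    # recurrence from the proof of Corollary 3
    if k < 0 or k >= n:
        return 0
    if n == 1:
        return 2
    return 4 * N(n - 1, k - 1) + N(n - 1, k)

for n in range(1, 12):
    for k in range(n):
        assert N(n, k) == 2**(2*k + 1) * comb(n - 1, k)
    assert sum(N(n, k) for k in range(n)) == 2 * 5**(n - 1)

# the normalised distribution is Binomial(n - 1, 4/5)
n = 10
p = [N(n, k) / (2 * 5**(n - 1)) for k in range(n)]
mean = sum(k * pk for k, pk in enumerate(p))
var = sum((k - mean)**2 * pk for k, pk in enumerate(p))
assert abs(mean - 4/5 * (n - 1)) < 1e-9
assert abs(var - 4/25 * (n - 1)) < 1e-9
```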
6 Conclusions
In this paper, we have considered improved algorithms for several combinatorial problems related to linear approximations of addition modulo 2^n. Our approach might seem unnecessarily complicated considering the surprising simplicity of the results (especially Theorem 3), but it should lead to natural generalisations to other recursively defined functions. These generalisations and their applications to block ciphers are, however, left to later papers. A reference implementation of the algorithms is available from the author.
Acknowledgements

This work was supported by the Finnish Defence Forces Research Institute of Technology.
References

1. Kazumaro Aoki, Kunio Kobayashi, and Shiho Moriai. Best differential characteristic search for FEAL. In Fast Software Encryption 1997, volume 1267 of LNCS, pages 41–53. Springer-Verlag, 1997.
2. Eli Biham and Adi Shamir. Differential Cryptanalysis of the Data Encryption Standard. Springer-Verlag, 1993.
3. Florent Chabaud and Serge Vaudenay. Links between differential and linear cryptanalysis. In Advances in Cryptology – Eurocrypt 1994, volume 950 of LNCS, pages 356–365. Springer-Verlag, 1995.
4. Joan Daemen. Cipher and Hash Function Design: Methods Based on Linear and Differential Cryptanalysis. PhD thesis, Katholieke Universiteit Leuven, March 1995.
5. E.L. Lawler and D.E. Wood. Branch-and-bound methods: a survey. Operations Research, 14(4):699–719, 1966.
6. Helger Lipmaa. On differential properties of Pseudo-Hadamard transform and related mappings. In Progress in Cryptology – Indocrypt 2002, volume 2551 of LNCS, pages 48–61. Springer-Verlag, 2002.
7. Helger Lipmaa and Shiho Moriai. Efficient algorithms for computing differential properties of addition. In Fast Software Encryption 2001, volume 2355 of LNCS, pages 336–350. Springer-Verlag, 2002.
8. Mitsuru Matsui. Linear cryptanalysis method for DES cipher. In Advances in Cryptology – Eurocrypt 1993, volume 765 of LNCS, pages 386–397. Springer-Verlag, 1993.
9. Mitsuru Matsui. On correlation between the order of S-boxes and the strength of DES. In Advances in Cryptology – Eurocrypt 1994, volume 950 of LNCS, pages 366–375. Springer-Verlag, 1995.
10. Mitsuru Matsui. New structure of block ciphers with provable security against differential and linear cryptanalysis. In Fast Software Encryption 1996, volume 1039 of LNCS, pages 205–218. Springer-Verlag, 1996.
11. Hiroshi Miyano. Addend dependency of differential/linear probability of addition. IEICE Trans. Fundamentals, E81-A(1):106–109, 1998.
12. Kaisa Nyberg. Linear approximations of block ciphers. In Advances in Cryptology – Eurocrypt 1994, volume 950 of LNCS, pages 439–444. Springer-Verlag, 1995.
13. Serge Vaudenay. Provable security for block ciphers by decorrelation. In STACS 1998, volume 1373 of LNCS, pages 249–275. Springer-Verlag, 1998.
Block Ciphers and Systems of Quadratic Equations

Alex Biryukov and Christophe De Cannière

Katholieke Universiteit Leuven, Dept. ESAT/SCD-COSIC, Kasteelpark Arenberg 10, B–3001 Leuven-Heverlee, Belgium
{alex.biryukov,christophe.decanniere}@esat.kuleuven.ac.be

Abstract. In this paper we compare the systems of multivariate polynomials which completely define the block ciphers Khazad, Misty1, Kasumi, Camellia, Rijndael and Serpent, in view of the potential danger of an algebraic re-linearization attack.

Keywords: Block ciphers, multivariate quadratic equations, linearization, Khazad, Misty, Camellia, Rijndael, Serpent.
1 Introduction
Cryptanalysis of block ciphers has received much attention from the cryptographic community in the last decade and as a result several powerful methods of analysis (for example, differential and linear attacks) have emerged. What most of these methods have in common is an attempt to push statistical patterns through as many iterations (rounds) of the cipher as possible, in order to measure non-random behavior at the output, and thus to distinguish a cipher from a truly random permutation. A new generation of block-ciphers (among them the Advanced Encryption Standard (AES) Rijndael) was constructed with these techniques in mind and is thus not vulnerable to (at least a straightforward application of) these attacks. The task of designing ciphers immune to these statistical attacks is made easier by the fact that the complexity of the attacks grows exponentially with the number of rounds of a cipher. This ensures that the data and the time requirements of the attacks quickly become impractical. A totally different generic approach is studied in a number of recent papers [5, 7], which attempt to exploit the simple algebraic structure of Rijndael. These papers present two related ways of constructing simple algebraic equations that completely describe Rijndael. The starting point is the fact that the only non-linear element of the AES cryptosystem, the S-box, is based on an inverse
The work described in this paper has been supported in part by the Commission of the European Communities through the IST Programme under Contract IST-1999-12324 and by the Concerted Research Action (GOA) Mefisto. F.W.O. Research Assistant, sponsored by the Fund for Scientific Research – Flanders (Belgium).
T. Johansson (Ed.): FSE 2003, LNCS 2887, pp. 274–289, 2003. © International Association for Cryptologic Research 2003
function (chosen for its optimal differential and linear properties). This allows one to find a small set of quadratic multivariate polynomials in input and output bits that completely defines the S-box. Combining these equations, an attacker can easily write a small set of sparse quadratic equations (in terms of intermediate variables) that completely defines the whole block cipher. Building on recent progress in re-linearization techniques [4, 8], which provide sub-exponential algorithms to solve over-defined systems of quadratic (or just low-degree) equations, Courtois and Pieprzyk [5] argue that a method called XSL might provide a way to effectively solve this type of equations and recover the key from a few plaintext-ciphertext pairs. The claimed attack method differs in several respects from the standard statistical approaches to cryptanalysis: (a) it requires only a few known-plaintext queries; (b) its complexity does not seem to grow exponentially with the number of rounds of a cipher. So far, however, no practical attack of this type has been demonstrated, even on a small-scale example. Research on such attacks is still at a very early stage; the exact complexity of this method is not completely understood and many questions concerning its applicability remain to be answered. In this paper we will not try to derive a full attack or calculate complexities. Our intention is merely to compare the expected susceptibility of different block ciphers to a hypothetical algebraic attack over GF(2) and GF(2^8). For this purpose we will construct systems of equations for the 128-bit key ciphers¹ Khazad [3], Misty1 [9], Kasumi [10], Camellia-128 [2], Rijndael-128 [6] and Serpent-128 [1] and compute some properties that might influence the complexity of solving them.
2 Constructing Systems of Equations
The problem we are faced with is to build a system of multivariate polynomials which relates the key bits with one or two (in the case of the 64-bit block ciphers) plaintext-ciphertext pairs and which is as simple as possible. The main issue here is that we have to define what we understand by simple. Since we do not know the most efficient way of solving such systems of equations, our simplicity criterion will be based on some intuitive assumptions:

1. Minimize the total number of free terms (free monomials). This is the number of terms that remain linearly independent when considering the system as a linear system in monomials. For example, adding two linearly independent equations which introduce only one new monomial will reduce the number of free terms by one. In order to achieve this, we will try to:
   (a) Minimize the degree of the equations. This reduces the total number of possible monomials.
   (b) Minimize the difference between the total number of terms and the total number of (linearly independent) equations. This is motivated by the fact that each equation can be used to eliminate a term.

¹ For ciphers which allow different key sizes, we will denote the 128-bit key version by appending “128” to the name of the cipher.
276
Alex Biryukov and Christophe De Canni`ere
2. Minimize the size of individual equations. This criterion arises from the observation that sparse systems are usually easier to solve. Note that point 1 already assures the “global sparseness” of the system and that point 2 adds some local sparseness if it is possible. Another criterion, which is used in [8] and [4], is to minimize the ratio between the total number of terms and the number of equations. This is equivalent to the criterion above when the system involves all terms up to a certain degree (as would be the case for a random quadratic system, for example). We believe, however, that this criterion is less natural in cases where the number of terms can be reduced, which is the case for the systems considered in this paper. The most straightforward way of constructing a system of equations for a block cipher is to derive equations for each individual component and to insert them in a single system. In the next subsections we will briefly discuss the contribution of each component.
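The free-term count of criterion 1 is plain linear algebra over GF(2): collect the distinct monomials, view each equation as a 0/1 coefficient vector over them, and subtract the rank of the system from the number of monomials. A small illustrative Python sketch (the toy equations are hypothetical and not taken from any of the ciphers below):

```python
def rank_gf2(rows):
    # Gaussian elimination over GF(2) on integer bitmask rows
    pivots = {}
    rank = 0
    for r in rows:
        while r:
            hb = r.bit_length() - 1
            if hb in pivots:
                r ^= pivots[hb]
            else:
                pivots[hb] = r
                rank += 1
                break
    return rank

def free_terms(equations):
    # each equation is a set of monomials; a monomial is a tuple of variable names
    monomials = sorted(set().union(*equations))
    index = {m: i for i, m in enumerate(monomials)}
    rows = [sum(1 << index[m] for m in eq) for eq in equations]
    return len(monomials) - rank_gf2(rows)

# three equations over 4 monomials, but the third is the sum of the first two,
# so the rank is 2 and two free terms remain
eqs = [
    {("1",), ("x0",), ("x0", "x1")},
    {("x0",), ("x1",)},
    {("1",), ("x1",), ("x0", "x1")},
]
assert free_terms(eqs) == 2
```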
2.1 S-Boxes
In most block ciphers, the S-boxes are the only source of nonlinearity, and the equations describing them will be the main obstacle that prevents the system from being easily solved. For any S-box of practical size, one can easily generate a basis of linearly independent multivariate polynomials that spans the space of all possible equations between the input and the output bits. This is illustrated for a small example in Appendix A.1. In this space we would like to find a set of equations that is as simple as possible (according to our criterion), but still completely defines the S-box. In some cases, this optimal set of equations might be an over-defined system². Performing an exhaustive search over all possible sets of equations is infeasible, even for small S-boxes. In this paper, we will therefore restrict our search to systems consisting only of equations from the basis. It appears that this restriction still produces sufficiently simple systems for small S-boxes, although the results rapidly deteriorate when the size of the S-boxes increases. Fortunately, many large S-boxes used in practice are derived from simple algebraic functions, and this usually directly leads to simple polynomial systems (see Sect. 3.2, for example). Nothing guarantees, however, that these systems are optimal, and the results derived in this paper should therefore be considered as an approximation only. An efficient way of finding optimal systems for arbitrary S-boxes is still an interesting open problem.
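The basis generation mentioned above can be sketched as follows: evaluate every monomial of degree at most 2 in the input and output bits at all S-box inputs, and compute the nullspace of the resulting GF(2) matrix. For an m-bit S-box there are 1 + 2m + C(2m, 2) such monomials but only 2^m constraints, so for m = 4 at least 37 − 16 = 21 independent quadratic equations always exist — matching the counts quoted later for the Khazad and Serpent S-boxes. An illustrative Python sketch; the 4-bit permutation used here is hypothetical, not one of the ciphers' actual S-boxes:

```python
from itertools import combinations

def num_quadratic_relations(sbox, m):
    """Number of linearly independent quadratic equations relating
    the input bits x_0..x_{m-1} and output bits y_0..y_{m-1}."""
    nvar = 2 * m
    monomials = [()] + [(i,) for i in range(nvar)] + list(combinations(range(nvar), 2))
    rows = []
    for x in range(1 << m):
        y = sbox[x]
        z = [(x >> i) & 1 for i in range(m)] + [(y >> i) & 1 for i in range(m)]
        row = 0
        for j, t in enumerate(monomials):
            if all(z[i] for i in t):   # empty product is the constant 1
                row |= 1 << j
        rows.append(row)
    # nullspace dimension = number of monomials - rank over GF(2)
    pivots, rank = {}, 0
    for r in rows:
        while r:
            hb = r.bit_length() - 1
            if hb in pivots:
                r ^= pivots[hb]
            else:
                pivots[hb] = r
                rank += 1
                break
    return len(monomials) - rank

# hypothetical 4-bit permutation, for illustration only
sbox = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
        0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]
assert num_quadratic_relations(sbox, 4) >= 21   # 37 monomials, at most 16 constraints
```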
² In this paper, we do not consider “over-definedness” to be a criterion in itself. The reason is that it is not clear whether an over-defined system with a lot of free terms should be preferred over a smaller, defined system with fewer free terms. We note, however, that the systems of all S-boxes studied below can easily be made over-defined, should the solving algorithm require it.
2.2 FL-Blocks
Both Misty1 and Camellia-128 contain an additional nonlinear component called the FL-block. It is a function of an input word X and a key word K, defined as

  Y_R = X_R ⊕ ((X_L ∩ K_L) ≪ s)   (1)
  Y_L = X_L ⊕ (Y_R ∪ K_R)          (2)

with X, Y and K 2w-bit words. The constant s is 0 for Misty1 and 1 for Camellia-128, and the word size w is 16 and 32 for Misty1 and Camellia-128, respectively. The definition above can directly be translated into a system of quadratic equations over GF(2):

  y_{R,i} = x_{R,i} + x_{L,j} · k_{L,j}                       (3)
  y_{L,i} = x_{L,i} + y_{R,i} + k_{R,i} + y_{R,i} · k_{R,i}   (4)

for

  0 ≤ i < w ,         (5)
  j = i − s mod w .   (6)
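As a sanity check, the bitwise system (3)–(6) can be verified against the word-level definition (1)–(2). A small Python sketch with an illustrative word size (the actual cipher parameters are w = 16, s = 0 for Misty1 and w = 32, s = 1 for Camellia-128):

```python
def fl_block(XL, XR, KL, KR, w, s):
    # word-level FL-block: YR = XR xor ((XL and KL) <<< s), YL = XL xor (YR or KR)
    mask = (1 << w) - 1
    rotl = lambda x, r: ((x << r) | (x >> (w - r))) & mask if r else x
    YR = XR ^ rotl(XL & KL, s)
    YL = XL ^ (YR | KR)
    return YL, YR

def bit(x, i):
    return (x >> i) & 1

w, s = 8, 1   # illustrative small parameters
for XL in (0x00, 0x5A, 0xFF):
    for XR in (0x0F, 0xA3):
        for KL, KR in ((0x3C, 0xC3), (0xFF, 0x00)):
            YL, YR = fl_block(XL, XR, KL, KR, w, s)
            for i in range(w):          # equations (3)-(6) over GF(2)
                j = (i - s) % w
                assert bit(YR, i) == bit(XR, i) ^ (bit(XL, j) & bit(KL, j))
                assert bit(YL, i) == (bit(XL, i) ^ bit(YR, i) ^ bit(KR, i)
                                      ^ (bit(YR, i) & bit(KR, i)))
```

The second assertion uses the GF(2) identity a ∪ b = a + b + a·b, which is exactly where the 2w quadratic terms of the system come from.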
What should be remembered is that the 2w nonlinear equations of this system contain 6w linear and 2w quadratic terms.

2.3 Linear Layers
The linear layers of a block cipher consist of both linear diffusion layers and key additions. Writing a system of equations for these layers is very straightforward. The number of equations and the number of terms in such a linear system are fixed and the only property we can somewhat control is the density of the individual equations, for example by combining equations that have many terms in common. In cases where a linear layer only consists of bit permutations, we will not insert separate equations, but just rename the variables accordingly.
3 Comparison of Block Ciphers
The block ciphers we intend to compare are Khazad, Misty1, Kasumi, Camellia-128, Rijndael-128 and Serpent-128. The characteristics of these ciphers are summarized in Table 1 and Table 2. For each of them we will construct a system of equations as described in the previous section, and compute the total number of terms and the number of equations. The final results are listed in Table 3.
Table 1. Characteristics of the ciphers (without key schedule).

                   Khazad   Misty1           Camellia-128  Rijndael-128  Serpent-128
Block size         64 bit   64 bit           128 bit       128 bit       128 bit
Key size           128 bit  128 bit          128 bit       128 bit       128 bit
Rounds             8        8                18            10            32
S-boxes            384/64   24 + 48          144           160           1024
S-box width        4/8 bit  7 bit and 9 bit  8 bit         8 bit         4 bit
Nonlinear order^a  3/7      3 and 2          7             7             2
FL-blocks          -        10               4             -             -
FL-block width     -        32 bit           64 bit        -             -

^a The non-linear order of an S-box is the minimal degree required to express a linear combination of output bits as a polynomial function of input bits.

Table 2. Characteristics of the key schedules.

                   Khazad   Misty1           Camellia-128  Rijndael-128  Serpent-128
Key size           128 bit  128 bit          128 bit       128 bit       128 bit
Expanded key       576 bit  256 bit          256 bit       1408 bit      4224 bit
S-boxes            432/72   8 + 16           32            40            1056
S-box width        4/8 bit  7 bit and 9 bit  8 bit         8 bit         4 bit
3.1 KHAZAD

Khazad [3] is a 64-bit block cipher designed by P. Barreto and V. Rijmen, which accepts a 128-bit key. It consists of an 8-round SP-network that makes use of an 8-bit S-box and a 64-bit linear transformation based on an MDS code. The 8-bit S-box is in turn composed of three layers of two 4-bit S-boxes called P and Q, connected by bit-permutations. The key schedule is a 9-round Feistel iteration based on the same round function as in the cipher itself.

For both P and Q we can generate 21 linearly independent quadratic equations in 36 terms (excluding '1'). As discussed before, we want to find a subset of these equations which forms a system that is as simple as possible. A possible choice can be found in Appendix A. The system describing P consists of 4 equations in 16 terms. For Q, the best set we found is an over-defined system of 6 equations in 18 terms.

Constructing the System

Cipher Variables. The variables of the system are the inputs and outputs of the 4-bit S-boxes. After combining six 4-bit S-boxes into an 8-bit S-box and renaming the variables according to the bit-permutations, we obtain a system in 32 variables. This is repeated for each of the 64 8-bit S-boxes.

Linear Equations. The S-box layers are connected to each other and to the (known) plaintext and ciphertext by 9 64-bit linear layers, each including a key addition. Their contributions in the system are 9 times 64 linear equations.
Nonlinear Equations. The only nonlinear equations are the S-box equations. For each 8-bit S-box we obtain a system of 30 equations in 32 linear and 54 quadratic terms. Note that a better system may be obtained if one optimizes the sets of equations used in the neighboring 4-bit S-boxes, trying to maximize the number of common terms between the two S-boxes.

Key Schedule

Variables. In order to take the key schedule into account, we need to include extra variables for the first 64 bits of the key (called K_{−2}) and for the inputs and outputs of the 4-bit S-boxes in the key schedule (which will include the 64 variables for K_{−1}).

Linear Equations. The linear layers between the 9 S-box layers in the key schedule define 8 times 64 linear equations.

Nonlinear Equations. Again, the nonlinear equations are just the S-box equations.

The number of equations and terms in the final system are listed in Table 3. Note that we need to build a system for two different 64-bit plaintext-ciphertext pairs in order to be able to solve the system for the 128-bit key.

3.2 MISTY1 and KASUMI
Misty1 [9] is a 64-bit block and 128-bit key block cipher designed by M. Matsui. It is an 8-round Feistel network based on a 32-bit nonlinear function. Additionally, each pair of rounds is separated by a layer of two 32-bit FL-blocks. The nonlinear function itself has a structure that shows some resemblance to a Feistel network and contains three 7-bit S-boxes S7 and six 9-bit S-boxes S9. The key schedule derives an additional 128-bit key K′ from the original key K by applying a circular function which contains 8 instances of S7 and 16 of S9.

The 9-bit S-box S9 is designed as a system of 9 quadratic equations in 54 terms (derived from the function y = x^5 in GF(2^9)), and this is probably the best system according to our criterion. The 7-bit S-box S7 is defined by 7 cubic equations in 65 terms (see Appendix A), but it appears that its input and output bits can also be related by a set of 21 quadratic equations in 105 terms. In this set, our search program is able to find a subset of 11 equations in 93 terms which completely defines the S-box.

The substitution S7 was not selected at random, however, as it was designed to be linearly equivalent to the function y = x^81 in GF(2^7). This information allows us to derive a much simpler system of quadratic equations: raising the previous expression to the fifth power, we obtain y^5 = x^405 = x^24, which can be written as y · y^4 = x^8 · x^16. This last equation is quadratic, since y^4, x^8 and x^16 are linear functions in GF(2^7). Next, we express x and y as linear functions of the input and output bits of S7 and obtain a system of 7 quadratic equations in
56 terms over GF(2)³. This is a considerable improvement compared to what our naive search program found.

Kasumi [10] is a 64-bit block and 128-bit key block cipher derived from Misty1. It is built from the same components and uses S-boxes which are based on the same power functions and are equivalent up to an affine transform. Its structure is slightly different: relevant for the derivation of the equations is the fact that it uses 24 more instances of the 7-bit S-boxes, that the placement of its 8 FL-blocks is different, and that the key schedule is completely linear.

Constructing the System for MISTY1

Cipher Variables. The variables of the system are chosen to be the inputs and outputs of the S-boxes, the output bits of the first pair of FL-blocks, both the input and the output bits of the 6 inner FL-blocks, and the input of the last pair of FL-blocks. We expect, however, that the variables for some of the output bits of the first FL-blocks can be replaced by constants, as about 3/8 of them will only depend on the (known) plaintext bits. This is also true for the output bits of the last FL-blocks.

Linear Equations. These include all linear relations between different S-boxes and between FL-blocks and S-boxes. Moreover, 3/8 of the equations of the outer FL-blocks will turn out to be linear. The total number of linear equations can be found in Table 3.

Nonlinear Equations. Apart from the nonlinear equations of the S-boxes, we should also include the equations of the inner FL-blocks and the equations of the outer FL-blocks that remained nonlinear (about 1/4).

Key Schedule

Variables. The additional variables in the key schedule are the input and output bits of the S-boxes (which include K) and the 128 bits of K′.

Linear Equations. The linear equations are formed by the additions after each S-box.

Nonlinear Equations. These are just the S-box equations.

In the same manner, we can derive a system for Kasumi.
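The power-function identities behind the S7 system are easy to verify by direct computation in GF(2^7). The Python sketch below uses the irreducible polynomial x^7 + x + 1 as an illustrative field representation — Misty1's actual basis choice is not specified here, but since 2^7 − 1 = 127 is prime, the exponent identities hold in any representation of GF(2^7):

```python
MOD = 0x83  # x^7 + x + 1, irreducible over GF(2)

def gmul(a, b):
    # carry-less multiplication modulo x^7 + x + 1
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x80:
            a ^= MOD
    return r

def gpow(a, e):
    r = 1
    for _ in range(e):
        r = gmul(r, a)
    return r

for x in range(128):
    y = gpow(x, 81)                      # S7 core function: y = x^81
    assert gpow(y, 5) == gpow(x, 24)     # y^5 = x^405 = x^24
    # the quadratic form of the same identity: y * y^4 = x^8 * x^16
    assert gmul(y, gpow(y, 4)) == gmul(gpow(x, 8), gpow(x, 16))
    # one of the extra relations mentioned in footnote 3: x^64 * y = x^2 * x^16
    assert gmul(gpow(x, 64), y) == gmul(gpow(x, 2), gpow(x, 16))
```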
The final results are listed in Table 3, and here again we need two plaintext-ciphertext pairs if we want to be able to extract the 128-bit key.

3.3 CAMELLIA-128
The 128-bit block cipher Camellia-128 [2] was designed by K. Aoki et al. It is based on an 18-round Feistel network with two layers of two 64-bit FL-blocks
³ Fourteen more quadratic equations can be derived from the relations x^64 · y = x^2 · x^16 and x · y^64 = y^2 · y^4, but these would introduce many new terms.
after the 6th and the 12th round. The 64-bit nonlinear function used in the Feistel iteration is composed of a key addition, a nonlinear layer of eight 8-bit S-boxes and a linear transform. An additional 128-bit key K_A is derived from the original key K_L by applying four Feistel iterations during the key scheduling.

Camellia-128 uses four different S-boxes which are all equivalent to an inversion over GF(2^8) up to an affine transform. As pointed out in [5], the input and output bits of such S-boxes are related by 39 quadratic polynomials in 136 terms. By taking the subset of equations that does not contain any product of two input or two output bits, one can build a system of 23 quadratic equations in 80 terms (excluding the term '1'), which is probably the optimal system.

Constructing the System

Cipher Variables. The variables of the system will be the inputs and outputs of all S-boxes and FL-blocks.

Linear Equations. All linear layers between different S-boxes and between FL-blocks and S-boxes will insert linear equations in the system. Their exact number is given in Table 3.

Nonlinear Equations. The only nonlinear equations are the equations of the S-box and the FL-block.

Key Schedule

Variables. The key schedule adds variables for the inputs and outputs of its 32 S-boxes and for K_L and K_A.

Linear Equations. These are all linear layers between different S-boxes or between an S-box and the bits of K_L and K_A.

Nonlinear Equations. Again, the only nonlinear equations included in the system are the S-box equations.

3.4 RIJNDAEL-128
Rijndael-128 [6] is a 128-bit block cipher designed by J. Daemen and V. Rijmen and has been adopted as the Advanced Encryption Standard (AES). It is a 10-round SP-network containing a nonlinear layer based on an 8-bit S-box. The linear diffusion layer is composed of a byte permutation followed by 4 parallel 32-bit linear transforms. The 128-bit key is expanded recursively by a key scheduling algorithm which contains 40 S-boxes. The 8-bit S-box is an affine transformation of the inversion in GF(2^8), and as mentioned previously, it is completely defined by a system of 23 quadratic equations in 80 terms.
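The count of 23 bi-affine equations for an inversion-based S-box can be reproduced with a nullspace computation over GF(2), restricted to the monomials 1, x_i, y_j and x_i·y_j (1 + 8 + 8 + 64 = 81 terms, i.e. the "80 terms excluding '1'" of the text). An illustrative Python sketch for the raw inversion map; the AES polynomial x^8 + x^4 + x^3 + x + 1 is used for concreteness, and the count is unaffected by the ciphers' affine input/output transforms:

```python
def gmul(a, b):
    # multiplication in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1 (0x11B)
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
    return r

def ginv(a):
    # a^254 = a^(-1) for a != 0; maps 0 to 0
    r = 1
    for _ in range(254):
        r = gmul(r, a)
    return r

rows = []
for x in range(256):
    y = ginv(x)
    xb = [(x >> i) & 1 for i in range(8)]
    yb = [(y >> i) & 1 for i in range(8)]
    bits = [1] + xb + yb + [xi & yj for xi in xb for yj in yb]  # 81 monomials
    rows.append(sum(b << i for i, b in enumerate(bits)))

pivots, rank = {}, 0
for r in rows:
    while r:
        hb = r.bit_length() - 1
        if hb in pivots:
            r ^= pivots[hb]
        else:
            pivots[hb] = r
            rank += 1
            break

assert 81 - rank == 23   # 23 independent bi-affine equations, as stated in the text
```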
Constructing the System

Cipher Variables. As variables we choose the input and output bits of each of the 160 S-boxes.

Linear Equations. Each of the 11 linear layers (including the initial and the final key addition) corresponds to a system of 128 linear equations.

Nonlinear Equations. These are just the S-box equations.

Key Schedule

Variables. The variables in the key schedule are the inputs and outputs of the S-boxes (which include the bits of W_3) and the bits of W_0, W_1 and W_2.

Linear Equations. The linear equations are all linear relations that relate the input of the S-boxes to the output of other S-boxes or to W_0, W_1 and W_2.

Nonlinear Equations. The S-box equations.

3.5 SERPENT-128
Serpent-128 [1] is a 128-bit block cipher designed by R. Anderson, E. Biham and L. Knudsen. It is a 32-round SP-network with nonlinear layers consisting of 32 parallel 4-bit S-boxes and a linear diffusion layer composed of 32-bit rotations and XORs. In the key schedule, the key is first linearly expanded to 33 128-bit words, and then each of them is pushed through the same nonlinear layers as the ones used in the SP-network.

Serpent-128 contains 8 different S-boxes, and for each of them we can generate 21 linearly independent quadratic equations in 36 terms. The best subsets of equations we found allow us to build systems of 4 equations in 13 terms for S0, S1, S2 and S6. S4 and S5 can be described by a system of 5 equations in 15 terms, and S3 and S7 by a system of 5 equations in 16 terms. As an example, we included the systems for S0 and S1 in Appendix A.

Constructing the System

Cipher Variables and Equations. The system for the SP-network of Serpent-128 is derived in exactly the same way as for Rijndael-128. The results are shown in Table 3.

Key Schedule

Variables. The variables of the system are the inputs and outputs of the S-boxes, i.e. the bits of the linearly expanded key and the bits of the final round keys.

Linear Equations. These are the linear equations relating the bits of the linearly expanded key.

Nonlinear Equations. These are the S-box equations.
4 Equations in GF(2^8) for RIJNDAEL and CAMELLIA
In [7], S. Murphy and M. Robshaw point out that the special algebraic structure of Rijndael allows the cipher to be described by a very sparse quadratic system over GF(2^8). The main idea is to embed the original cipher within an extended cipher, called BES (Big Encryption System), by replacing each byte a by its 8-byte vector conjugate in GF(2^8):

  (a^{2^0}, a^{2^1}, a^{2^2}, a^{2^3}, a^{2^4}, a^{2^5}, a^{2^6}, a^{2^7})   (7)

The advantage of this extension is that it reduces all transformations in Rijndael to very simple GF(2^8) operations on 8-byte vectors, regardless of whether these transformations were originally described in GF(2^8) or in GF(2)^8. The parameters of the resulting system in GF(2^8) are listed in Table 4. Note that the results are in exact agreement with those given in [7], though the equations and terms are counted in a slightly different way.

The fact that Camellia uses the same S-box as Rijndael (up to a linear equivalence) suggests that a similar system can be derived for this cipher as well. The only complication is the construction of a system in GF(2^8) for the FL-blocks. This can be solved as follows: first we multiply each 8-byte vector at the input of the FL-blocks by an 8 × 8-byte matrix [b_{i,j}], where the b_{i,j} = x^{i·2^{j−1}} are elements of GF(2^8) written as polynomials. The effect of this multiplication is that vectors constructed according to (7) are mapped to an 8-byte vector consisting only of 0's and 1's, corresponding to the binary representation of the original byte a. This implies that we can reuse the quadratic equations given in Section 2.2, but this time interpreted as equations in GF(2^8). Finally, we return to vector conjugates by multiplying the results by the inverse matrix. Note that these additional linear transformations before and after the FL-blocks do not introduce extra terms or equations, as they can be combined with the existing linear layers of Camellia.
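The conjugate embedding (7) is easy to experiment with. The Python sketch below (using the AES polynomial x^8 + x^4 + x^3 + x + 1 for concreteness) illustrates why BES-style systems stay so simple: XOR and multiplication act componentwise on conjugate vectors, and the Frobenius map (squaring) is just a rotation of the vector:

```python
def gmul(a, b):
    # multiplication in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
    return r

def conj(a):
    # the 8-byte vector conjugate (a, a^2, a^4, ..., a^128) of equation (7)
    v = [a]
    for _ in range(7):
        v.append(gmul(v[-1], v[-1]))
    return v

for a in (0x00, 0x01, 0x57, 0xAE):
    for b in (0x13, 0x83):
        # addition (XOR) and multiplication are componentwise on conjugates
        assert conj(a ^ b) == [x ^ y for x, y in zip(conj(a), conj(b))]
        assert conj(gmul(a, b)) == [gmul(x, y) for x, y in zip(conj(a), conj(b))]
        # squaring rotates the conjugate vector (since a^256 = a in GF(2^8))
        assert conj(gmul(a, a)) == conj(a)[1:] + conj(a)[:1]
```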
Table 4 compares the resulting system with the one obtained for Rijndael and it appears that both systems have very similar sizes. We should note, however, that certain special properties of BES, for example the preservation of algebraic curves, do not hold for the extended version of Camellia, because of the F L-blocks.
5 Interpretation of the Results, Conclusions
In this section we analyze the results in Tables 3 and 4, which compare the systems of equations generated by the different ciphers. Each table is divided into three parts: the first describes the structure of the cipher itself, the second describes the structure of the key-schedule and the third part provides the total count. Each part shows the number of variables, the number of equations (we provide separate counts for non-linear and linear equations) and the number of terms (with separate counts for quadratic and linear terms). The bottom line
Table 3. A comparison of the complexities of the systems of equations in GF(2).

                   Khazad  Misty1  Kasumi  Camellia-  Rijndael-  Serpent-
                   (×2)    (×2)    (×2)    128        128        128
Cipher
  Variables        2048    1664    2004    2816       2560       8192
  Linear eqs.      576     904     1068    1536       1408       4224
  Nonlinear eqs.   1920    824     1000    3568       3680       4608
  Equations        2496    1728    2068    5104       5088       8832
  Linear terms     2048    1664    2004    2816       2560       8192
  Quadratic terms  3456    2960    3976    9472       10240      6400
  Terms            5504    4624    5980    12288      12800      14592
Key schedule
  Variables        2368    528     256     768        736        8448
  Linear eqs.      512     200     128     384        288        4096
  Nonlinear eqs.   2160    200     0       736        920        4752
  Equations        2672    400     128     1120       1208       8848
  Linear terms     2368    528     256     768        736        8448
  Quadratic terms  3888    912     0       2048       2560       6600
  Terms            6256    1440    256     2816       3296       15048
Total
  Variables        6464    3856    4264    3584       3296       16640
  Linear eqs.      1664    2008    2264    1920       1696       8320
  Nonlinear eqs.   6000    1848    2000    4304       4600       9360
  Equations        7664    3856    4264    6224       6296       17680
  Linear terms     6464    3856    4264    3584       3296       16640
  Quadratic terms  10800   6832    7952    11520      12800      13000
  Terms            17264   10688   12216   15104      16096      29640

Free terms         9600    6832    7952    8880       9800       11960
provides the number of free terms, which is the difference between the total number of terms and the total number of equations. Note that for 64-bit block ciphers we have to take two plaintext–ciphertext queries in order to be able to find a solution for the 128-bit key. For such ciphers the numbers provided in the "Cipher" part of the table have to be doubled in order to get the complete number of equations, terms and variables. Note, however, that one does not have to double the number of equations for the key schedule, which are the same for each encrypted block. The total for 64-bit block ciphers shows the proper total count: twice the numbers of the cipher part plus once the numbers from the key-schedule part.

After comparing the systems of equations for the different ciphers, we notice that the three ciphers in the 128-bit category (Camellia-128, Rijndael-128, Serpent-128) don't differ drastically in the number of free terms. Camellia-128 and Rijndael-128 in particular have very similar counts of variables, equations and terms, both in GF(2) and GF(2^8), which is in part explained by the fact that they use the same S-box and have an approximately equivalent number of rounds (Rijndael-128 has 10 SPN rounds and Camellia-128 has 18 Feistel rounds, which is "equivalent" to 9 SPN rounds). On the other hand, Serpent-128 has 3–4 times more SPN rounds than Camellia-128 and
Block Ciphers and Systems of Quadratic Equations
Table 4. A comparison of the complexities of the systems of equations in GF(2^8).

                    Camellia-128  Rijndael-128
Cipher
  Variables             2816          2560
  Linear eqs.           1536          1408
  Nonlinear eqs.        1408          1280
  Equations             2944          2688
  Linear terms          2816          2560
  Quadratic terms       1408          1280
  Terms                 4224          3840
Key schedule
  Variables              768           736
  Linear eqs.            384           288
  Nonlinear eqs.         256           320
  Equations              640           608
  Linear terms           768           736
  Quadratic terms        256           320
  Terms                 1024          1056
Total
  Variables             3584          3296
  Linear eqs.           1920          1696
  Nonlinear eqs.        1664          1600
  Equations             3584          3296
  Linear terms          3584          3296
  Quadratic terms       1664          1600
  Terms                 5248          4896
  Free terms            1664          1600
Rijndael-128, and this explains why its system of equations has approximately 3–4 times more variables (at least in the cipher part). The number of quadratic terms in Serpent-128 is much smaller due to the 4-bit S-boxes, and this keeps the number of free terms within the range of the other two ciphers. When one switches to GF(2^8) for Camellia-128 and Rijndael-128, the equations for the S-boxes are much sparser and the number of free terms is reduced. It is not clear, however, how to make a fair comparison between systems in GF(2) and GF(2^8). The number of free terms in Table 4 is several times smaller, but then again, working in a larger field might increase the complexity of the solving algorithm. Comparing Khazad and Misty1, both of which consist of 8 rounds, one may notice that Khazad has approximately twice as many variables and equations (due to the fact that we write equations for the intermediate layers of the S-box⁴). On the other hand, the number of quadratic terms per nonlinear equation is considerably higher for Misty1 due to its larger S-boxes.

⁴ There are no quadratic equations for the full 8-bit S-box of Khazad, nor for the original S-box (before the tweak). There are many cubic equations, which is true for any 8-bit S-box.
Finally, note that the results presented in this paper are very sensitive to the criteria used for the choice of the equations. Our main criterion was to minimize the number of free terms; for example, we did not aim to find the most overdefined systems of equations. At the time of this writing, one can only speculate what the criteria for a possible algebraic attack would be (if such an attack is found to exist at all).
Acknowledgements We wish to thank the anonymous referees, whose comments helped to improve this paper. We would also like to thank Jin Hong for pointing out an error in a previous version of this paper.
References

1. R. Anderson, E. Biham, and L. Knudsen, "Serpent: A proposal for the Advanced Encryption Standard." Available from http://www.cl.cam.ac.uk/~rja14/serpent.html.
2. K. Aoki, T. Ichikawa, M. Kanda, M. Matsui, S. Moriai, J. Nakajima, and T. Tokita, "Camellia: A 128-bit block cipher suitable for multiple platforms." Submission to NESSIE, Sept. 2000. Available from http://www.cryptonessie.org/workshop/submissions.html.
3. P. Barreto and V. Rijmen, "The KHAZAD legacy-level block cipher." Submission to NESSIE, Sept. 2000. Available from http://www.cryptonessie.org/workshop/submissions.html.
4. N. Courtois, A. Klimov, J. Patarin, and A. Shamir, "Efficient algorithms for solving overdefined systems of multivariate polynomial equations," in Proceedings of Eurocrypt'00 (B. Preneel, ed.), no. 1807 in Lecture Notes in Computer Science, pp. 392–407, Springer-Verlag, 2000.
5. N. Courtois and J. Pieprzyk, "Cryptanalysis of block ciphers with overdefined systems of equations," in Proceedings of Asiacrypt'02 (Y. Zheng, ed.), no. 2501 in Lecture Notes in Computer Science, Springer-Verlag, 2002. Earlier version available from http://www.iacr.org.
6. J. Daemen and V. Rijmen, "AES proposal: Rijndael." Selected as the Advanced Encryption Standard. Available from http://www.nist.gov/aes.
7. S. Murphy and M. Robshaw, "Essential algebraic structure within the AES," in Proceedings of Crypto'02 (M. Yung, ed.), no. 2442 in Lecture Notes in Computer Science, pp. 17–38, Springer-Verlag, 2002. NES/DOC/RHU/WP5/022/1.
8. A. Shamir and A. Kipnis, "Cryptanalysis of the HFE public key cryptosystem," in Proceedings of Crypto'99 (M. Wiener, ed.), no. 1666 in Lecture Notes in Computer Science, pp. 19–30, Springer-Verlag, 1999.
9. E. Takeda, "Misty1." Submission to NESSIE, Sept. 2000. Available from http://www.cryptonessie.org/workshop/submissions.html.
10. Third Generation Partnership Project, "3GPP KASUMI evaluation report," tech. rep., Security Algorithms Group of Experts (SAGE), 2001. Available from http://www.3gpp.org/TB/other/algorithms/KASUMI_Eval_rep_v20.pdf.
A Equations
A.1 Constructing a System – Example
This appendix illustrates how a set of linearly independent S-box equations can be derived for a small example. The S-box considered here is a 3-bit substitution (n = 3) defined by the following lookup table: [7, 6, 0, 4, 2, 5, 1, 3]. In order to find all linearly independent equations involving a particular set of terms (in this example all input and output bits x_i and y_j together with their products x_i y_j), we first construct a matrix containing a separate row for each term. Each row consists of 2^n entries, corresponding to the different values of the particular term for all possible input values (in this case from 0 to 7). The next step is to perform a Gaussian elimination on the rows of the matrix; all row operations required by this elimination are applied to the corresponding terms as well (see below). This way, a number of zero rows might appear, and it is easy to see that the expressions corresponding to these rows are exactly the equations we are looking for. Note also that this set of equations forms a linearly independent basis and that any linear combination is also a valid equation.
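The procedure can be sketched in C for this same 3-bit S-box: each term's row is packed into an 8-bit mask, one bit per S-box input, and elimination is done over GF(2). This is an illustrative sketch, not code from the paper; count_equations is a hypothetical name.

```c
/* Count the linearly independent equations of the 3-bit S-box
   [7,6,0,4,2,5,1,3] over the 16 terms 1, x0..x2, y0..y2, xi*yj.
   Each term gives one row of 2^3 = 8 bits: its value at each input.
   After Gaussian elimination over GF(2), every zero row is one equation. */
static int count_equations(void)
{
    static const int sbox[8] = {7, 6, 0, 4, 2, 5, 1, 3};
    unsigned char rows[16] = {0};
    int x, i, r, c, zero = 0;

    for (x = 0; x < 8; x++) {
        int y = sbox[x], t = 0;
        unsigned char bit = (unsigned char)(1 << x);
        rows[t++] |= bit;                                    /* term 1   */
        for (i = 0; i < 3; i++) if ((x >> i) & 1) rows[t + i] |= bit;
        t += 3;                                              /* x0..x2   */
        for (i = 0; i < 3; i++) if ((y >> i) & 1) rows[t + i] |= bit;
        t += 3;                                              /* y0..y2   */
        for (i = 0; i < 9; i++)                              /* xi * yj  */
            if (((x >> (i / 3)) & 1) && ((y >> (i % 3)) & 1))
                rows[t + i] |= bit;
    }
    /* Gaussian elimination over GF(2): one column per S-box input. */
    for (c = 0, r = 0; c < 8; c++) {
        int p = r;
        while (p < 16 && !((rows[p] >> c) & 1)) p++;
        if (p == 16) continue;
        { unsigned char tmp = rows[r]; rows[r] = rows[p]; rows[p] = tmp; }
        for (i = 0; i < 16; i++)
            if (i != r && ((rows[i] >> c) & 1)) rows[i] ^= rows[r];
        r++;
    }
    for (i = 0; i < 16; i++) if (rows[i] == 0) zero++;
    return zero;
}
```

For the S-box [7, 6, 0, 4, 2, 5, 1, 3] the elimination leaves 8 zero rows among the 16 terms, matching the worked matrix.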
The matrix of term values (rows labelled by the terms, one column per S-box input 0–7), its row-reduced form, and the expressions that keep track of the row operations:

  Term    Inputs 0..7         Reduced            Expression
  1       1 1 1 1 1 1 1 1     1 1 1 1 1 1 1 1    1
  x0      0 1 0 1 0 1 0 1     0 1 0 1 0 1 0 1    x0
  x1      0 0 1 1 0 0 1 1     0 0 1 1 0 0 1 1    x1
  x2      0 0 0 0 1 1 1 1     0 0 0 1 1 1 1 0    1 + x0 + x1 + y0
  y0      1 0 0 0 0 1 1 1     0 0 0 0 1 1 1 1    x2
  y1      1 1 0 0 1 0 0 1     0 0 0 0 0 1 0 1    1 + x1 + y1
  y2      1 1 0 1 0 1 0 0     0 0 0 0 0 0 1 1    1 + x0 + x1 + y0 + y1 + y2
  x0 y0   0 0 0 0 0 1 0 1     0 0 0 0 0 0 0 1    x0 + x0 y2
  x0 y1   0 1 0 0 0 0 0 1     0 0 0 0 0 0 0 0    x2 + y0 + y1 + x0 y1
  x0 y2   0 1 0 1 0 1 0 0     0 0 0 0 0 0 0 0    1 + x1 + y1 + x0 y0
  x1 y0   0 0 0 0 0 0 1 1     0 0 0 0 0 0 0 0    1 + x0 + x1 + y0 + y1 + y2 + x1 y0
  x1 y1   0 0 0 0 0 0 0 1     0 0 0 0 0 0 0 0    x0 + x0 y2 + x1 y1
  x1 y2   0 0 0 1 0 0 0 0     0 0 0 0 0 0 0 0    1 + x1 + x2 + y0 + x0 y2 + x1 y2
  x2 y0   0 0 0 0 0 1 1 1     0 0 0 0 0 0 0 0    y0 + y2 + x0 y2 + x2 y0
  x2 y1   0 0 0 0 1 0 0 1     0 0 0 0 0 0 0 0    x0 + x2 + y0 + y2 + x2 y1
  x2 y2   0 0 0 0 0 1 0 0     0 0 0 0 0 0 0 0    1 + x0 + x1 + y1 + x0 y2 + x2 y2

A.2 KHAZAD

P: 4 quadratic equations in 16 terms:

x0 y1 + y1 + x0 y2 + x0 y3 + x2 y3 = x0 + x2 + x3 + 1
x3 y1 + x0 y2 + y2 + x0 y3 = x0 + x1 + x2 + x3
x0 y1 + x3 y2 + y2 + x0 y3 + y3 = x2 + x3
y0 + x0 y2 + x0 y3 + y2 y3 + y3 = x0 x3 + x1 + 1
Q: 6 quadratic equations in 18 terms:

y0 + x0 y1 + y1 + x0 y2 = x0 x2 + x1 x2 + x3 + 1
x1 y0 + y1 + x0 y2 + y3 = x0 x1 + x0 x2 + x2 + x3 + 1
x1 y0 + y0 + x0 y1 + y2 + x3 y3 = x0 x1 + x0 + x2
y0 y1 + y0 + x0 y1 + x0 y2 + y3 = x0 x2 + x0 + x1 + x2 + x3
y0 y2 + y1 + y2 + y3 = x0 x2 + x1 + x2 + x3 + 1
y0 + y1 y2 + y1 + y2 = x0 x1 + x1 + x3 + 1
A.3 MISTY1
Table 5 shows the original systems of equations which completely define the S-boxes S7 and S9 of Misty1. We omit the quadratic system of 7 equations in 56 terms used for S7 in Section 3.2 because of its complexity.

A.4 SERPENT-128
S0: 4 quadratic equations in 13 terms:

y3 = x0 x3 + x0 + x1 + x2 + x3
y0 + y1 = x0 x1 + x1 x3 + x2 + x3
y0 y3 + y1 y3 + y1 = x0 + x2 + x3 + 1
y0 y3 + y0 + y2 + y3 = x0 x1 + x1 + x3 + 1

S1: 4 quadratic equations in 13 terms:

x3 = y0 + y1 y3 + y2 + y3
x0 + x1 = y0 y1 + y0 y3 + y0 + y2 + y3 + 1
x0 x3 + x1 x3 + x0 + x3 = y1 + y2 + y3 + 1
x0 x3 + x2 + x3 = y0 y3 + y1 + y3 + 1
Table 5. Misty1: S7 and S9.

S7: 7 cubic equations in 65 terms:

y6 = x0 x1 + x3 + x0 x3 + x2 x3 x4 + x0 x5 + x2 x5 + x3 x5 + x1 x3 x5 + x1 x6 + x1 x2 x6 + x0 x3 x6 + x4 x6 + x2 x5 x6
y5 = x0 + x1 + x2 + x0 x1 x2 + x0 x3 + x1 x2 x3 + x1 x4 + x0 x2 x4 + x0 x5 + x0 x1 x5 + x3 x5 + x0 x6 + x2 x5 x6
y4 = x2 x3 + x0 x4 + x1 x3 x4 + x5 + x2 x5 + x1 x2 x5 + x0 x3 x5 + x1 x6 + x1 x5 x6 + x4 x5 x6 + 1
y3 = x0 + x1 + x0 x1 x2 + x0 x3 + x2 x4 + x1 x4 x5 + x2 x6 + x1 x3 x6 + x0 x4 x6 + x5 x6 + 1
y2 = x1 x2 + x0 x2 x3 + x4 + x1 x4 + x0 x1 x4 + x0 x5 + x0 x4 x5 + x3 x4 x5 + x1 x6 + x3 x6 + x0 x3 x6 + x4 x6 + x2 x4 x6
y1 = x0 x2 + x0 x4 + x3 x4 + x1 x5 + x2 x4 x5 + x6 + x0 x6 + x3 x6 + x2 x3 x6 + x1 x4 x6 + x0 x5 x6 + 1
y0 = x0 + x1 x3 + x0 x3 x4 + x1 x5 + x0 x2 x5 + x4 x5 + x0 x1 x6 + x2 x6 + x0 x5 x6 + x3 x5 x6 + 1

S9: 9 quadratic equations in 54 terms:

y8 = x0 + x0 x1 + x1 x2 + x4 + x0 x5 + x2 x5 + x3 x6 + x5 x6 + x0 x7 + x0 x8 + x3 x8 + x6 x8 + 1
y7 = x1 + x0 x1 + x1 x2 + x2 x3 + x0 x4 + x5 + x1 x6 + x3 x6 + x0 x7 + x4 x7 + x6 x7 + x1 x8 + 1
y6 = x0 x1 + x3 + x1 x4 + x2 x5 + x4 x5 + x2 x7 + x5 x7 + x8 + x0 x8 + x4 x8 + x6 x8 + x7 x8 + 1
y5 = x2 + x0 x3 + x1 x4 + x3 x4 + x1 x6 + x4 x6 + x7 + x3 x7 + x5 x7 + x6 x7 + x0 x8 + x7 x8
y4 = x1 + x0 x3 + x2 x3 + x0 x5 + x3 x5 + x6 + x2 x6 + x4 x6 + x5 x6 + x6 x7 + x2 x8 + x7 x8
y3 = x0 + x1 x2 + x2 x4 + x5 + x1 x5 + x3 x5 + x4 x5 + x5 x6 + x1 x7 + x6 x7 + x2 x8 + x4 x8
y2 = x0 x1 + x1 x3 + x4 + x0 x4 + x2 x4 + x3 x4 + x4 x5 + x0 x6 + x5 x6 + x1 x7 + x3 x7 + x8
y1 = x0 x2 + x3 + x1 x3 + x2 x3 + x3 x4 + x4 x5 + x0 x6 + x2 x6 + x7 + x0 x8 + x3 x8 + x5 x8 + 1
y0 = x0 x4 + x0 x5 + x1 x5 + x1 x6 + x2 x6 + x2 x7 + x3 x7 + x3 x8 + x4 x8 + 1
Turing: A Fast Stream Cipher Gregory G. Rose and Philip Hawkes Qualcomm Australia, Level 3, 230 Victoria Rd Gladesville, NSW 2111, Australia {ggr,phawkes}@qualcomm.com
Abstract. This paper proposes the Turing stream cipher. Turing offers up to 256-bit key strength, and is designed for extremely efficient software implementation. It combines an LFSR generator based on that of SOBER [21] with a keyed mixing function reminiscent of a block cipher round. Aspects of the block mixer round have been derived from Rijndael [6], Twofish [23], tc24 [24] and SAFER++ [17].
1 Introduction
Turing (named after Alan Turing) is a stream cipher designed to simultaneously be:

– extremely fast in software on commodity PCs,
– usable in very little RAM on embedded processors, and
– able to exploit parallelism to enable fast hardware implementation.

The Turing stream cipher has a major component, the word-oriented Linear Feedback Shift Register (LFSR), which originated with the design of the SOBER family of ciphers [13, 14, 21]. Analyses of the SOBER family are found in [1–3, 11]. The efficient LFSR updating method is modelled after that of SNOW [9]. Turing combines the LFSR generator with a keyed mixing function reminiscent of a block cipher round. The S-box used in this mixing round is partially derived from the SOBER-t32 S-box [14]. Further aspects of this mixing function have been derived from Rijndael [6], Twofish [23], tc24 [24] and SAFER++ [17].

Turing is designed to meet the needs of embedded applications that place severe constraints on the amount of processing power, program space and memory available for software encryption algorithms. Since most of the mobile telephones in use incorporate a microprocessor and memory, a software stream cipher that is fast and uses little memory would be ideal for this application. Turing overcomes the inefficiency of binary LFSRs in a manner similar to SOBER and SNOW, and uses a number of techniques to greatly increase the generation speed of the pseudo-random stream in software on a general processor. Turing allows an implementation tradeoff between small memory use and very high speed using pre-computed tables. Reference source code showing small-memory, key-agile, and speed-optimized implementations is available at [22], along with a test harness with test vectors. The reference implementation (TuringRef.c) should be viewed as the definitive description of Turing.

T. Johansson (Ed.): FSE 2003, LNCS 2887, pp. 290–306, 2003.
© International Association for Cryptologic Research 2003
Turing has four components: key loading, initialisation vector (IV) loading, an LFSR, and a keyed non-linear filter (NLF). The key loading initializes the keyed S-boxes, and the IV loading initializes the LFSR. The LFSR and NLF then generate key stream in 160-bit blocks (see Figure 1). Five 32-bit words selected from the LFSR are first mixed, then passed through a highly-nonlinear, key-dependent S-box transformation, and mixed again. The resulting 5-word nonlinear block is combined with 5 new LFSR words to create 160 bits of keystream. The final addition of 5 LFSR words (called whitening) provides the output with good statistical properties, while the nonlinear block hides the linear properties of the LFSR. For each 160-bit block of key stream, the LFSR state is updated 5 times.
[Fig. 1. Block diagram for Turing. Words A, B, C, D, E selected from the LFSR register enter the non-linear filter: a 5-PHT mixes between words (giving TA...TE), byte rotations and the keyed 8×32 S-boxes mix within words (giving XA...XE), and a second 5-PHT mixes between words again (giving YA...YE). New LFSR words WA...WE are then added as whitening to produce the outputs ZA...ZE.]
The paper is set out as follows. First, the LFSR is defined in Section 2. Section 3 describes the NLF and explains how the overall structure of Turing operates. The key and initialization vector loading is described in Section 4. Section 5 discusses performance, and Section 6 analyses security and possible attacks. Turing uses “big-endian” byte ordering, in which the most significant byte of a multi-byte quantity appears first in memory. For example, a 32-bit word A has bytes indexed as (A0 , A1 , A2 , A3 ) where A0 is the most significant byte.
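This byte-numbering convention can be captured by an extraction macro along the lines of the B(A,i) helper that appears in the code of Section 3 (a sketch under the stated convention, not the reference implementation):

```c
#include <stdint.h>

/* Big-endian byte numbering: byte 0 of a 32-bit word is the most
   significant byte, byte 3 the least significant. */
#define B(A,i) ((uint8_t)((A) >> (24 - 8 * (i))))
```

So for the word 0x12345678, byte 0 is 0x12 and byte 3 is 0x78.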
2 LFSR of Turing
Binary Linear Feedback Shift Registers can be extremely inefficient in software on general-purpose microprocessors. LFSRs can operate over any finite field, so an LFSR can be made more efficient in software by utilizing a finite field more suited to the processor. Particularly good choices for such a field are the Galois fields with 2^w elements (GF(2^w)), where w is related to the size of items in the underlying processor, usually bytes or 32-bit words. The elements of this field and the coefficients of the recurrence relation occupy exactly one unit of storage and can be efficiently manipulated in software.

The standard representation of an element A in the field GF(2^w) is a w-bit word with bits (a_{w−1}, a_{w−2}, ..., a_1, a_0), which represents the polynomial a_{w−1} z^{w−1} + ... + a_1 z + a_0. Elements can be added and multiplied: addition of elements in the field is equivalent to XOR. To multiply two elements of the field, we multiply the corresponding polynomials modulo 2, and then reduce the resulting polynomial modulo a chosen irreducible polynomial of degree w.

It is also possible to represent GF(2^w) using a subfield. For example, rather than representing elements of GF(2^32) as degree-31 polynomials over GF(2), Turing uses 8-bit bytes to represent elements of a subfield GF(2^8), and 32-bit words to represent degree-3 polynomials over GF(2^8). This is isomorphic to the standard representation, but not identical. The subfield B = GF(2^8) of bytes is represented in Turing modulo the irreducible polynomial z^8 + z^6 + z^3 + z^2 + 1. Bytes represent degree-7 polynomials over GF(2); the constant β0 = 0x67 below represents the polynomial z^6 + z^5 + z^2 + z + 1, for example. The Galois field W = B^4 = GF((2^8)^4) of words can now be represented using degree-3 polynomials whose coefficients are bytes (subfield elements of B). For example, the word 0xD02B4367 represents the polynomial 0xD0·y^3 + 0x2B·y^2 + 0x43·y + 0x67.
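Multiplication in the byte subfield B can be sketched with the usual shift-and-reduce loop; the low byte of the reduction polynomial z^8 + z^6 + z^3 + z^2 + 1 is 0x4D. (Illustrative sketch; bmul is a hypothetical name, not part of the reference API.)

```c
#include <stdint.h>

/* Multiply two elements of B = GF(2^8) as represented in Turing,
   reducing modulo z^8 + z^6 + z^3 + z^2 + 1 (low byte 0x4D). */
static uint8_t bmul(uint8_t a, uint8_t b)
{
    uint8_t r = 0;
    while (b) {
        if (b & 1) r ^= a;                       /* add a if bit set  */
        b >>= 1;
        a = (uint8_t)((a << 1) ^ ((a & 0x80) ? 0x4D : 0));  /* a *= z */
    }
    return r;
}
```

For instance, z · z^7 = z^8 reduces to z^6 + z^3 + z^2 + 1, i.e. bmul(0x02, 0x80) yields 0x4D.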
The field W can be represented by an irreducible polynomial y^4 + β3·y^3 + β2·y^2 + β1·y + β0. The specific coefficients βi used in Turing are best given after describing Turing's LFSR. An LFSR of order k over the field GF(2^w) generates a stream of w-bit LFSR words {S[i]} using a register of k memory elements (R[0], R[1], ..., R[k−1]). The register stores the values of k successive LFSR words, so after i clocks the register stores the values of (S[i], S[i+1], ..., S[i+k−1]). At each clock, the LFSR computes the next LFSR word S[i+k] in the sequence using a GF(2^w) recurrence relation

    S[i+k] = α0·S[i] + α1·S[i+1] + ··· + α(k−1)·S[i+k−1],    (1)

and updates the register (here new contains the value of S[i+k]):

    R[0] = R[1]; R[1] = R[2]; ...; R[15] = R[16]; R[16] = new;

The register now contains (S[i+1], S[i+2], ..., S[i+k]). The linear recurrence (1) is commonly represented by the characteristic polynomial p(X) = X^k − Σ_{j=0}^{k−1} αj·X^j. In the case of Turing, the LFSR consists of k = 17 words of state information with w = 32-bit words. The LFSR was developed in three steps.
First, the characteristic polynomial of the Turing LFSR was chosen to be of the form p(X) = X^17 + X^15 + X^4 + α over GF(2^32). The exponents {17, 15, 4, 0} were chosen because they provide good security; the use of these exponents dates back to the design of SOBER-t16 and SOBER-t32 [13]. Next, the coefficient α = 0x00000100 ≡ 0x00·y^3 + 0x00·y^2 + 0x01·y + 0x00 = y was chosen because it allows an efficient software implementation: multiplication by α consists of shifting the word left by 8 bits, and adding (XOR) a pre-computed constant from a table indexed by the most significant 8 bits, as in SNOW. A portion of this table Multab for Turing is shown in Appendix A. In C code, the new word to be inserted in the LFSR is calculated:

    new = R[15] ^ R[4] ^ (R[0] << 8) ^ Multab[R[0] >> 24];

where ^ is the XOR operation, and << and >> are the left and right shift operations. Finally, the irreducible polynomial representing the Galois field W was chosen to be y^4 + 0xD0·y^3 + 0x2B·y^2 + 0x43·y + 0x67, since it satisfies the following constraints:

– The LFSR must have maximum-length period. The period has a maximum length (2^544 − 1) when the field representations make p(X) a primitive polynomial of degree 17 in the field W.
– Half of the coefficients of the bit-wise recurrence must be 1. The Turing LFSR is mathematically equivalent to 32 parallel bit-wide LFSRs over GF(2): each of length equivalent to the total state 17 × 32 = 544, each with the same recurrence relation, but different initial state [15]. Appendix D shows the polynomial p1(x) corresponding to the binary recurrence for the Turing LFSR. Requiring half of the coefficients to be 1 is ideal for maximum diffusion and strength against cryptanalysis.

The key stream is generated as follows (see Figure 1). First, the LFSR is clocked. Then the 5 values in R[16], R[13], R[6], R[1], R[0] are selected as the inputs (A, B, C, D, E) (respectively) to the nonlinear filter (NLF).
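The table-free form of this update can be sketched directly from the field structure: multiplying a word by α = y shifts it left one byte and folds the spilled top byte back in through the coefficients of y^4 + 0xD0·y^3 + 0x2B·y^2 + 0x43·y + 0x67; the Multab entry indexed by the top byte precomputes exactly this correction. (Illustrative sketch with hypothetical names bmul, mul_alpha and lfsr_clock; the reference code uses the precomputed table.)

```c
#include <stdint.h>

/* GF(2^8) multiply, reduction polynomial z^8+z^6+z^3+z^2+1 (0x4D). */
static uint8_t bmul(uint8_t a, uint8_t b)
{
    uint8_t r = 0;
    while (b) {
        if (b & 1) r ^= a;
        b >>= 1;
        a = (uint8_t)((a << 1) ^ ((a & 0x80) ? 0x4D : 0));
    }
    return r;
}

/* Multiply a word of W by alpha = y: shift left one byte and reduce
   the spilled coefficient through the field polynomial of W. */
static uint32_t mul_alpha(uint32_t w)
{
    uint8_t top = (uint8_t)(w >> 24);
    return (w << 8) ^ ((uint32_t)bmul(top, 0xD0) << 24)
                    ^ ((uint32_t)bmul(top, 0x2B) << 16)
                    ^ ((uint32_t)bmul(top, 0x43) << 8)
                    ^  (uint32_t)bmul(top, 0x67);
}

/* One LFSR clock: new = R[15] ^ R[4] ^ alpha*R[0], then shift. */
static void lfsr_clock(uint32_t R[17])
{
    uint32_t new_word = R[15] ^ R[4] ^ mul_alpha(R[0]);
    int i;
    for (i = 0; i < 16; i++) R[i] = R[i + 1];
    R[16] = new_word;
}
```

Note that mul_alpha(0x01000000) comes out to 0xD02B4367, the coefficient word of y^4 quoted above.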
The NLF produces the nonlinear block (YA, YB, YC, YD, YE) from (A, B, C, D, E). The LFSR is clocked an additional three times, and the values in R[14], R[12], R[8], R[1], R[0] of this new state (referred to as WA, WB, WC, WD, WE) are selected for the whitening. These five words are added (modulo 2^32) to the corresponding nonlinear-block words to form a 160-bit key stream block (ZA, ZB, ZC, ZD, ZE). Finally, the LFSR is clocked once more before generating the next key stream block (a total of five clocks between producing outputs). The key stream is output in the order ZA, ..., ZE, most significant byte of each word first. Issues of buffering bytes to encrypt data that is not aligned as multiples of 20 bytes are considered outside the scope of this document.
3 The Nonlinear Filter
The only component of Turing that is explicitly nonlinear is its S-boxes. Additional nonlinearity also comes from the combination of the operations of addition modulo 2^32 and XOR; while each of these operations is linear in its respective
mathematical group, each is slightly nonlinear in the other's group. As shown in Figure 1, the nonlinear filter in Turing consists of:

– Selecting the 5 input words A, B, C, D, E;
– Mixing the words using a 5-word Pseudo-Hadamard Transform (5-PHT), resulting in 5 new words TA, TB, TC, TD, TE;
– Applying a 32 × 32 S-box construction to each of the words to form XA, XB, XC, XD, XE. Prior to applying the S-box construction, the words TB, TC and TD are rotated left by 8, 16 and 24 bits respectively, to address a potential attack described below. The S-box construction mixes the bytes within each word using four key-dependent, 8 → 32 nonlinear S-boxes;
– Again mixing using the 5-PHT to form the words YA, YB, YC, YD, YE of the nonlinear block.

Note that the use of variables XA, XB and so forth is only to make the explanations simple. In practice, the same variable A would be overwritten for each of TA, XA, YA, ZA, and similarly for B, C, D, E.

3.1 The "Pseudo-Hadamard Transform" (PHT)
In the cipher family of SAFER [16], Massey uses this very simple construct (called a Pseudo-Hadamard Transform) to mix the values of two bytes: (a, b) → (2a + b, a + b), where the addition is modulo 2^8, the size of the bytes. The operation can be further extended to mix an arbitrary number of words (often called an n-PHT). Such operations are used in the SAFER++ block cipher [17] and the tc24 block cipher [24]. The Turing NLF uses addition modulo 2^32 to perform a 5-PHT:

    [TA]   [2 1 1 1 1]   [A]
    [TB]   [1 2 1 1 1]   [B]
    [TC] = [1 1 2 1 1] · [C]
    [TD]   [1 1 1 2 1]   [D]
    [TE]   [1 1 1 1 1]   [E]

Note that all diagonal entries are 2, except the last diagonal entry, which is 1. In C code, this is easily implemented and highly efficient:

    E = A + B + C + D + E;
    A = A + E;  B = B + E;  C = C + E;  D = D + E;

3.2 The S-Box Construction
The Turing S-box construction transforms each word using four logically independent 8 → 32 S-boxes S0, S1, S2, S3. These 8 → 32 S-boxes are applied to the corresponding bytes of the input word and XORed together, in a manner similar to that used in Rijndael [6]. However, unlike Rijndael, this transformation is unlikely to be invertible, as the expansion from 8 bits to 32 bits is nonlinear. These four
8 → 32 S-boxes are based in turn on a fixed 8 → 8 bit permutation denoted Sbox and a fixed nonlinear 8 → 32 bit function denoted Qbox, iterated with the data modified by variables derived during key setup.

The Sbox. The fixed 8 → 8 S-box is referred to in the rest of this document as Sbox[.]. It is a permutation of the input byte, has a minimum nonlinearity of 104, and is shown in Appendix B. The Sbox is derived by the following procedure, based on the well-known stream cipher RC4. RC4 was keyed with the 11-character ASCII string "Alan Turing", and then 256 generated bytes were discarded. Then the current permutation used in RC4 was tested for nonlinearity, another byte generated, etc., until a total of 10000 bytes had been generated. The best observed minimum nonlinearity was 104, which first occurred after 736 bytes had been generated. The corresponding state table, that is, the internal permutation after keying and generating 736 bytes, forms Sbox. By happy coincidence, this permutation also has no fixed points (i.e. ∀x, Sbox[x] ≠ x).

The Qbox. The Qbox is a fixed nonlinear 8 → 32-bit table, shown in Appendix C. It was developed by the Queensland University of Technology at our request [8]. It is best viewed as 32 independent Boolean functions of the 8 input bits. The criteria for its development were: the functions should be highly nonlinear (each has nonlinearity of 114); the functions should be balanced (same number of zeroes and ones); and the functions should be pairwise uncorrelated.

Computing the Keyed 8 → 32 S-boxes. Turing uses four keyed 8 → 32 S-boxes S0, S1, S2, S3. The original key is first transformed into the mixed key during key loading (see Section 4.1). The mixed key is accessed as bytes K_i[j]; the j index (0 ≤ j < N, where N is the number of words of the key) locates the word of the stored mixed key, while the i index (0 ≤ i ≤ 3) is the byte of the word, with the byte numbered 0 being the most significant byte.
Each S-box Si (0 ≤ i ≤ 3) uses bytes from the corresponding byte positions of the scheduled key. The process is best presented in algorithmic form. The following code implements the entire S-box construction, including the XOR of the four outputs of the individual S-boxes. The value w is the input word, and the integer r is the amount of rotation (recall that TB, TC, TD have their inputs rotated before being input to the S-box construction).

static WORD S(WORD w, int r)
{
    register int i;
    BYTE b[4];
    WORD ws[4];

    w = ROTL(w, r);      /* cyclic rotate w to left by r bits */
    WORD2BYTE(w, b);     /* divide w into bytes b[0]...b[3] */
    ws[0] = ws[1] = ws[2] = ws[3] = 0;
    for (i = 0; i < keylen; ++i) {
        /* compute b[i]=t_i and ws[i]=w_i */
        /* B(A,i) extracts the i-th byte of A */
        b[0] = Sbox[B(K[i],0) ^ b[0]];  ws[0] ^= ROTL(Qbox[b[0]], i+0);
        b[1] = Sbox[B(K[i],1) ^ b[1]];  ws[1] ^= ROTL(Qbox[b[1]], i+8);
        b[2] = Sbox[B(K[i],2) ^ b[2]];  ws[2] ^= ROTL(Qbox[b[2]], i+16);
        b[3] = Sbox[B(K[i],3) ^ b[3]];  ws[3] ^= ROTL(Qbox[b[3]], i+24);
    }
    /* now xor the individual S-box outputs together */
    w  = (ws[0] & 0x00FFFFFFUL) | (b[0] << 24);
    w ^= (ws[1] & 0xFF00FFFFUL) | (b[1] << 16);
    w ^= (ws[2] & 0xFFFF00FFUL) | (b[2] << 8);
    w ^= (ws[3] & 0xFFFFFF00UL) | b[3];
    return w;
}
   p_instance->x[1] = (k3<<16) | (k2>>16);
   p_instance->x[3] = (k0<<16) | (k3>>16);
   p_instance->x[5] = (k1<<16) | (k0>>16);
   p_instance->x[7] = (k2<<16) | (k1>>16);

   // Generate initial counter values
   p_instance->c[0] = _rotl(k2,16);
   p_instance->c[2] = _rotl(k3,16);
   p_instance->c[4] = _rotl(k0,16);
   p_instance->c[6] = _rotl(k1,16);
   p_instance->c[1] = (k0&0xFFFF0000) | (k1&0xFFFF);
   p_instance->c[3] = (k1&0xFFFF0000) | (k2&0xFFFF);
   p_instance->c[5] = (k2&0xFFFF0000) | (k3&0xFFFF);
   p_instance->c[7] = (k3&0xFFFF0000) | (k0&0xFFFF);

   // Reset carry flag
   p_instance->carry = 0;

   // Iterate the system four times
   for (i=0; i<4; i++)
      next_state(p_instance);

   // Modify the counters
   for (i=0; i<8; i++)
      p_instance->c[i] ^= p_instance->x[(i+4)&0x7];
}
Rabbit: A New High-Performance Stream Cipher
// Encrypt or decrypt a block of data
void cipher(t_instance *p_instance, const byte *p_src,
            byte *p_dest, size_t data_size)
{
   uint32 i;
   for (i=0; i<data_size; i+=16)
   {
      // Iterate the system
      next_state(p_instance);

      // Encrypt or decrypt 16 bytes of data
      *(uint32*)(p_dest+ 0) = *(uint32*)(p_src+ 0) ^ p_instance->x[0] ^
            (p_instance->x[5]>>16) ^ (p_instance->x[3]<<16);
      *(uint32*)(p_dest+ 4) = *(uint32*)(p_src+ 4) ^ p_instance->x[2] ^
            (p_instance->x[7]>>16) ^ (p_instance->x[5]<<16);
      *(uint32*)(p_dest+ 8) = *(uint32*)(p_src+ 8) ^ p_instance->x[4] ^
            (p_instance->x[1]>>16) ^ (p_instance->x[7]<<16);
      *(uint32*)(p_dest+12) = *(uint32*)(p_src+12) ^ p_instance->x[6] ^
            (p_instance->x[3]>>16) ^ (p_instance->x[1]<<16);

      // Increment pointers to source and destination data
      p_src  += 16;
      p_dest += 16;
   }
}

C_i = Z_i + 1 for Z_i ≥ A ∨ Z_{i−1} = 0. Therefore, C_i will run through the same set of numbers as Z_i, except that C_i will attain the value 2^256 − 1 but not the value A. Thus, the period of the recurrence relation, C, is the same as for the linear congruential generator, Z. In particular, C_i = C_j if and only if (i − j) mod N_c = 0.
Internal State Period. For convenience, we write the next-state function in the following way:

    x_{i+1} = F(y_i) mod 2^32,                               (29)

where

    y_i = (c_i + x_i) mod 2^32,                              (30)

such that x_i is the internal state variable and c_i is the counter state. According to a generalized version of Lemma 4.1 in [8], y_i will have at least the period of the counter system, N_c:

Proof. Given that y_i = y_j for i − j mod N_c = 0, then y_{i+1} = F(y_i) + c_{i+1} and y_{j+1} = F(y_j) + c_{j+1}. Moreover, we have c_{i+1} = c_{j+1}; therefore, y_{i+1} = y_{j+1}. Finally, if y_{i−1} ≠ y_{j−1}, this would imply that y_i ≠ y_j, which is a contradiction. Thus, also y_{i−1} = y_{j−1}.

However, a combination of the internal state, x_i, is extracted as output. It is not evident that x_i will have the same period as the counter system, but a lower bound for that period is obtained in the following. First, we note that there are relations between the counter period, N_c, the internal state period, N_x, and the period of the y variables, N_y:

    N_y = a·N_x = b·N_c,                                     (31)

where a and b are integers greater than zero with gcd(a, b) = 1.

Proof. Since x_{i+1} = F(y_i), we have N_x ≤ N_y. In particular, N_x divides N_y, because if we assume that this is not the case, then there would exist an i such that F(y_i) = x_{i+1} ≠ x_{i+1+⌊N_y/N_x⌋·N_x} = F(y_{i+⌊N_y/N_x⌋·N_x}), which contradicts the periodicity. Thus, there exists an integer a > 0 such that N_y = a·N_x. We also have that N_c divides N_y, because if this was not the case, then c_i ≠ c_{i+N_y}. We just showed that x_i = x_{i+N_y} for all i, but then y_{i+N_y} = x_{i+N_y} + c_{i+N_y} = x_i + c_{i+N_y} ≠ x_i + c_i = y_i, which again contradicts the N_y periodicity. Therefore, there exists an integer b > 0 such that N_y = b·N_c, and consequently N_y = a·N_x = b·N_c.

We have the relation N_x = (b/a)·N_c. Thus, we want to find an upper bound on the ratio a/b. This can be done as follows. Define the degeneracy d to be the maximal number of pre-images x_{i+1} can have, i.e. d is the maximal number of different y_i which give the same x_{i+1}, and similarly define d_g to be the analogue for each g-function. Then we can obtain the following rather conservative lower bound for the period: let (x_0, x_1, x_2, ..., x_{N_x−1}) be a periodic sequence with period N_x; then the upper bound on a/b is the degeneracy d, i.e.

    N_x ≥ N_c / d,                                           (32)

where N_c is the counter period.
Martin Boesgaard et al.
Proof. We want to show that k ≡ a/b = N_c/N_x ≤ d. The periodicity gives x_i = x_{i+N_x} = x_{i+2N_x} = ... = x_{i+(k−1)N_x}. On the other hand, the corresponding counter values are non-equal: c_i ≠ c_{i+N_x} ≠ c_{i+2N_x} ≠ ... ≠ c_{i+(k−1)N_x}. Therefore, it follows that x_i + c_i ≠ x_{i+N_x} + c_{i+N_x} ≠ ... ≠ x_{i+(k−1)N_x} + c_{i+(k−1)N_x}, or equivalently, y_i ≠ y_{i+N_x} ≠ y_{i+2N_x} ≠ ... ≠ y_{i+(k−1)N_x}. Because of the periodicity we have F(y_i) = F(y_{i+N_x}) = F(y_{i+2N_x}) = ... = F(y_{i+(k−1)N_x}). Since each x_{i+1} can maximally have d pre-images, we see that k = a/b = N_c/N_x ≤ d.
To illustrate that the period length is sufficiently large, consider the equation system x_{i+1} = F_I(x_i), arising by replacing all the g-functions by identity functions but keeping the rotations. Fixing any two of the 32-bit input variables, the resulting equation system has a unique output for the remaining six input variables. Therefore, F_I(x) is maximally 2^64-to-one. This bound can be combined with the measured degeneracy for the g-function, d_g = 18, to obtain d < 2^64 · 18^8 < 2^98, which shows that the period length of the state variables is sufficiently large, i.e. N_x ≥ (2^256 − 1)/d > 2^158.

This bound is, of course, highly underestimated. For instance, the F_I map will probably have a degeneracy close to one. Furthermore, all points in the periodic solution should have the maximal degeneracy, d, and they should appear in exact synchronization with the counter. So if the output of F is not correlated strongly with the counter sequence, the probability of actually realizing this lower bound is vanishing. Furthermore, for the specific g-function only one point has the maximal degeneracy of 18, and about half of the points have degeneracy one. It also follows from the above that if a point with degeneracy one belongs to the periodic solution, then the period cannot be shorter than the counter period.

Bit-Flip Probabilities. Below we calculate the bit-flip probabilities for the counter bits. Let the bit-wise carry Φ^[j+1] from bit position j to bit position j + 1 be defined as

    Φ^[j+1] = 1 if C^[j] + A^[j] + Φ^[j] ≥ 2, and Φ^[j+1] = 0 otherwise,    (33)

where all bit positions are computed modulo 256 (so the carry out of bit 255 becomes the carry into bit 0), and C and A are defined above. The value of C^[j] only changes when either Φ^[j] = 1 and A^[j] = 0, or Φ^[j] = 0 and A^[j] = 1. The probability of the carry can be found by solving a system of recursive equations for the carry probability, as is shown in the following. The probability of a carry from bit position j is given by

    P(Φ^[j] = 1) = (A^[j−1] + P(Φ^[j−1] = 1)) / 2.                          (34)
Inserting the same expression for P(Φ^[j⊟1] = 1) into this equation we obtain

  P(Φ^[j] = 1) = A^[j⊟1]/2^1 + (A^[j⊟2] + P(Φ^[j⊟2] = 1))/2^2.  (35)
Rabbit: A New High-Performance Stream Cipher
Continuing like this we get

  P(Φ^[j] = 1) = A^[j⊟1]/2^1 + A^[j⊟2]/2^2 + ... + A^[j⊟255]/2^255 + (A^[j] + P(Φ^[j] = 1))/2^256,  (36)
which can be rearranged into

  (2^256 − 1) · P(Φ^[j] = 1) = 2^255 · A^[j⊟1] + 2^254 · A^[j⊟2] + ... + 2^1 · A^[j⊟255] + 2^0 · A^[j].  (37)

This can equivalently be written as

  P(Φ^[j] = 1) = (A ≫ j) / (2^256 − 1),  (38)

where A ≫ j denotes cyclic right rotation of the 256-bit integer A by j positions. Inserting this expression into

  P(Φ^[j] ≠ A^[j]) = 1 − P(Φ^[j] = 1) if A^[j] = 1, and P(Φ^[j] ≠ A^[j]) = P(Φ^[j] = 1) if A^[j] = 0,  (39)

i.e. P(Φ^[j] ≠ A^[j]) = |A^[j] − P(Φ^[j] = 1)|, leads to the following equation describing the probability for a bit-flip at position j:

  P(Φ^[j] ≠ A^[j]) = |A^[j] − (A ≫ j)/(2^256 − 1)|.  (40)

The probabilities will be unique for each bit position, as A is formed by repeating the 6-bit block 110100, which fits unevenly into a 256-bit integer. Consequently, A ≫ i ≠ A for all i mod 256 ≠ 0, thereby making P(Φ^[j] ≠ A^[j]) unique for each j.
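The closed form (38) and the flip probabilities (40) can be checked exactly against the recursion (34). A small sketch in Python; the low-bit-first orientation assumed for the repeated block 110100 is an assumption, but the algebraic check does not depend on it:

```python
from fractions import Fraction

N = 256  # width of the counter system, in bits

# Build A by repeating the 6-bit block 110100 (bit ordering is assumed
# low-bit first here; the checks below do not depend on the orientation).
pattern = [1, 1, 0, 1, 0, 0]
A_bits = [pattern[j % 6] for j in range(N)]
A = sum(b << j for j, b in enumerate(A_bits))

def ror(x, r, n=N):
    """Cyclic right rotation of an n-bit integer by r positions."""
    r %= n
    return ((x >> r) | (x << (n - r))) & ((1 << n) - 1)

# Closed form (38): P(Phi[j] = 1) = (A >>> j) / (2^256 - 1), exactly.
P = [Fraction(ror(A, j), (1 << N) - 1) for j in range(N)]

# Recursion (34): P(Phi[j] = 1) = (A[j-1] + P(Phi[j-1] = 1)) / 2,
# with bit-position indices taken modulo 256.
for j in range(N):
    assert P[j] == (A_bits[(j - 1) % N] + P[(j - 1) % N]) / 2

# Bit-flip probabilities (40): P(Phi[j] != A[j]) = |A[j] - P(Phi[j] = 1)|.
flip = [abs(A_bits[j] - P[j]) for j in range(N)]

# 110100 fits unevenly into 256 bits, so all 256 rotations of A differ,
# and the flip probabilities are pairwise distinct.
assert len(set(flip)) == N
```

Because the carry wraps from the top bit position back to the bottom, the recursion is cyclic, which is why the denominator in (38) is 2^256 − 1 rather than 2^256.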
Helix: Fast Encryption and Authentication in a Single Cryptographic Primitive Niels Ferguson1 , Doug Whiting2 , Bruce Schneier3 , John Kelsey4 , Stefan Lucks5 , and Tadayoshi Kohno6 1
MacFergus
[email protected] 2 HiFn
[email protected] 3 Counterpane Internet Security
[email protected] 4
[email protected] 5 Universität Mannheim
[email protected] 6 UCSD
[email protected] 6 UCSD

Abstract. Helix is a high-speed stream cipher with a built-in MAC functionality. On a Pentium II CPU it is about twice as fast as Rijndael or Twofish, and comparable in speed to RC4. The overhead per encrypted/authenticated message is low, making it suitable for small messages. It is efficient in both hardware and software, and with some pre-computation can effectively switch keys on a per-message basis without additional overhead.

Keywords: Stream cipher, MAC, authentication, encryption.
1 Introduction
Securing data in transmission is the most common real-life cryptographic problem. Basic security services require both encryption and authentication. This is (almost) always done using a symmetric cipher—public-key systems are only used to set up symmetric keys—and a Message Authentication Code (MAC). The AES process provided a number of very good block cipher designs, as well as a new block cipher standard. The cryptographic community learned a lot during the selection process about the engineering criteria for a good cipher. AES candidates were compared in performance and cost in many different implementation settings. We learned more about the importance of fast re-keying and tiny-memory implementations, the cost of S-boxes and circuit-depth for hardware implementations, the slowness of multiplication on some platforms, and other performance considerations.

T. Johansson (Ed.): FSE 2003, LNCS 2887, pp. 330–346, 2003.
© Springer-Verlag Berlin Heidelberg 2003

The community also learned about the difference between cryptanalysis in theory and cryptanalysis in practice. Many block cipher modes restrict the types of
Helix: Fast Encryption and Authentication
331
attack that can be performed on the underlying block cipher. Yet the generally accepted attack model for block ciphers is very liberal. Any method that distinguishes the block cipher from a random permutation is considered an attack. Each block cipher operation must protect against all types of attack. The resulting over-engineering leads to inefficiencies. Computer network properties like synchronization and error correction have eliminated the traditional synchronization problems of stream-cipher modes like OFB. Furthermore, stream ciphers have different implementation properties that restrict the cryptanalyst. They only receive their inputs once (a key and a nonce) and then produce a long stream of pseudo-random data. A stream cipher can start with a strong cryptographic operation to thoroughly mix the key and nonce into a state, and then use that state and a simpler mixing operation to produce the key stream. If the attacker tries to manipulate the inputs to the cipher he encounters the strong cryptographic operation. Alternatively he can analyse the key stream, but this is a static analysis only. As far as we know, static attacks are much less powerful than dynamic attacks. As there are fewer cryptographic requirements to fulfill, we believe that the key stream generation function can be made significantly faster, per message byte, than a block cipher can be. Given the suitability of stream ciphers for many practical tasks and the potential for faster implementations, we believe that stream ciphers are a fruitful area of research. Additionally, a stream cipher is often implemented—and from a cryptographic point of view, should always be implemented—together with a MAC. Encryption and authentication go hand in hand, and significant vulnerabilities can result if encryption is implemented without authentication. Outside the cryptographic literature, not using a proper MAC is one of the commonly encountered errors in stream cipher systems.
A stream cipher with built-in MAC is much more likely to be used correctly, because it provides a MAC without the associated performance penalties. Helix is an attempt to combine all these lessons.
2 An Overview of Helix
Helix is a combined stream cipher and MAC function, and directly provides the authenticated encryption functionality. By incorporating the plaintext into the stream cipher state, Helix can provide the authentication functionality without extra costs [Gol00]. Helix's design strength is 128 bits, which means that we expect that no attack on the cipher exists that can be carried out with fewer than 2^128 Helix block function evaluations. Helix can process data in less than 7 clock cycles per byte on a Pentium II CPU, more than twice as fast as AES. Helix uses a 256-bit key and a 128-bit nonce. The key is secret, and the nonce is typically public knowledge. Helix is optimised for 32-bit platforms; all operations are on 32-bit words. The only operations used are addition modulo 2^32, exclusive or, and rotation by fixed numbers of bits. The design philosophy of Helix can be summarized as “many simple rounds.”
332
Niels Ferguson et al.
Helix has a state that consists of 5 words of 32 bits each. (This is the maximum state that can fit in the registers of the current Intel CPUs.) A single round of Helix consists of adding (or xoring) one state word into the next, and rotating the first word. This is shown in Figure 1 where the state words are shown as vertical lines.
Fig. 1. A single round of Helix.
Multiple rounds are applied in a cyclical pattern to the state. The horizontal lines of the rounds wind themselves in helical fashion through the five state words. Twenty rounds make up one block (see Figure 2). Helix actually uses two intertwined helices; a single block contains two full turns of each of the helices. During each block several other activities occur. During block i one word of key stream is generated (S_i), two words of key material are added (X_{i,0} and X_{i,1}), and one word of plaintext is added (P_i). The output state of one block is used as input to the next, so the computations shown in Figure 2 are all that is required to process 4 bytes of the message. As with any stream cipher, the ciphertext is created by xoring the plaintext with the key stream (not shown in the figure). At the start of an encryption a starting state is derived from the key and nonce. The key words X_{i,j} depend on the key, the length of the input key, the nonce, and the block number i. State guessing attacks are made more difficult by adding key material at double the rate at which key stream material is extracted. At the end of the message some extra processing is done, after which a 128-bit MAC tag is produced to authenticate the message.
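The round shape just described can be sketched as follows. The rotation count r is a placeholder: the actual counts and the add/xor pattern are fixed by Figure 2 of the specification (not reproduced here), so this shows only the structure of a round, not the real cipher:

```python
MASK = 0xFFFFFFFF  # all arithmetic is on 32-bit words

def rol(x, r):
    """Rotate a 32-bit word left by r bits (0 <= r < 32)."""
    return ((x << r) | (x >> (32 - r))) & MASK

def round_add(state, i, r):
    """One Helix-style round: add state word i into word i+1 (mod 2^32),
    then rotate word i.  r is a placeholder rotation count."""
    s = list(state)
    s[(i + 1) % 5] = (s[(i + 1) % 5] + s[i]) & MASK
    s[i] = rol(s[i], r)
    return s

def round_xor(state, i, r):
    """The xor variant of a round."""
    s = list(state)
    s[(i + 1) % 5] ^= s[i]
    s[i] = rol(s[i], r)
    return s
```

Twenty such rounds, with the key words, plaintext word, and key stream word injected at fixed points, make up one block.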
3 Definition of Helix
The Helix encryption function takes as input a variable-length key U of up to 32 bytes, a 16-byte nonce N, and a plaintext P. It produces a ciphertext message and a tag that provides authentication. The decryption function takes the key, nonce, ciphertext, and tag, and produces either the plaintext message or an error if the authentication failed.
3.1 Preliminaries
Helix operates on 32-bit words, while the inputs and outputs are sequences of bytes. In all situations Helix uses the least-significant-byte-first convention. A sequence of bytes x_i is identified with a sequence of words X_j by the relations

  X_j := Σ_{k=0}^{3} x_{4j+k} · 2^{8k}
  x_i := ⌊X_{⌊i/4⌋} / 2^{8(i mod 4)}⌋ mod 2^8

These two equations are complementary and show the conversion both ways. Let ℓ(x) denote the length of a string of bytes x. The input key U consists of a sequence of bytes u_0, u_1, ..., u_{ℓ(U)−1} with 0 ≤ ℓ(U) ≤ 32. The key is processed through the key mixing function, defined in section 3.7, to produce the working key, which consists of 8 words K_0, ..., K_7. The nonce N consists of 16 bytes, interpreted as 4 words N_0, ..., N_3. The plaintext P and ciphertext C are both sequences of bytes of the same length, with the restriction that 0 ≤ ℓ(P) < 2^64. Both are manipulated as sequences of words, P_i and C_i respectively. The last word of the plaintext and ciphertext might be only partially used. The ‘extra’ plaintext bytes in the last word are taken to be zero. The ‘extra’ ciphertext bytes are irrelevant and never used. Note that the cipher is specified for zero-length plaintexts; in this case, only a MAC is generated.
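These conversions can be sketched directly (hypothetical helper names; Python's little-endian byte order matches the least-significant-byte-first convention):

```python
def bytes_to_words(xs):
    """X_j = sum_{k=0}^{3} x_{4j+k} * 2^(8k); len(xs) must be a multiple of 4."""
    return [int.from_bytes(xs[4 * j:4 * j + 4], "little")
            for j in range(len(xs) // 4)]

def words_to_bytes(Xs):
    """x_i = floor(X_{i//4} / 2^(8 (i mod 4))) mod 2^8."""
    return bytes(b for X in Xs for b in X.to_bytes(4, "little"))

# Round trip: the two conversions are inverse to each other.
data = bytes(range(8))
assert bytes_to_words(data) == [0x03020100, 0x07060504]
assert words_to_bytes(bytes_to_words(data)) == data
```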
3.2 A Block
Helix consists of a sequence of blocks. The blocks are numbered sequentially, which assigns each block a unique number i. At the start of block i the state consists of 5 words: Z_0^{(i)}, ..., Z_4^{(i)}; at the end of the block the state consists of Z_0^{(i+1)}, ..., Z_4^{(i+1)}, which form the input to the next block, with number i + 1. Block i also uses as input two key words X_{i,0} and X_{i,1}, and the plaintext word P_i. It produces one word of key stream S_i := Z_0^{(i)}; the ciphertext words are defined by C_i := P_i ⊕ S_i. Instead of repeating the block definition in formulas, we define the block function using Figure 2. All values are 32-bit words; exclusive or is denoted by ⊕, addition modulo 2^32 by ⊞, and rotation by ≪. In the remainder of this paper, the terms “block” and “block function” are used interchangeably.
3.3 Key Words for Each Block
The expanded key words are derived from the working key K_0, ..., K_7, the nonce N_0, ..., N_3, the input key length ℓ(U), and the block number i. We first extend the nonce to 8 words by defining

  N_k := (k mod 4) − N_{k−4}  (mod 2^32)  for k = 4, ..., 7.

The key words for block i are then defined by

  X_{i,0} := K_{i mod 8}
  X_{i,1} := K_{(i+4) mod 8} + N_{i mod 8} + X′_i + i + 8
  X′_i := ⌊(i + 8)/2^31⌋ if i mod 4 = 3,  4 · ℓ(U) if i mod 4 = 1,  0 otherwise
where all additions are taken modulo 2^32. Note that X′_i encodes bits 31 to 62 of the value i + 8; this is not the same as the upper 32 bits of i + 8.
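The nonce extension and the per-block key words can be sketched directly from these formulas (hypothetical helper names; K is the 8-word working key, N the 4-word nonce, and l_U the input key length in bytes):

```python
MASK = 0xFFFFFFFF  # all arithmetic is modulo 2^32

def extend_nonce(N):
    """Extend the 4-word nonce: N_k := (k mod 4) - N_{k-4} (mod 2^32)."""
    return list(N) + [((k % 4) - N[k - 4]) & MASK for k in range(4, 8)]

def block_key_words(K, N8, l_U, i):
    """Key words (X_{i,0}, X_{i,1}) for block i.  X'_i encodes bits 31..62
    of i + 8 when i mod 4 = 3, and 4 * l(U) when i mod 4 = 1."""
    if i % 4 == 3:
        Xp = ((i + 8) >> 31) & MASK
    elif i % 4 == 1:
        Xp = 4 * l_U
    else:
        Xp = 0
    X0 = K[i % 8]
    X1 = (K[(i + 4) % 8] + N8[i % 8] + Xp + i + 8) & MASK
    return X0, X1
```

For example, with an all-zero nonce the extended words are (0, 0, 0, 0, 0, 1, 2, 3), and for block 0 the pair is (K_0, K_4 + 8 mod 2^32).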
3.4 Initialisation
A Helix encryption is started by setting

  Z_i^{(−8)} := K_{i+3} ⊕ N_i  for i = 0, ..., 3
  Z_4^{(−8)} := K_7

Eight blocks are then applied, using block numbers −8 to −1. For these blocks the plaintext word P_i is defined to be zero, and the generated key stream words are discarded.
3.5 Encryption
After the initialisation the plaintext is encrypted. Let k := ⌊(ℓ(P) + 3)/4⌋ be the number of words in the plaintext. The encryption consists of k blocks, numbered 0 to k − 1. Each block generates one word of key stream, which is used to encrypt one word of the plaintext. Depending on ℓ(P) mod 4, between 1 and 4 of the bytes of the last key stream word are used.
3.6 Computing the MAC
Just after the block that encrypted the last plaintext byte, one of the state words is modified: the state word Z_0^{(k)} is xored with the value 0x912d94f1. (This constant is constructed by taking the 6 least significant bits of each of the ASCII characters of the string “Helix”, and putting a single one bit both before and after it.) Using this modified state, eight blocks, numbered k, ..., k + 7, are applied for post-mixing. For these blocks the generated key stream is discarded and the plaintext word P_i is defined as ℓ(P) mod 4. After the post-mixing, four more blocks, numbered k + 8, ..., k + 11, are applied. The key stream generated by these four blocks forms the tag. The plaintext input remains the same as in the previous eight blocks.

3.7 Key Mixing

The key mixing converts a variable-length input key U to the fixed-length working key, K. First, the Helix block function is used to create a round function F that maps 128 bits to 128 bits. The four input words to F are extended with a single word with value ℓ(U) + 64 to form a 5-word state. The block function is then applied with zero key inputs and zero plaintext input. The first four state words of the resulting state form the result of F.
336
Niels Ferguson et al.
The input key U is first extended with 32 − ℓ(U) zero bytes. The 32 key bytes are converted to 8 words K_32, ..., K_39. Further key words are defined by the equation

  (K_{4i}, ..., K_{4i+3}) := F((K_{4i+4}, ..., K_{4i+7})) ⊕ (K_{4i+8}, ..., K_{4i+11})  for i = 7, ..., 0.

The words K_0, ..., K_7 form the working key of the cipher. (This recursion defines a Feistel-type cipher on 256-bit blocks.)
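The recursion can be sketched with a stub in place of the real round function F (the real F is built from the block function as described above; F_stub here is an arbitrary placeholder, since the Feistel structure works for any F):

```python
MASK = 0xFFFFFFFF

def F_stub(block):
    """Placeholder for the real F.  Any map on 4-word blocks illustrates
    the structure; the real F applies the Helix block function with
    l(U) + 64 appended as a fifth state word."""
    a, b, c, d = block
    return ((a * 3 + 1) & MASK, b ^ 0x9E3779B9, (c + d) & MASK, d ^ a)

def mix_key(padded_key_words):
    """Derive the working key K_0..K_7 from K_32..K_39 (the zero-padded
    32-byte input key) via
    (K_{4i..4i+3}) := F(K_{4i+4..4i+7}) xor (K_{4i+8..4i+11}), i = 7..0."""
    K = [0] * 32 + list(padded_key_words)  # positions 32..39 hold the input
    for i in range(7, -1, -1):
        f = F_stub(K[4 * i + 4:4 * i + 8])
        K[4 * i:4 * i + 4] = [f[k] ^ K[4 * i + 8 + k] for k in range(4)]
    return K[0:8]
```

Because each step is a Feistel round on a 256-bit block, the map is bijective: running the recursion forwards as K_{4i+8..4i+11} = F(K_{4i+4..4i+7}) ⊕ K_{4i..4i+3} recovers the padded input key from the working key.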
3.8 Decryption
Decryption is almost identical to encryption. The only differences are:
– The key stream generated at the start of each block is used to decrypt the ciphertext and produce the plaintext word that is required half a block later. Care has to be taken with the last plaintext word to ensure that unused plaintext bytes are taken to be zero and not filled with the extra key stream bytes.
– Once the tag has been generated it is compared to the tag provided. If the two values are not identical, all generated data (i.e. the key stream, plaintext, and tag) is destroyed.
4 Implementation
Compared to other ciphers, Helix is relatively easy to implement in software. If 32-bit addition, exclusive or, and rotation functions are available, all the functions are easily implemented. Helix is also fast. A single round takes only a single clock cycle to compute on a Pentium II CPU, because the super-scalar architecture can perform an addition or xor simultaneously with a 32-bit rotation. A block of Helix takes 20 cycles plus some overhead for the handling of the plaintext, key stream, and ciphertext. Our un-optimised assembly implementation requires less than 7 clock cycles per byte. This compares to about 16 clock cycles per byte for the best AES implementation on the same platform. (This is a somewhat unfair comparison. The AES implementation does not actually read the data from memory, encrypt it, and write it back, which would slow it down further. What is more, most block cipher modes only provide encryption or authentication, so two passes over the message are required. The alternative is to use one of the new authenticated encryption modes, such as [Jut01], but they are all patented and require a license.)

Most implementation flexibility is in the way the key schedule is computed. The key mixing only needs to be done once for each key value. The recurrence relation used in the key mixing implements a Feistel cipher, so the key mixing can be done in-place. The X_{i,1} key words can mostly be pre-computed, with only the block number being added every block. Implementations that limit the plaintext size to 2^32 bytes can ignore the upper bits of the block number in the definition of X′_i, because these bits will always be zero.
Helix: Fast Encryption and Authentication
337
Helix is also fast in hardware. The rotations cost no time, although they do consume routing resources in chip layouts. The critical path through the block function consists of 6 additions and 5 xors. As the critical path contains no rotations, a certain amount of ripple in the adders can be overlapped, with the lower bits being produced and used before the upper bits are available. A more detailed analysis of this overlapping is required for any high-speed implementation. A conservative estimate for a relatively low-cost ASIC layout is 2.5 ns per 32-bit adder and 0.5 ns per xor, which adds up to 17.5 ns per block. This translates to more than 200 MByte per second, or just under 2 Gbit per second.
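A quick check of the arithmetic behind this estimate (one 32-bit plaintext word, i.e. 4 bytes, is processed per block):

```python
# Critical path per block: 6 adders at 2.5 ns plus 5 xors at 0.5 ns.
block_ns = 6 * 2.5 + 5 * 0.5           # 17.5 ns per block
bytes_per_block = 4                    # one 32-bit plaintext word per block
mbyte_per_s = bytes_per_block / (block_ns * 1e-9) / 1e6
gbit_per_s = mbyte_per_s * 8 / 1000
# ~228.6 MByte/s and ~1.83 Gbit/s: "more than 200 MByte per second,
# or just under 2 Gbit per second".
assert block_ns == 17.5
assert 200 < mbyte_per_s < 230 and 1.8 < gbit_per_s < 2.0
```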
5 Use
One of the dangers of a stream cipher is that the key stream will be re-used. To avoid this problem Helix imposes a few restrictions on the sender and receiver:
– The sender must ensure that each (K, N) pair is used at most once to encrypt a message. A single sender must use a new, unique nonce for each message. Multiple senders that want to use the same key have to ensure that they never choose the same nonce, for example by dividing the nonce space between them. If two different messages are ever encrypted with the same (K, N) pair, Helix loses its security properties.
– The receiver may not release the plaintext P, or the key stream, until it has verified the tag successfully. In most situations this requires the receiver to buffer the entire plaintext before it is released.
These requirements seem restrictive, but they are in fact implicitly required by all stream ciphers (e.g. RC4) and many block cipher modes (e.g. OCB [RBBK01b,RBBK01a] and CCM [WHF]). Although Helix allows the use of short keys, we strongly recommend the use of keys of at least 128 bits, preferably 256 bits.
6 Other Modes of Use
So far we have described Helix as providing both encryption and authentication. Helix can be used in other modes as well. For any particular key, Helix should be used in only one of these modes. Using several modes with a single key can lead to a loss of security.
6.1 Unencrypted Headers
In packet environments it is often desirable to authenticate the packet header without encrypting it. From the encryption/authentication layer this looks like an additional string of data that is to be authenticated but not encrypted. We define a standard method of handling such additional data without modifying the basic Helix computations. First a length field is formed which is eight bytes long and encodes the length of the additional data in least-significant-byte first format. The additional data
is padded with 0–3 zero bytes until the length is a multiple of four. The concatenation of the length field, the padded additional data, and the message data is then processed as a normal message through Helix. The ciphertext bytes corresponding to the length field and the padded additional data are discarded, leaving only the ciphertext of the message data and the tag.
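A sketch of this framing (hypothetical helper name; the Helix encryption of the framed plaintext is elided):

```python
def frame_with_header(header, message):
    """Build the byte string fed to Helix as plaintext when a header must
    be authenticated but not encrypted: an 8-byte little-endian length
    field, the header padded with 0-3 zero bytes to a multiple of four,
    then the message data."""
    length_field = len(header).to_bytes(8, "little")
    pad = (-len(header)) % 4
    return length_field + header + b"\x00" * pad + message

framed = frame_with_header(b"hdr", b"payload")
assert framed[:8] == (3).to_bytes(8, "little")  # length field
assert len(framed) == 8 + 4 + 7                 # header padded to 4 bytes
```

After encryption, the ciphertext bytes covering the length field and the padded header are discarded; the receiver rebuilds the same framing from the header it sees in the clear.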
6.2 Pure Stream Cipher & PRNG
Helix can be used as a pure stream cipher by ignoring the MAC computations at the end. And like any stream cipher, Helix is a cryptographically strong pseudorandom number generator: for every (key, nonce) input it produces a stream of pseudo-random data. This makes Helix suitable for use as a PRNG.
6.3 MAC with Nonce
Helix can also be used as a pure MAC function. The data to be authenticated is encrypted, but the ciphertext is discarded. The receiver similarly discards the key stream and just feeds the plaintext to the Helix rounds. In this mode Helix is significantly faster than, for example, HMAC-SHA1, but it does require a unique nonce for each message. Unfortunately, it is insecure to use Helix with a fixed nonce value, due to collisions on the 160-bit state.
7 Design Rationale
Although the design strength of Helix is 128 bits, we use 256-bit keys. This avoids a very general class of attacks that exploits collisions on the key value. For flexibility Helix also allows shorter keys to be used, as there are many practical situations in which fewer than 256 bits of key material are available. The small set of elementary operations that Helix uses makes it efficient on a large number of platforms. The absence of tables makes Helix efficient in hardware as well. Most ciphers use lookup tables to provide the necessary nonlinearity. In Helix the nonlinearity comes from the mixing of xors with additions. Neither of these operations can be approximated well within the group of the other. There are some good approximations, but on average the approximations are quite bad [LM01]. The diffusion in Helix is not terribly fast, but it is unstoppable. As the attacker has very little control over the state, it is not possible to limit the diffusion of differences. In those areas where dynamic attacks are possible we use a sequence of 8 blocks to ensure thorough mixing of the state words. The key mixing is an un-keyed bijective function. The purpose is to spread the available entropy over all key words. If, for example, the key is provided by a SHA-1 computation then only 5 words of key material are provided. The key mixing ensures that all 8 key words depend on the key material. Using a bijective mixing function ensures that no two 256-bit input keys lead to the same working
key values. The use of the input key length in X′_i guarantees that even keys that lead to the same working key (each short key leads to a working key that is also produced by a 256-bit key) do not lead to equivalent Helix encryptions.
7.1 Key Schedule
The X_{i,0} values simply cycle through the key words. The X_{i,1} values depend on the same key words in anti-phase, the extended nonce words, the block number, and the input key length. This key schedule has a number of properties. All 8 key words and all 4 nonce words affect the state every 4 blocks. The key schedule also ensures that different (K, N) pairs produce different block key sequences. Even stronger: no sequence of 17 key words ever occurs twice across all keys, all nonce values, and all positions in the encryption computation. To demonstrate this we look at the sequence Y_j := X_{⌊j/2⌋, j mod 2}. This is the sequence of key words in the order they are used. Given just part of the sequence Y_j, without the proper index values j, we can recover the key, nonce, and block number. (When the plaintext word is zero the first half of the block function is identical to the second half of the block function, so it makes sense to look at the sequence Y_j and allow half-block offsets.)

If Y_j = Y_{j+16} then j is even, otherwise j is odd. This allows us to split the Y values back into an X_{i,0} and X_{i,1} sequence. Now consider

  R_i := X_{i,1} − X_{i,0} + X_{i+4,1} − X_{i+4,0}
       = N_{i mod 8} + N_{(i+4) mod 8} + X′_i + X′_{i+4} + 2i + 20
       = (i mod 4) + 2i + 20 + X′_i + X′_{i+4},

all modulo 2^32. We first look at R_i mod 4. The X′ terms can only have a nonzero contribution if i mod 4 = 3, so 3 out of 4 consecutive times we get just ((i mod 4) + 2i) mod 4 = 3i mod 4, which gives us i mod 4. Looking at the full R_i for an i with i mod 4 = 0 gives us i mod 2^31. The sum X′_i + X′_{i+4} from the case i mod 4 = 3 gives us the upper bits of i. This recovers the block number, i. (This isn't absolutely perfect: we don't recover the 62nd bit of i + 8, but that bit will only be set during the very last few blocks of a message very close to 2^64 bytes long. This does not lead to a weakness.) Given i mod 8 we can recover the working key from the X_{i,0}'s. Knowledge of i and the key words allows us to compute the key length and the nonce from the X_{i,1}'s, as well as check the redundancy introduced by the nonce expansion to 8 words.

We have not investigated whether it is possible to recover the key, nonce, and block number from fewer than 17 consecutive key words. A simple counting argument shows that at least 14 are required. This remains an open problem.

7.2 Choice of Rotation Counts

The strength of Helix depends on the rotation counts chosen for the Helix block function. The rotations provide the diffusion between the various bit positions
in the state words. During the design process we examined the impact of various choices of rotation counts, both in terms of attempts to cryptanalyze the cipher and in terms of their impact on statistical tests of the block function.

To analyse the diffusion properties of a set of rotation counts, consider a variant of the block function with all the additions changed to xors. (This is equivalent to ignoring the carries in the additions.) In this variant we can track which output bits are affected by which input bits. For this analysis we consider an output bit affected if its computational path has a dependency on the input bit at any one point, even if the output bit in our linearised block function is not changed due to several dependencies canceling out. This seems to be the most suitable way to analyse diffusion and is related to the independence assumption in differential and linear cryptanalysis.

A set of rotation counts can, at best, ensure that changing a single state input bit affects at least 21 bits of the output. There are a large number (over 6000) of such rotation count sets. We discarded all rotation count sets that contained a rotation count of 0, 1, 8, 16, 24, or 31. Rotation by a multiple of 8 has a relatively low order, and rotation by 1 or 31 bit positions provides diffusion between adjacent bits, something the carry bits already do. This reduced the set of candidate rotation counts to 86.

Using the full block function we ran statistical tests on many candidate rotation count sets to see how these values would affect the ability of the block function to diffuse changes and mix together separate information within the 160-bit internal state. Among our tests, we considered:
1. The number of rounds required before all output bits passed binomial tests, given a fixed input difference in the state.
2. The number of rounds required before the output states' Hamming weight distribution passed a χ2 test, given low- and high-Hamming-weight input states.
3. The number of rounds required before the output differences' Hamming weight distribution passed a χ2 test, given low- and high-Hamming-weight differences in the input state [KRRR98].
4. Low- and high-Hamming-weight higher-order differences, and the number of rounds required before the resulting output differences' Hamming weights passed a χ2 test.

The surprising result was that most rotation counts did pretty well. Our carefully selected rotation count sets were slightly better than random ones, but only by a small margin. Degenerate rotation counts (all rotation counts equal, or most rotation counts zero) led to much worse test results. At the end of our analysis, we selected more or less at random from the remaining candidates. Based on our limited analysis, the specific choice of rotation counts does not have a strong impact on the security of Helix, with the only caveat that we had to avoid some obvious degenerate cases.
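The linearised dependency tracking described above can be sketched as follows. The absorb-then-rotate round pattern and the rotation counts are placeholders for illustration, not the actual Helix values:

```python
def affected_bits(rot_counts, n_rounds=20):
    """Track, ignoring carries (all additions treated as xors), which
    output bits depend on each input bit.  State: 5 words of 32 bits;
    each bit position carries the set of input bits it depends on.
    Returns the minimum, over input bits, of affected output bits."""
    deps = [[{(w, b)} for b in range(32)] for w in range(5)]
    for r in range(n_rounds):
        i = r % 5
        j = (i + 1) % 5
        rot = rot_counts[r % len(rot_counts)]
        # word j absorbs word i (xor, or addition without carries) ...
        deps[j] = [deps[j][b] | deps[i][b] for b in range(32)]
        # ... then word i is rotated left by rot
        deps[i] = [deps[i][(b - rot) % 32] for b in range(32)]
    count = {}
    for w in range(5):
        for b in range(32):
            for src in deps[w][b]:
                count[src] = count.get(src, 0) + 1
    return min(count.get((w, b), 0) for w in range(5) for b in range(32))
```

With all rotation counts zero, dependencies never leave their bit position, so a single input bit can affect at most 5 output bits (one per word); non-trivial rotation counts spread a change much further.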
8 Conclusions and Intellectual Property Statement
Most applications that require symmetric cryptography actually require both encryption and authentication. We believe that the most efficient way to achieve this combined goal is to design cryptographic primitives specifically for the task. Towards this end, we present such a new cryptographic primitive, called Helix. We hope that Helix and this paper will spur additional research in authenticated encryption stream ciphers. As with any experimental design, we remark that Helix should not be used until it has received additional cryptanalysis. Finally, we hereby explicitly release any intellectual property rights to Helix into the public domain. Furthermore, we are not aware of any patent or patent application anywhere in the world that covers Helix.
Acknowledgements We would like to thank David Wagner, Rich Schroeppel, and the anonymous referees for their helpful comments and encouragements. Felix Schleer helped us by creating one of the reference implementations.
References

[Arm02] Frederik Armknecht. A linearization attack on the Bluetooth key stream generator. Cryptology ePrint Archive, Report 2002/191, 2002. http://eprint.iacr.org/2002/191.
[Cou02] Nicolas Courtois. Higher order correlation attacks, XL algorithm, and cryptanalysis of Toyocrypt. In Information Security and Cryptology – ICISC 2002, volume 2587 of Lecture Notes in Computer Science. Springer-Verlag, 2002. To appear.
[CP02] Nicolas Courtois and Josef Pieprzyk. Cryptanalysis of block ciphers with overdefined systems of equations. In Yuliang Zheng, editor, Advances in Cryptology – ASIACRYPT 2002, volume 2501 of Lecture Notes in Computer Science, pages 267–287. Springer-Verlag, 2002.
[DGV93] Joan Daemen, René Govaerts, and Joos Vandewalle. Resynchronisation weaknesses in synchronous stream ciphers. In Tor Helleseth, editor, Advances in Cryptology – EUROCRYPT '93, volume 765 of Lecture Notes in Computer Science, pages 159–167. Springer-Verlag, 1993.
[Gol00] Jovan Dj. Golić. Modes of operation of stream ciphers. In Douglas R. Stinson and Stafford Tavares, editors, Selected Areas in Cryptography, 7th Annual International Workshop, SAC 2000, volume 2012 of Lecture Notes in Computer Science, pages 233–247. Springer-Verlag, 2000.
[Jut01] Charanjit S. Jutla. Encryption modes with almost free message integrity. In Birgit Pfitzmann, editor, Advances in Cryptology – EUROCRYPT 2001, volume 2045 of Lecture Notes in Computer Science, pages 529–544. Springer-Verlag, 2001.
[KRRR98] Lars R. Knudsen, Vincent Rijmen, Ronald L. Rivest, and M.J.B. Robshaw. On the design and security of RC2. In Serge Vaudenay, editor, Fast Software Encryption, 5th International Workshop, FSE '98, volume 1372 of Lecture Notes in Computer Science, pages 206–221. Springer-Verlag, 1998.
[LM01] Helger Lipmaa and Shiho Moriai. Efficient algorithms for computing differential properties of addition. In Mitsuru Matsui, editor, Fast Software Encryption 2001, Lecture Notes in Computer Science. Springer-Verlag, 2001. To appear. Available from http://www.tcs.hut.fi/~helger/papers/lm01/.
[RBBK01a] Phillip Rogaway, Mihir Bellare, John Black, and Ted Krovetz. OCB: A block-cipher mode of operation for efficient authenticated encryption, September 2001. Available from http://www.cs.ucdavis.edu/~rogaway.
[RBBK01b] Phillip Rogaway, Mihir Bellare, John Black, and Ted Krovetz. OCB: A block-cipher mode of operation for efficient authenticated encryption. In Eighth ACM Conference on Computer and Communications Security (CCS-8), pages 196–205. ACM Press, 2001.
[WHF] Doug Whiting, Russ Housley, and Niels Ferguson. Counter with CBC-MAC (CCM). Available from csrc.nist.gov/encryption/modes/proposedmodes/ccm/ccm.pdf.
A Test vectors
The authors will maintain a web site at http://www.macfergus.com/helix with news, example code, and test vectors. We give some simple test vectors here. (The 8-word working key is given as a sequence of 32 bytes, least significant byte first.)

Initial Key: <empty string>
Nonce: 00 00 00 00 00 …
Working Key: a9 3b 6e 32 bc e3 da 57 7d ef …
Plaintext: 00 00 00 00 00 …
Ciphertext: 70 44 c9 be 48 …
MAC: 65 be 7a 60 fd …

Initial Key: 00 …
Nonce: 00 …
Working Key: 6e …
Plaintext: 00 …
Ciphertext: 7a …
MAC: e4 …

Initial Key: 48 65 …
Nonce: 30 31 …
Working Key: 6c 1e …
Plaintext: 48 65 …
Ciphertext: 6c 4c …
MAC: 6c 82 …

(Only the leading bytes of each field are shown here; the complete test vectors are available from the web site above.)
Helix: Fast Encryption and Authentication
B  Cryptanalysis
Helix is intended to provide everything needed for an encrypted and authenticated communications session. A successful attack on Helix has occurred when an attacker can either predict a keystream bit he hasn't seen with a probability slightly higher than 50%, or create a forged or altered message that is accepted by the recipient with a probability substantially higher than 2^-128. To be meaningful given the 128-bit security bound of Helix, any such attack must require fewer than 2^128 block function evaluations for all participants combined. Also, any such attack must obey the security requirements placed on Helix' operation, e.g., no reuse of nonces, MACs checked before decrypted messages are released, etc. In this section, we consider a number of possible ways to attack Helix. Although our time and resources have been limited, we have not yet discovered any workable method of attacking Helix.

B.1  Static Analysis
A static analysis takes just the key stream and tries to reconstruct the state and key. Several properties make this type of attack difficult. Even if the whole state is known at some point, any four consecutive key stream words are fully random with respect to the key. This is because each X_{i,1} key value affects S_{i+1} in a bijective manner, so for any given state and any sequence of X_{i,0} words there is a bijective mapping from K_{(i+4) mod 8}, ..., K_{(i+7) mod 8} to S_{i+1}, ..., S_{i+4}. A similar argument applies when the block function is computed backwards. Any attempt to recover the key, even if the state is known at a single point, must therefore span at least 4 blocks and 5 key stream words.

Of course, there is no reasonable way of finding the state. At the beginning of each block there are 128 bits of unknown state. (The 32 bits of the key stream word are known to the attacker.) As the design strength is 128 bits, an attacker cannot afford to guess the entire state. A partially guessed state does not help much, as key material is added at twice the rate that key stream is produced.

B.2  Period Length
Helix' internal state is updated continuously by the plaintext it is encrypting. As long as the plaintext does not repeat, the key stream should have an arbitrarily long period. With a fixed or repeating plaintext, the Helix state does not cycle either. In Section 7.1 we showed that any 17 consecutive key words used as inputs to the block function are unique. The non-repeating key word values prevent the state from ever falling into a cycle.

B.3  State Collisions
The 160-bit state of Helix can be expected to collide for some (key, nonce) pairs. However, this does not lead to a weakness, because the state collision is guaranteed not to survive long enough to yield an attack, or even to allow reliable detection by the attacker. Detecting a collision on 160-bit values requires 160 bits of information about each state. But in the four block computations required to generate 160 bits of key stream, the whole key, the nonce, and the block number are added to the state. Starting from the same state, these inputs will introduce a difference in the key stream and make it impossible to detect the state collision⁴.

B.4  Weak Keys
Helix makes constant use of the words of the working key. An all-zero working key intuitively seems like a bad thing (it effectively omits a few operations from the block function), but we have not discovered any possible attack based on it. The all-zero working key is only generated by a single key, of 32 bytes in length; shorter keys cannot generate the all-zero working key. The all-zero working key does not seem to have any practical security relevance, and there is no reason to treat this key differently from any other key.

B.5  Adaptive Chosen Plaintext Attacks
Because the plaintext affects the state, Helix allows an attack model that traditional stream ciphers prevent: an attacker can request the encryption of a plaintext block under an already established (key, nonce) pair, and can use the resulting ciphertext to determine what plaintext to request next. We have found no way to use such an attack against Helix. As with the discussion of static analysis above, the large unknown and untouchable state, and the continual mixing of key material into that state, appear to defeat attempts to use control over one input of the block function to control other parts of its state. Additionally, the usage restrictions on Helix do not allow reuse of nonces, which ensures that the state is always a "moving target."

B.6  Chosen Input Differential Attacks
One powerful mode of attack is for the attacker to make small changes in the input values and look at how the changes propagate through the cipher. In Helix, this can be done only with the key or the nonce. In each case, the block function is applied multiple times to the input. At all the places in Helix where such attacks are possible, there are eight consecutive blocks without any output. A change to the nonce, such as is considered in [DGV93], will be thoroughly mixed into the state by the time the first key stream word is generated. Similarly, a change to the last plaintext byte is thoroughly mixed into the state before the first MAC tag word is generated. A differential attack would have to use a differential through 8 blocks, or 160 rounds of Helix. A search found no useful differentials for 8 blocks of Helix, nor useful higher-order differentials.

⁴ State collisions where the key and nonce are the same and the block number differs only in the upper 30 bits also do not lead to an attack.
Fig. 3. A round of Single-Helix.
B.7  Algebraic Attacks Over GF(2)
The only reasonable line of attack we have found so far is to apply equation-solving techniques. In 2002, XSL was used to analyse block ciphers [CP02]. An attack on Serpent seems to be marginally better than brute force; another attack, on the AES, is slower than brute force. Similar techniques have been used to successfully analyse stream ciphers [Cou02,Arm02].

We have tried to analyse Helix by algebraic techniques. Under an optimistic assumption (from the attacker's point of view) on the number of linearly independent equations, the best attack we could think of requires solving an (overdefined) system of ≈ 2^49.7 linear equations in N = 2^49.1 binary variables. Gaussian elimination needs N^3 ≈ 2^147.3 steps, and falls well outside our security bound. [CP02] suggests using another algorithm, which takes O(N^2.376) steps, but with an apparently huge proportional constant. In our case N^2.376 ≈ 2^116.7, so even a relatively small proportional constant pushes this beyond our security bound⁵. Our analysis has not resulted in an attack that requires less work than 2^128 block function evaluations, and we conjecture that no such attack exists.
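The arithmetic behind these estimates is easy to check. The following small sketch (our own verification, not part of the paper) recomputes the base-2 logarithms of the two work factors:

```python
# With N = 2^49.1 binary variables, Gaussian elimination costs
# N^3 = 2^147.3 steps; the O(N^2.376) algorithm costs about 2^116.7.
log2_N = 49.1
gauss = 3 * log2_N        # log2 of N^3
fast = 2.376 * log2_N     # log2 of N^2.376

assert abs(gauss - 147.3) < 1e-9
assert abs(fast - 116.7) < 0.05
assert gauss > 128        # well outside the 2^128 security bound
```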
C  Single Helix

Most ciphers are analysed by first creating simplified versions and attacking those. Apart from the obvious methods of simplifying Helix, we present Single Helix as an object for study. Single Helix uses only one helix instead of two interleaved ones, and has significantly slower diffusion in the backwards direction. A block of Single Helix is shown in Figure 3. This uses an alternative configuration where the key and plaintext inputs are added directly to the state words.
⁵ Due to space constraints, we left out a more detailed description of the attack.
PARSHA-256 – A New Parallelizable Hash Function and a Multithreaded Implementation

Pinakpani Pal and Palash Sarkar

Cryptology Research Group, ECSU and ASU, Indian Statistical Institute, 203 B.T. Road, Kolkata, India 700108
{pinak,palash}@isical.ac.in
Abstract. In this paper, we design a new hash function PARSHA-256. PARSHA-256 uses the compression function of SHA-256 along with the Sarkar-Schellenberg composition principle. As a consequence, PARSHA-256 is collision resistant if the compression function of SHA-256 is collision resistant. On the other hand, PARSHA-256 can be implemented using a binary tree of processors, resulting in a significant speed-up over SHA-256. We also show that PARSHA-256 can be efficiently implemented through concurrent programming on a single processor machine using a multithreaded approach. Experimental results on a P4 running Linux show that for long messages the multithreaded implementation is faster than SHA-256.

Keywords: hash function, SHA-256, parallel algorithm, binary tree.
1  Introduction
A collision resistant hash function is a basic primitive of modern cryptography. One important application of such functions is in "hash-then-sign" digital signature protocols. In such a protocol a long message is hashed to produce a short message digest, which is then signed. Since hash functions are typically invoked on long messages, it is very important for the digest computation algorithm to be very fast.

Design of hash functions has two goals – collision resistance and speed. For the first goal, it is virtually impossible to describe a hash function and prove it to be collision resistant. Thus we have to assume some function to be collision resistant. It seems more natural to make this assumption when the input is a short string rather than a long string. On the other hand, the input to a practical hash function can be arbitrarily long. Thus one has to look for a method of extending the domain of a hash function in a secure manner, i.e., the hash function on the larger domain is collision resistant if the hash function on the smaller domain is collision resistant. In the literature, the fixed domain hash function which is assumed to be collision resistant is called the compression function, and the method to extend the domain is called the composition principle.

A widely used composition principle is the Merkle-Damgård (MD) principle introduced in [3,7]. To the best of our knowledge, most known practical algorithms like MD5, the SHA family, RIPEMD-160 [4], etc. are built using the MD composition principle. These functions vary in the design of the compression function. In fact, most of the work on practical hash function design has concentrated on the design of the compression function. This is due to the fact that the known attacks on practical hash functions are actually based on attacks on the compression function. See [9] for a survey and history of hash functions. As a result of the intense research on the design of compression functions, today there are a number of compression functions which are widely believed to be collision resistant. Some examples are the compression functions of RIPEMD-160, SHA-256, etc.

As mentioned before, the other aspect of practical hash functions is the speed of the algorithm to compute the digest. One way to improve the speed is to use parallelism. Parallelism in the design of hash functions has been studied earlier. The compression function of RIPEMD-160 has a built-in parallel path [4]. In [2] the parallelism present in the compression function of the SHA family is studied. In a recent work, [8] studies the efficiency of parallel implementation of some dedicated hash functions. In [6], Knudsen and Preneel describe a parallelizable construction of a secure hash function based on error-correcting codes. Another relevant paper is a hash function based on the FFT principle [11]. Also, [1] describes an incremental hash function, which is parallelizable. However, most of the work seems to be concentrated on exploiting parallelism in the compression function. The other way to achieve parallelism is to incorporate it in the composition principle.

This work has been supported partially by the ReX program, a joint activity of USENIX Association and Stichting NLnet.

T. Johansson (Ed.): FSE 2003, LNCS 2887, pp. 347–361, 2003.
© International Association for Cryptologic Research 2003
One such work based on binary trees is by Damgård [3]. However, the algorithm in [3] is not practical, since the size of the binary tree grows with the length of the message. One recent paper which describes a practical parallel composition principle is the work by Sarkar and Schellenberg [10].

In this paper we design a collision resistant hash function based on the Sarkar-Schellenberg (SS) composition principle. To actually design a hash function, it is not enough to have a secure composition principle; we must have a "good" compression function. As mentioned before, research in the design of compression functions has given us a number of such "good" functions. Thus one way to design a new hash function is to take the SS composition principle and an already known "good" compression function and combine them to obtain the new hash function. This new function will inherit the collision resistance from the compression function and the parallelism from the composition principle. Note that the parallelism in the composition principle is in addition to any parallelism which may be present in the compression function. Thus the studies carried out in [8,2] on the parallel implementation of the standard hash functions are also relevant to the current work. In this paper we use this idea to design a new hash function – PARSHA-256 – which uses the SS composition principle along with the compression function of SHA-256.
PARSHA-256 can be implemented in both a sequential and a parallel manner. A fully parallel implementation of PARSHA-256 will provide a significant speed-up over SHA-256. However, for widespread software use, a full parallel implementation might not always be possible. We still want our hash function to be used – without significantly sacrificing efficiency. One approach is to simulate the binary tree of processors with a binary tree of lesser height. Details of this simulation algorithm can be found in [10]. Another approach is to use concurrent programming with threads to simulate the parallelism. We provide a multithreaded implementation of PARSHA-256.

The SS composition principle is based on a binary tree of processors. In each round some or all of the processors work in parallel and invoke the compression function. The entire algorithm goes through several such parallel rounds. Our strategy is to simulate the processors using threads. The simulation is round by round, i.e., for each parallel round a number of threads (corresponding to the number of processors for that round) are started. All the threads execute the compression function in a concurrent manner. Also, the inputs to the threads are different. The simulation of a round ends when all the threads have completed their tasks. This is repeated for all the parallel rounds. Experimental results on a P4 running Linux show that for long messages the above strategy of concurrent execution leads to a speed-up over SHA-256. This speed-up varies with the length of the message and the size of the binary tree.

Thus we obtain a new hash function which is collision resistant if the compression function of SHA-256 is collision resistant; is significantly faster than SHA-256 if implemented in a fully parallel manner; and, on certain single processor platforms, for long messages is still faster than SHA-256 if implemented as a concurrent program using threads.
2  Compression Function and Processor Tree

We describe our choice of the compression function and the processor tree used for the composition principle.

2.1  Choice of Compression Function

Let h() be the compression function of SHA-256. The input to h() consists of the following two quantities: (1) A: sixteen 32-bit words, and (2) B: eight 32-bit words. In the intermediate stages, A is obtained from the message and B is the intermediate hash value. The output of h() consists of eight 32-bit words. Thus the input to h() is 768 bits and the output of h() is 256 bits. In the rest of the paper we will use n = 768 and m = 256. In our algorithm, the inputs to h() will be formed differently. However, we do not change the definition of h(), and hence the assumption that h() is collision resistant remains unchanged.

2.2  Processor Tree
We will use a binary tree of processors. For t > 0, we define the processor tree T_t of height t in the following manner: there are 2^t processors, numbered P_0, ..., P_{2^t−1}. For 0 ≤ i ≤ 2^{t−1} − 1, the children of processor P_i are P_{2i} and P_{2i+1}. The arcs point towards parents, i.e., the arc set of T_t is A_t = {(P_{2i}, P_i), (P_{2i+1}, P_i) : 0 ≤ i ≤ 2^{t−1} − 1}. Thus the arcs coming into P_0 are from P_1 and P_0 itself. We define I = {0, ..., 2^{t−1} − 1}, L = {2^{t−1}, ..., 2^t − 1} and P = {0, ..., 2^t − 1}. Figure 1 shows T_3.
Fig. 1. Processor Tree with t = 3.
The inputs to the processors are binary strings, and the behaviour of any processor P_i is described as follows:

    P_i(y) = h(y)   if |y| = n;
           = y      otherwise.                                      (1)

Thus P_i invokes the compression function h() on the string y if the length of y is n; otherwise it simply returns the string y. We note that in the digest computation algorithm the length of y will always be n, m or 0.
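Equation (1) and the tree conventions above can be sketched in a few lines of Python. The stand-in compression function below is an assumption for illustration only: the real h() is the SHA-256 compression function, which hashlib does not expose, so we substitute the full SHA-256 hash restricted to 768-bit inputs.

```python
import hashlib

N_BYTES, M_BYTES = 768 // 8, 256 // 8   # n and m from Section 2.1, in bytes

def h_stub(block: bytes) -> bytes:
    """Stand-in for the compression function h: 768 bits -> 256 bits."""
    assert len(block) == N_BYTES
    return hashlib.sha256(block).digest()

def processor(y: bytes) -> bytes:
    """P_i(y) = h(y) if |y| = n, else y  (Equation (1))."""
    return h_stub(y) if len(y) == N_BYTES else y

def arc_set(t: int):
    """A_t: arcs (P_2i, P_i) and (P_2i+1, P_i) for 0 <= i <= 2^(t-1) - 1."""
    return [(2 * i + c, i) for i in range(2 ** (t - 1)) for c in (0, 1)]

# T_3 has 8 processors; P_0 receives arcs from P_1 and from itself.
assert len(arc_set(3)) == 8
assert (0, 0) in arc_set(3) and (1, 0) in arc_set(3)
assert len(processor(b"\x00" * N_BYTES)) == M_BYTES   # n-bit input is hashed
assert processor(b"abc") == b"abc"                    # other lengths pass through
```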
3  A Special Case
In this section, we describe a special case, with a suitable message length, a processor tree with t = 3, and without the use of an initialization vector. The purpose of this description is to highlight the basic idea behind the design. In Section 4, we provide the complete specification of PARSHA-256. The special case described here is intended to help the reader better appreciate the different parameters of the general specification.

Let x be the message to be hashed, with length L = 2^t(p + 2)(n − m) − (n − 2m) for some integer p ≥ 0. Consider the processor tree T_3 having processors P_0, ..., P_7. During the hash function computation, the message x will be broken up into disjoint substrings of lengths n or n − 2m. These substrings will be provided as input to the processors in the different rounds. Let us denote by u_0, ..., u_7 the substrings of x which are provided to the processors P_0, ..., P_7 in a particular round. The computation will be done in (p + 4) parallel rounds. In each round some or all of the processors work in parallel and apply the compression function to their inputs. Let us denote by z_0, ..., z_7 respectively the outputs of the processors P_0, ..., P_7 in a particular round. The description of the rounds is as follows.
PARSHA-256 – A New Parallelizable Hash Function
351
1. In round 1, each processor P_j, with 0 ≤ j ≤ 7, gets as input an n-bit substring u_j of the message x and produces an m-bit output z_j.
2. In rounds 2 to (p + 1) the computation proceeds as follows.
   (a) Processors P_0, ..., P_3 each get an (n − 2m)-bit substring of the message x. These substrings are u_0, ..., u_3. Processor P_j (0 ≤ j ≤ 3) concatenates the m-bit strings z_{2j} and z_{2j+1} of the previous round to u_j to form an n-bit input. For example, P_0 concatenates z_0, z_1 to u_0; P_1 concatenates z_2, z_3 to u_1, and so on. Note that all the intermediate hash values z_0, ..., z_7 of the previous round are used up.
   (b) Processors P_4, ..., P_7 each get an n-bit substring of the message as input, i.e., the strings u_4, ..., u_7 are all n-bit strings.
   (c) Each of the processors invokes the compression function on its n-bit input to produce an m-bit output.
3. In round (p + 2), processors P_0, ..., P_3 each get an (n − 2m)-bit string, i.e., the strings u_0, ..., u_3 are each (n − 2m)-bit strings. None of the processors P_4, ..., P_7 gets any input. Each processor P_j (0 ≤ j ≤ 3) then forms an n-bit string as described in item 2a above. These strings are hashed to obtain m-bit outputs z_0, ..., z_3.
4. In round (p + 3), processors P_0 and P_1 each get an (n − 2m)-bit string. (The other processors do not get any input.) These processors then form n-bit inputs using the strings z_0, ..., z_3 as before. The n-bit strings are hashed to produce two m-bit strings z_0 and z_1.
5. In round (p + 4) only processor P_0 gets an (n − 2m)-bit string. The m-bit outputs z_0, z_1 of round (p + 3) are concatenated to this (n − 2m)-bit string to form an n-bit input. This input is hashed to obtain the final message digest.

Figure 2 shows the working of the algorithm. Note that the total number of bits that is hashed is equal to 2^t·n + p(2^{t−1}(n − 2m) + 2^{t−1}·n) + (n − 2m)(2^{t−1} + ··· + 1). A routine simplification shows that this is equal to the length L of the message x.
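This counting can be checked mechanically. The sketch below (our own verification, not part of the paper) sums the bits consumed per round and compares against L, using 2^{t−1} + ··· + 1 = 2^t − 1:

```python
n, m, t = 768, 256, 3          # parameters of the special case

for p in range(6):
    L = 2**t * (p + 2) * (n - m) - (n - 2*m)
    total = (2**t * n                                     # round 1
             + p * (2**(t-1) * (n - 2*m) + 2**(t-1) * n)  # rounds 2..(p+1)
             + (n - 2*m) * (2**t - 1))                    # rounds (p+2)..(p+4)
    assert total == L
```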
Hence the entire message is hashed to produce the m-bit message digest. Now we consider the modifications required to handle the general situation.

3.1  Arbitrary Lengths
The message length that we have chosen is of a particular form. In general we have to tackle arbitrary length messages. This requires that the original message be padded with 0's to obtain the length in a desirable form.

3.2  Processor Tree
The special case described above is for t = 3. Depending upon the availability of resources, one might wish to use a larger tree. We have provided the specification of PARSHA-256 using the tree height as a parameter. Suppose the height of the available processor tree is T. However, the length of the message might not be large enough to utilize the entire processor tree. In this case, one has to utilize a subtree of height t ≤ T, which we call the effective height of the processor tree.

Fig. 2. Example for the special case.

3.3  Initialization Vector
The description of the special case does not use an initialization vector (IV). As a result there are invocations of the compression function where the input is formed entirely from the message bits. This implies that any collision for the compression function immediately provides a collision for the hash function. To avoid this situation, one can use an initialization vector as part of the input to the invocations of the compression function. This ensures that to find a collision for the hash function, one has to find a collision for the compression function where a portion of the input is fixed.

Using an IV is relatively simple in the Merkle-Damgård composition scheme: the IV has to be used only for the first invocation of the compression function. For the tree based algorithm, the IV has to be used at several points. It has to be used for all invocations of the compression function in the first round and for all invocations of the compression function by leaf level processors in the subsequent rounds. The disadvantage of using an IV is the fact that the number of invocations of the compression function increases. Further, this number increases as the length of the IV increases. To allow more flexibility to the user, we provide for three different possible lengths for the IV. The effect of the length of the IV on the number of parallel rounds and the number of invocations of the compression function is discussed in Section 5.
4  PARSHA-256 Specification
In this section we provide the detailed technical specification of the new hash function PARSHA-256. This includes the padding, the formatting of the message and the digest computation algorithm. The choice of the compression function and the SS composition principle is also a part of these specifications.

4.1  Parameters and Notation
1. n = 768 and m = 256.
2. Compression function h : {0, 1}^n → {0, 1}^m.
3. Message x having length |x| = L bits.
4. Height of available processor tree is T.
5. Effective height of processor tree is t.
6. Initialization vector IV having length |IV| = l ∈ {0, 128, 256}.
7. Functions δ(i) and λ(i): δ(i) = 2^i(2n − 2m − l) − (n − 2m); λ(i) = 2^{i−1}(2n − 2m − l).
8. q, r and b are defined from L and t as follows: if L > δ(t), then write L − δ(t) = qλ(t) + r, where r is the unique integer from the set {1, ..., λ(t)}; if L = δ(t), then q = r = 0.
9. b = ⌈r/(2n − 2m − l)⌉.
10. Number of parallel rounds: R = q + t + 2.
11. The empty string will be denoted by NULL.
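The parameter computations above can be sketched as follows. This is our own illustration of items 7–10; the function and variable names are ours.

```python
import math

n, m = 768, 256

def delta(i, l):
    return 2**i * (2*n - 2*m - l) - (n - 2*m)     # item 7

def lam(i, l):
    return 2**(i - 1) * (2*n - 2*m - l)           # item 7

def parameters(L, t, l):
    """Return (q, r, b, R) per items 8-10, assuming L >= delta(t)."""
    if L == delta(t, l):
        q = r = 0
    else:
        # L - delta(t) = q*lambda(t) + r with r in {1, ..., lambda(t)}
        q, r = divmod(L - delta(t, l) - 1, lam(t, l))
        r += 1
    b = math.ceil(r / (2*n - 2*m - l))            # item 9
    R = q + t + 2                                 # item 10
    return q, r, b, R

# Hypothetical examples: a message one bit longer than delta(3) with a
# 256-bit IV, and a message of length exactly delta(3) with no IV.
assert parameters(delta(3, 256) + 1, 3, 256) == (0, 1, 1, 5)
assert parameters(delta(3, 0), 3, 0) == (0, 0, 0, 5)
```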
The initialization vector IV of length l is specified as follows. Let IV* denote the 256-bit initialization vector described in the specification of SHA-256. If l = 256, then IV = IV*; if l = 128, then IV is the first 128 bits of IV*; and if l = 0, then IV = NULL.

4.2  Formatting the Message
The message x undergoes two kinds of padding. In the first kind of padding, called end-padding, zeros are appended to the end of x to get the length of the padded message in a certain form. This padding is defined in Step 5 of PARSHA-256 in Section 4.4. The other kind of padding is what we call IV-padding. If l > 0, then IV-padding is done to ensure that no invocation of h() gets only message bits as input.

We now describe the formatting of the message into substrings. Let the end-padded message be written as U_1||U_2||...||U_R, where for 1 ≤ i ≤ R − 1, U_i = u_{i,0}||...||u_{i,2^t−1}, and the u_{i,j} and U_R are strings of length 0, n − 2m or n − l as defined in Equation (2):

             n − l    if (i = 1) or (2 ≤ i ≤ q + 1 and j ∈ L);
             n − l    if i = q + 2 and 2^{t−1} ≤ j ≤ 2^{t−1} + b − 1;
             0        if i = q + 2 and 2^{t−1} + b ≤ j ≤ 2^t − 1;
  |u_{i,j}| = 0       if q + 2 < i < R and j ∈ L;                          (2)
             n − 2m   if 2 ≤ i ≤ q + 2 and j ∈ I;
             n − 2m   if q + 2 < i < R and 0 ≤ j ≤ K_i − 1;
             0        if q + 2 < i < R and K_i ≤ j ≤ 2^{t−1} − 1.

  |U_R| = n − 2m if b > 0; 0 otherwise.

Here

             K_i = 2^{s−1} + k_s,                                          (3)

where s = R − i and k_s = ⌊(2^{t−s−1} + b − 1)/2^{t−s}⌋.

For 1 ≤ i < R and 0 ≤ j ≤ 2^t − 1, the input to processor P_j in round i is a string v_{i,j} and the output is z_{i,j}. These strings are defined as follows:

  z_{i,j} = P_j(v_{i,j});
  v_{i,j} = u_{i,j}||IV                            if i = 1 or j ∈ L;      (4)
          = z_{i−1,2j}||z_{i−1,2j+1}||u_{i,j}      if 1 < i < R and j ∈ I.

For l = 0, the correctness of the above formatting algorithm can be found in [10]. The same proof also holds for the case l > 0, and hence we do not repeat it here.

4.3  Computation of Digest
The digest computation algorithm is described as follows.

ComputeDigest(x, t)
Inputs: message x and effective tree height t.
Output: m-bit message digest.
1. for 1 ≤ i ≤ R − 1
2.   for j ∈ P do in parallel
3.     z_{i,j} = P_j(v_{i,j});
4.   enddo;
5. enddo;
6. if b > 0 then w = P_0(z_{R−1,0}||z_{R−1,1}||U_R); else w = z_{R−1,0};
7. z = h(w||bin_{n−m}(L));
8. return z.

The function bin_k(i) is defined in the following manner: for 0 ≤ i ≤ 2^k − 1, bin_k(i) denotes the k-bit binary representation of i.

Remark 1. In rounds 1 to (q + 1) all the processors invoke the compression function on their n-bit inputs. However, in rounds (q + 2) to (q + t + 1) only some of the processors actually invoke the compression function. The compression function is invoked only if the input to the processor is an n-bit string. Otherwise the processor simply outputs its input (see Equation (1)). This behaviour of the processors is controlled by the formatting of the message. The precise details are as follows: let i be the round number and s = R − i. In round i = q + 2, processors P_0, ..., P_{2^{t−1}+b−1} invoke the compression function; in rounds q + 2 < i < q + t + 2, processors P_0, ..., P_{K_i−1} invoke the compression function, and the processor P_j (if any) with 2^{s−1} + k_s − 1 < j < 2^{s−1} + l_s − 1, where l_s = ⌊(b + 2^{t−s} − 1)/2^{t−s}⌋, simply outputs its m-bit input. All other processors in these rounds are inactive.

4.4  Digest Generation and Verification
We are now in a position to define the digest of a message x. Suppose that we have at our disposal a processor tree of height T. Then the digest z of x is defined in the following manner.

PARSHA-256(x, T)
Inputs: message x and height T of available binary tree.
1. if L ≤ δ(0) = n − l, then return h(h(x||0^{n−l−L}||IV)||bin_{n−m}(L));
2. if δ(0) < L < δ(1), then x = x||0^{δ(1)−L}; L = δ(1);
3. Determine t as follows: t = T if L ≥ δ(T); t = i if δ(i) ≤ L < δ(i + 1), 1 ≤ i < T;
4. Determine q, r and b from L and t (see Section 4.1);
5. x = x||0^{b(2n−2m−l)−r};
6. z = ComputeDigest(x, t);
7. output (t, z).
Clearly the digest z depends upon the height of the tree t. Hence along with z, the quantity t is also provided as output. Note that the height t of the tree used to produce the digest may be less than the height T of the tree that is available. The reason for this is that the message length L may not be long enough to utilize the entire tree. Thus t is the effective height of the tree used to compute the digest. During verification, Step 3 of PARSHA-256 is not executed, since the effective height of the tree is already known. This raises the following question: what happens if the verifier does not have access to a tree of height t? In [10], it is shown that any digest produced using a tree of height t can also be produced using a tree of height t′ with 0 ≤ t′ < t. The same algorithm will also work in the present case, and hence we do not repeat it here. Moreover, in this paper we provide a multithreaded implementation of algorithm ComputeDigest() where the processors are implemented using threads. This also shows that access to a physical processor tree is not necessary for digest computation.
5  Theoretical Analysis
In this section we perform a theoretical analysis of the collision resistance and speed-up of PARSHA-256. The speed-up is with respect to SHA-256, which is built using the Merkle-Damgård composition principle.
5.1  Collision Resistance
We first note that the composition scheme used in the design of PARSHA-256 is the parallel Sarkar-Schellenberg scheme described in [10]. Hence we have the following result.

Theorem 1 (Sarkar-Schellenberg [10]). If the compression function h() of SHA-256 is collision resistant, then so is PARSHA-256.

If no initialization vector is used, i.e., if l = 0, then the ability to obtain a collision for h() immediately implies the ability to obtain a collision for PARSHA-256. Hence we can state the following result.

Theorem 2. If l = 0, then h() is collision resistant if and only if PARSHA-256 is collision resistant.

What happens if l > 0? In this situation the initialization vector IV is non-trivial. The intuitive idea is to increase the collision resistance of the hash function beyond that of the compression function. If there is no IV, then a collision for the compression function immediately leads to a collision for the hash function. However, if an IV is used, then the adversary has to find a collision for the compression function under the condition that a certain portion of the input is fixed. Intuitively, this could be a more difficult task for the adversary. On the other hand, Dobbertin [5] has shown that for MD4 the use of an IV does not lead to any additional protection. Still, the use of an IV is quite common in hash function specifications, and hence we also include it in the specification of PARSHA-256.

5.2  Speed-Up over SHA-256
Let the end-padded length L of the message x be such that L = γ(n − m) for some positive integer γ. To hash a message x of length L, SHA-256 requires γ invocations of h(), and hence the time required to hash x is γT_h, where T_h is the time required by one invocation of h(). (We are ignoring the last invocation of h(), where the length of the message is hashed; this step is required by both SHA-256 and PARSHA-256.) We now compare this to the number of invocations of h() and the number of parallel rounds required by PARSHA-256. Recall from Section 4.1 that the number of parallel rounds required by PARSHA-256 is R. Thus the time required for parallel execution of PARSHA-256 is RT_h. The number of invocations of h() by PARSHA-256 is the same as the number of invocations of h() by PHA in [10].

Proposition 1. The number of invocations of h() by PARSHA-256 on a message of length L is equal to (q + 2)2^t + 2b − 2.

The parameters q and b depend on L, t, l, n and m. We have the following result.

Proposition 2. If l > 0, the number of invocations made by PARSHA-256 is more than that made by SHA-256.

This is due to the use of IV. Thus a strictly sequential simulation of PARSHA-256 will require more time than SHA-256. In the next section, we show that for long messages a multithreaded implementation of PARSHA-256 on a single-processor machine can lead to a speed-up over SHA-256.
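Proposition 1 yields a simple invocation counter. The derivation of q and b from L, t, l, n and m (given in Section 4) is not repeated here, so the helper below takes them as inputs; the sketch is purely illustrative:

```python
def parsha_invocations(q: int, t: int, b: int) -> int:
    """Invocation count from Proposition 1: (q + 2)*2^t + 2b - 2."""
    return (q + 2) * 2 ** t + 2 * b - 2

def sha256_invocations(L: int, n: int = 512, m: int = 256) -> int:
    """SHA-256 needs gamma = L / (n - m) invocations when L = gamma*(n - m)."""
    assert L % (n - m) == 0
    return L // (n - m)
```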
6
Multithreaded Implementation
We implement the algorithm to compute PARSHA-256 using threads. Each processor is implemented as a thread, and the simultaneous operation of the
358
Pinakpani Pal and Palash Sarkar
processors is simulated by concurrent execution of the respective threads. There are R parallel rounds in the algorithm. Each round consists of two phases: a formatting phase and a hashing phase. In the formatting phase, the inputs to the processors are formed using the message and the outputs of the previous invocations of h(). Once this phase is completed, the hashing phase starts. In the hashing phase all the processors operate in parallel to produce the output.

We use two buffer sets: the input and the output buffer sets. The input buffer set consists of 2^t strings of length n each. Similarly, the output buffer set consists of 2^t strings of length m each. Thus each processor has its own input buffer and output buffer. In the formatting phase, the input buffers are updated using the message and the output buffers. In the hashing phase, the input buffers are read and the output buffers are updated. During implementation we declare the buffers to be global variables. This avoids unnecessary overhead during thread creation.

The formatting phase prepares the inputs to all the processors. This phase is executed in a sequential manner: first the input to processor P_0 is prepared, then the input to processor P_1, and so on for the required number of processors. After the formatting phase is complete, the hashing phase is started. The exact details of processor invocation are as follows.

Rounds 1 to q + 1: P_0, ..., P_{2^t − 1} each invoke the compression function.
Round q + 2: P_0, ..., P_{2^{t−1} + b − 1} each invoke the compression function.
Round i with q + 2 < i < R: P_0, ..., P_{K_i − 1} each invoke the compression function.
Round R: if b > 0, then P_0 invokes the compression function.

Here K_i is as defined in Equation (3). Note that in rounds q + 2 to R at most one processor may additionally output the m-bit input that it received in the previous round. (See Remark 1 for further explanation.) Each processor is simulated using a thread.
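One parallel round of this kind can be sketched in a few lines. The compression function here is a hashlib stand-in (not the real SHA-256 compression function), the formatting rule is a simplified placeholder for the scheme of [10], and the sizes are illustrative:

```python
import hashlib
import threading

N_BYTES, M_BYTES = 64, 32   # input length n and output length m, in bytes
NUM_PROCS = 8               # 2^t simulated processors (here t = 3)

# Global buffer sets, as in the text: one n-byte input buffer and one
# m-byte output buffer per simulated processor.
in_buf = [bytes(N_BYTES) for _ in range(NUM_PROCS)]
out_buf = [bytes(M_BYTES) for _ in range(NUM_PROCS)]

def h(block: bytes) -> bytes:
    """Thread-safe stand-in for the compression function h()."""
    return hashlib.sha256(block).digest()

def hash_phase(active: int) -> None:
    """Run processors P_0 .. P_{active-1} concurrently, one thread each."""
    def worker(j: int) -> None:
        out_buf[j] = h(in_buf[j])   # each thread touches only its own buffers
    threads = [threading.Thread(target=worker, args=(j,))
               for j in range(active)]
    for t in threads:
        t.start()
    for t in threads:               # the round ends when every thread is done
        t.join()

def one_round(chunks) -> None:
    """One parallel round: sequential formatting, then parallel hashing."""
    for j, chunk in enumerate(chunks):      # formatting phase (sequential)
        in_buf[j] = (chunk + out_buf[j])[:N_BYTES]
    hash_phase(len(chunks))                 # hashing phase (concurrent)
```

Because each worker reads and writes only its own buffers, the two phases never race on the same buffer, matching the conflict-freedom argument below.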
In the hashing phase of each round, the required threads are started. Each thread is given an integer j, which identifies the processor number and hence the input and output buffers. Each thread also gets the address of the start location of the subroutine h(). The subroutine h() is implemented in a thread-safe manner, so that conflict-free concurrent execution of the same code is possible. The buffer management strategy described above ensures that there is no read/write conflict for the buffers even during concurrent execution. The hashing phase is completed only when all the started threads successfully terminate. This also ends one parallel round of the algorithm. Finally, the algorithm ends when all the parallel rounds are completed.

Concurrent execution can be utilized further. As described before, each round has two phases: the reading/formatting phase and the hashing phase. It is possible to overlap these two phases in the following manner. Suppose the system is in the hashing phase of
PARSHA-256 – A New Parallelizable Hash Function
359
Table 2. Details of different test platforms.

Platform        | Silicon Graphics O2 | P4
Number of CPUs  | 1                   | 1
Processor       | MIPS R12000A        | Intel Pentium 4
Processor Speed | 400 MHz             | 1.40 GHz
Main Memory     | 512 MB              | 256 MB
OS              | IRIX 6.5            | RedHat Linux 8.0
a particular round. At this point it is possible to concurrently execute the reading/formatting phase of the next round. The advantage is that in the next round the hashing phase can be started immediately, since the reading/formatting phase of this round has been completed concurrently with the hashing phase of the previous round. In situations where memory access is slow, this method will provide speed improvements. On the other hand, to avoid read/write conflict, we have to use two sets of buffers, leading to a more complicated buffer management strategy. For our work, we have chosen not to implement this idea.
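The overlapping idea described above, which the authors chose not to implement, can be sketched with double buffering; the phase callbacks and buffer objects below are hypothetical placeholders, not part of the paper's implementation:

```python
import threading

def pipelined_rounds(rounds, format_phase, hash_phase):
    """Overlap the formatting phase of round i+1 with the hashing phase
    of round i, using two buffer sets (double buffering)."""
    buffers = [object(), object()]        # stand-ins for the two buffer sets
    format_phase(rounds[0], buffers[0])   # prepare the first round's inputs
    for i, rnd in enumerate(rounds):
        hasher = threading.Thread(target=hash_phase,
                                  args=(rnd, buffers[i % 2]))
        hasher.start()
        if i + 1 < len(rounds):           # format the next round concurrently,
            format_phase(rounds[i + 1], buffers[(i + 1) % 2])  # other buffer
        hasher.join()                     # round i ends when hashing completes
```

The two buffer sets are exactly what prevents the read/write conflict mentioned in the text: the hashing thread reads one set while the formatting code writes the other.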
7
Experimental Results
First we note that if PARSHA-256 is simulated sequentially, then the time taken is proportional to the number of invocations of h(). From Table 1, we know that under a strictly sequential execution PARSHA-256 will be roughly as fast as SHA-256 when l = 0, and somewhat slower than SHA-256 when l > 0, by the factor given in Table 1. Also, for a fully parallel implementation the speed-up of PARSHA-256 over SHA-256 is determined by the corresponding factor in Table 1. In this section we compare the performance of the multithreaded implementation of PARSHA-256 with SHA-256.

The experiments have been carried out on two platforms (see Table 2). The algorithms have been implemented in C, and the same code was executed on both platforms. The order of bytes in a long differs between the two platforms; this was the only factor taken into account while running the program.

Remark 2. The compression function h() of SHA-256 is also the compression function of PARSHA-256. We implemented h() as a subroutine, and this subroutine was invoked by both SHA-256 and PARSHA-256. Thus the comparison of the two implementations is really a comparison of the two composition principles. Any improvement in the implementation of the compression function h() will improve the speed of both SHA-256 and PARSHA-256, but the comparative performance ratio would remain roughly the same.

To provide a common platform for comparison, the same background machine load was maintained for the execution of both SHA-256 and PARSHA-256. For comparison purposes we have calculated the difference in clock() between the start and end of the program for both SHA-256 and PARSHA-256. Extensive experiments were carried out, and a summary of the main points is as follows.
– On P4 running Linux, the following was observed. For long messages of around 1 Mbyte or more, the multithreaded implementation of PARSHA-256 was faster by a factor of 2 to 3 for all values of l.
– On SG, the speed of both algorithms was roughly the same for l = 0 and 128. For l = 256, the speed of PARSHA-256 was roughly 0.85 times the speed of SHA-256.
– For short messages, the multithreaded implementation was slower. This is possibly due to higher thread management overhead.
– The gain in speed decreases as l increases. This is due to the increase in the number of invocations of the compression function, as shown in Table 1.
– The gain in speed increases with message length. However, the rate of increase is slow.

As an outcome of our experiments, we can conclude that on P4 running Linux and for long messages, the multithreaded implementation of PARSHA-256 is roughly 2 to 3 times faster than SHA-256.
8
Conclusion
In this paper, we have presented a new hash function, PARSHA-256. The hash function is built using the SS composition principle and the compression function of SHA-256. Since the SS composition principle is parallelizable, our hash function is also parallelizable. A full parallel implementation of PARSHA-256 will show a significant speed-up over SHA-256. In this paper, we have described a concurrent implementation of PARSHA-256 on a single-processor machine. Experimental results show that for long messages the concurrent implementation is still faster than SHA-256. The basic idea explored in the paper is that it is possible to obtain secure and parallelizable hash functions by combining the SS composition principle with a "good" compression function. We have done this using the compression function of SHA-256. Using other "good" compression functions, such as those of RIPEMD-160 or of other SHA variants, will also yield new and fast parallel hash functions. We believe this task will be a good research/industrial project with many practical applications.
Acknowledgement We would like to thank the reviewers of the paper for their detailed comments, which helped to considerably improve the description of the hash function.
References 1. M. Bellare and D. Micciancio. A New Paradigm for Collision-Free Hashing: Incrementality at Reduced Cost. Lecture Notes in Computer Science, (Advances in Cryptology - EUROCRYPT 1997), pages 163-192.
2. A. Bosselaers, R. Govaerts and J. Vandewalle. SHA: A Design for Parallel Architectures? Lecture Notes in Computer Science, (Advances in Cryptology - EUROCRYPT'97), pages 348-362.
3. I. B. Damgård. A design principle for hash functions. Lecture Notes in Computer Science, 435 (1990), 416-427 (Advances in Cryptology - CRYPTO'89).
4. H. Dobbertin, A. Bosselaers and B. Preneel. RIPEMD-160: A strengthened version of RIPEMD. Cambridge Workshop on Cryptographic Algorithms, 1996, LNCS, vol. 1039, Springer-Verlag, Berlin, 1996, pp. 71-82.
5. H. Dobbertin. Cryptanalysis of MD4. Journal of Cryptology, 11(4): 253-271 (1998).
6. L. Knudsen and B. Preneel. Construction of Secure and Fast Hash Functions Using Nonbinary Error-Correcting Codes. IEEE Transactions on Information Theory, vol. 48, no. 9, September 2002, pp. 2524-2539.
7. R. C. Merkle. One way hash functions and DES. Lecture Notes in Computer Science, 435 (1990), 428-446 (Advances in Cryptology - CRYPTO'89).
8. J. Nakajima and M. Matsui. Performance Analysis and Parallel Implementation of Dedicated Hash Functions. Lecture Notes in Computer Science, (Advances in Cryptology - EUROCRYPT 2002), pp. 165-180.
9. B. Preneel. The state of cryptographic hash functions. Lecture Notes in Computer Science, 1561 (1999), 158-182 (Lectures on Data Security: Modern Cryptology in Theory and Practice).
10. P. Sarkar and P. J. Schellenberg. A Parallelizable Design Principle for Cryptographic Hash Functions. IACR e-print server, 2002/031, http://eprint.iacr.org.
11. C. Schnorr and S. Vaudenay. Parallel FFT-Hashing. Fast Software Encryption, Lecture Notes in Computer Science, vol. 809, pages 149-156, 1994.
A
Test Vector
Our implementation of PARSHA-256 is available at http://www.isical.ac.in/~crg/software/parsha256.html. The test vector that we use is the string (abcdefgh)^128. (Note that the corresponding files for little and big endian architectures are going to be different.) Each character represents a byte, so the entire string is of length 1 Kbyte. We run PARSHA-256 for t = 3 and for l = 0, 128 and 256. Denote the resulting message digests by d1, d2 and d3. Each di is a 256-bit value; the hex representations are given below.

d1: 4d4c2b13 3e516dc1 35065779 536fd4bf 74f98189 bc6b2a92 10803d38 77e3b656
d2: e554c47b 1538c9db 5cbff219 2d620fd3 ae21d04a 5ae6fa50 150888cc da6cf783
d3: 459142c5 fcd6eff6 839d6740 177b54d5 2e8bc987 a7438438 a588441a 7113e8d3
Practical Symmetric On-Line Encryption

Pierre-Alain Fouque, Gwenaëlle Martinet, and Guillaume Poupard

DCSSI Crypto Lab
51 Boulevard de La Tour-Maubourg
75700 Paris 07 SP, France
{Pierre-Alain.Fouque,Gwenaelle.Martinet}@ens.fr
[email protected]

Abstract. This paper addresses the security of symmetric cryptosystems in the blockwise adversarial model. At Crypto 2002, Joux, Martinet and Valette proposed a new kind of attacker against several symmetric encryption schemes. In this paper, we first show a generic technique to thwart blockwise adversaries for a specific class of encryption schemes: it consists in delaying the output of the ciphertext block. Then we provide the first security proof for the CFB encryption scheme, which is naturally immune against such attackers.

Keywords: Symmetric encryption, blockwise adversary, chosen plaintext attacks.
1
Introduction
Modes of operation are well-known techniques to encrypt messages longer than the output length of a block cipher. The message is first cut into blocks, and the mode of operation allows the blocks to be encrypted securely. The resulting construction is called an encryption scheme. Specific properties are achieved by some of these modes: self-synchronization, ensured by chained modes such as CBC and CFB [11], or efficient encryption throughput, ensured by parallelized modes such as ECB and OFB [11]. Two different techniques are mainly used to build these schemes. The first one directly outputs the blocks produced by the block cipher (ECB, CBC). The second uses the block cipher to generate random strings which are then XORed with the message blocks (CTR [1], OFB, CFB).

In this paper we investigate the security of the classical modes of operation in a more realistic and practical scenario than previous studies. In cryptography, security is usually defined by the combination of a security goal and an adversarial model. The security goal of an encryption scheme is privacy. Informally speaking, privacy of an encryption scheme guarantees that, given a ciphertext, an adversary is not able to learn any information about the corresponding plaintext. Goldwasser and Micali have formalized this notion in [5], where it has been called semantic security. An equivalent definition, called indistinguishability of encryptions (IND), has also been more extensively studied in [1] for the symmetric encryption setting: given two equal-length messages M0

T. Johansson (Ed.): FSE 2003, LNCS 2887, pp. 362–375, 2003.
© International Association for Cryptologic Research 2003
and M1 chosen by the adversary, and the encryption C of one of them, it is difficult for the adversary to distinguish whether C is the encryption of M0 or of M1.

In practical scenarios, adversary goals can differ from this theoretical notion of privacy. For example, the adversary can try to recover the secret key, or to recover the plaintext underlying a given ciphertext. However, from a security point of view, if the scheme is secure under the IND security notion, key recovery or plaintext recovery cannot be achieved by the adversary. It is worth noticing that a security proof for an encryption mode is not an absolute proof of security. As often in cryptography, proofs are made by reduction, in the complexity-theoretic sense, between the security of the scheme and the security of the block cipher used in the encryption scheme. In practice, such a proof shows that the mode achieves the security goal assuming the security of the underlying block cipher.

Orthogonally to the security goal, the adversarial model defines the adversary's abilities. The considered adversarial models are known plaintext attacks, chosen plaintext attacks (CPA) and chosen ciphertext attacks (CCA). In these scenarios, the adversaries have access to an encryption oracle, queried with known or chosen messages, and/or a decryption oracle, queried with ciphertexts that may be chosen according to the previous pairs of plaintexts and ciphertexts. In the sequel we consider schemes secure against chosen plaintext attacks, such as CBC or CFB. We do not take into account schemes secure against chosen ciphertext attacks, such as OCB [12], IACBC, IAPM [9] or XCBC [4]. Usually, it is implicitly assumed that messages sent to the encryption oracle are atomic entities. However, in the real world, the encryption module can be a cryptographic accelerator or a smart card with limited memory. Thus, ciphertext blocks are output by the module before the whole message has been received.
Practical applications are thus far from the theoretical security model. Recently, Joux, Martinet and Valette [8] have proposed to change the adversary's interactions with the encryption oracle, to better model on-line symmetric encryption schemes. Such a scheme can output the ciphertext block C[i] just after the introduction of the block M[i], without knowledge of the whole message. Many modes of operation have this nice property. Therefore, from the attacker's side, adversaries in the IND security game can adapt the message blocks according to the previously received ciphertext blocks. The same notion, concerning integrity in real-time applications, has been used by Gennaro and Rohatgi [3].

The blockwise adversarial model presented in [8] is used to break the IND-CPA security of some encryption schemes that are provably secure in the standard model. For example, in order to encrypt a message M = M[1]M[2]...M[ℓ] with the CBC encryption mode, a random initial vector C[0] = IV is chosen, and for all 1 ≤ i ≤ ℓ, C[i] = E_K(M[i] ⊕ C[i − 1]). In [1], Bellare et al. have shown that, in the standard model, the CBC encryption scheme is IND-CPA secure up to the encryption of 2^{n/2} blocks, where n denotes the block length of the block cipher E_K. However, in [8], Joux et al. have shown that the CBC encryption mode cannot be IND secure in the blockwise adversarial model: two-block messages M0 and M1 suffice for the adversary to win the semantic security game. Indeed, if
the same input is given twice to the block cipher, the same result is output. Consequently, in the IND security game, if the adversary knows the initial vector C[0] = IV and the first ciphertext block C[1], he can adaptively choose M0[2] as C[1] ⊕ C[0] ⊕ M0[1] and a random value for M1[2]. Then, if the second ciphertext block C[2] is such that C[2] = C[1], the ciphertext C = C[0]C[1]C[2] is the encryption of M0; otherwise it is the encryption of M1. This attack works because the adversary can adapt his message blocks according to the output blocks. In the standard model, as the messages are chosen before the ciphertext is returned by the oracle, the probability that such a collision occurs in the inputs of the block cipher is upper-bounded by µ²/2^n, where µ denotes the number of blocks encrypted with the same key. While µ remains small enough, the probability is negligible and the mode of encryption is secure.

From a practical point of view, the blockwise attack on the CBC encryption scheme is as efficient as an attack on the ECB encryption scheme in the standard model. Indeed, for both the ECB mode in the standard model and the CBC mode in the blockwise model, the adversary knows inputs and outputs of the block cipher. For the ECB mode, he can then adapt his messages to force a collision; for the CBC mode, he adapts the message blocks. It is worth noticing that in both cases a key recovery attack on the block cipher is possible. Such an attack only requires the encryption of some chosen plaintext blocks. For example, a dictionary attack on the block cipher can be mounted (see for example [10]). In this kind of attack, the adversary precomputes the encryption of a plaintext block P under all keys and stores them in a table. Then, if he knows the encryption of P under the key used in the block cipher, he just looks in his table to recover the secret key.
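The two-block blockwise attack on CBC described above can be run end to end in a short sketch; the "block cipher" here is a toy deterministic stand-in built from hashlib (all names and sizes are illustrative, not from the paper):

```python
import hashlib
import os
import secrets

BLK = 16

def E(key: bytes, x: bytes) -> bytes:
    # Toy deterministic "block cipher": equal inputs give equal outputs,
    # which is the only property the attack uses.
    return hashlib.sha256(key + x).digest()[:BLK]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(u ^ v for u, v in zip(a, b))

class BlockwiseCBCOracle:
    """CBC left-or-right oracle that releases each C[i] immediately."""

    def __init__(self):
        self.key = os.urandom(BLK)
        self.b = secrets.randbits(1)   # hidden bit the adversary must guess
        self.iv = os.urandom(BLK)      # C[0] = IV, released to the adversary
        self.prev = self.iv

    def encrypt_block(self, m0: bytes, m1: bytes) -> bytes:
        m = m0 if self.b == 0 else m1
        self.prev = E(self.key, xor(m, self.prev))
        return self.prev

oracle = BlockwiseCBCOracle()
m0_1 = m1_1 = bytes(BLK)              # identical first blocks for M0 and M1
c0 = oracle.iv
c1 = oracle.encrypt_block(m0_1, m1_1)
m0_2 = xor(xor(c1, c0), m0_1)         # forces E's input to repeat if b = 0
m1_2 = os.urandom(BLK)                # random second block for M1
c2 = oracle.encrypt_block(m0_2, m1_2)
guess = 0 if c2 == c1 else 1          # collision <=> M0 was encrypted
```

If b = 0, the second input to E is M0[2] ⊕ C[1] = C[0] ⊕ M0[1], the same input that produced C[1], so C[2] = C[1]; if b = 1 this happens only with negligible probability.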
Moreover, the time/memory tradeoff of Hellman [7] can be adapted to reduce the memory required by the dictionary attack. Blockwise attacks therefore need to be taken into account in practical uses, since they are not only theoretical but pave the way to more practical and serious attacks.

Our results. In this paper we study the security of some well-known encryption modes against blockwise adversaries. In a first part we show how to secure the CBC encryption mode. The countermeasure we propose simply consists in delaying the output blocks. This modified scheme, called delayed CBC (DCBC), is proved secure against blockwise adaptive adversaries mounting chosen plaintext attacks. Furthermore, this modification can be applied to secure several modes of operation. In a second part, we show that the CFB (Ciphertext FeedBack) encryption mode is secure in this new model without any change. We also give in the appendices a rigorous proof of the security of the DCBC and CFB modes.
2

Preliminaries

2.1

Notations
In the sequel, standard notations are used to denote probabilistic algorithms and experiments. If A is a probabilistic algorithm, then the result of running A on inputs x1 , x2 , . . . and coins r will be denoted by A(x1 , x2 , . . . ; r). We let
y ← A(x1, x2, ...; r) denote the experiment of picking r at random and letting y be A(x1, x2, ...; r). If S is a finite set, then x ← S is the operation of picking an element uniformly from S. If α is neither an algorithm nor a set, then x ← α is a simple assignment statement. We say that y can be output by A if there is some r such that A(x1, x2, ...; r) = y. If p(x1, x2, ...) is a predicate, the notation Pr[x1 ← S; x2 ← A(x1, y2, ...); ... : p(x1, x2, ...)] denotes the probability that p(x1, x2, ...) is true after ordered execution of the listed experiments. Recall that a function ε : N → R is negligible if for every constant c ≥ 0 there exists an integer k_c such that ε(k) ≤ k^{−c} for all k ≥ k_c. The set of all functions from {0,1}^m to {0,1}^n is denoted by R_{m→n}. The set of all permutations of {0,1}^n is denoted by Perm_n.

2.2
Security Model
Security of a symmetric encryption scheme is viewed as indistinguishability of ciphertexts under chosen plaintext attacks. However, the recent attacks on some schemes proved secure in the standard model show that a new adversarial model has to be defined. The new kind of adversaries, introduced in [8], are adaptive during a query, according to the previously received ciphertext blocks. The security model has to take these adversaries into account, as they are realistic from an implementation point of view. The difference with the standard model is that here the queries are made on the fly: for each plaintext block received, the oracle outputs a ciphertext block. This better models on-line encryption. It is then natural to consider a further kind of interaction induced by this model: since the adversary does not send the whole plaintext in a single query, so that he can adapt the next plaintext block according to the ciphertext he receives, one can also suppose that the adversary may interleave the queries. In this case, the attacker is able to query the oracle for the encryption of a new message even if the previous encryption is not finished. This introduces concurrent queries. The security model is thus modified in depth, and the security of known schemes has to be carefully re-evaluated in this new model.

Formally, in this model, the adversary, denoted by A in the sequel, is given access to a blockwise concurrent encryption left-or-right oracle. This oracle is queried with inputs of the form (M_0^i[j], M_1^i[j]), where M_0^i[j] and M_1^i[j] are two plaintext blocks. At the beginning of the game, this oracle flips a bit b at random. If b = 0 it will always encrypt M_0^i[j]; otherwise, if b = 1, it will encrypt M_1^i[j]. The corresponding ciphertext block C_b^i[j] is returned to the adversary, whose goal is to guess which message has been encrypted.
We thus define the blockwise concurrent encryption left-or-right oracle, denoted by E^{bl,c}_K(LR(·, ·, b, i)), to take as input two plaintext blocks M_0^i[j] and M_1^i[j] along with the number i of the query, and to encrypt M_b^i[j]. We now give the formal description of the attack scenario:
Expt^{lorc-bcpa(b)}_{SE,A}(k):
  K ←_R K(k)
  d ← A^{E^{bl,c}_K(LR(·,·,b,·))}(k)
  Return d

The adversary's advantage in winning the LORC-BCPA game is defined as:

Adv^{lorc-bcpa}_{SE,A}(k) = 2 · Pr[Expt^{lorc-bcpa(b)}_{SE,A}(k) = 1] − 1

We define Adv^{lorc-bcpa}_{SE}(k, t, q, µ) = max_A {Adv^{lorc-bcpa}_{SE,A}(k)}, where the maximum is over all legitimate A having time-complexity t and making at most q encryption queries, totaling µ blocks, to the concurrent oracles. A secret-key encryption scheme SE is said to be lor-secure against concurrent blockwise adaptive chosen plaintext attacks (LORC-BCPA) if, for all polynomial-time probabilistic adversaries, the advantage in this guessing game is negligible as a function of the security parameter k. In this case, SE is said to be LORC-BCPA secure.

The security of a block cipher is viewed as indistinguishability from random permutations, as defined for example in [1]. The attack scenario for the adversary is to distinguish the outputs of a permutation randomly chosen in Perm_n from the outputs of a permutation randomly chosen in the family P of all permutations induced by a given block cipher. The adversary's advantage in winning this game is denoted by Adv^{prp}_P(k, t, q). Following the same idea, the security of a pseudorandom function f, randomly chosen in a given family F of functions of input length m and output length n, is its indistinguishability from a random function of R_{m→n}. The attacker's game is the same as above, except that permutations are replaced by functions. The adversary's advantage in winning this game is denoted by Adv^{prf}_F(k, t, q).
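The experiment above can be mirrored by a small harness; the scheme_factory/adversary interfaces and the estimate_advantage helper are hypothetical conveniences for experimentation, not part of the paper's formalism:

```python
import secrets

def expt_lorc_bcpa(scheme_factory, adversary):
    """One run of Expt^{lorc-bcpa(b)}_{SE,A}: returns (guess d, hidden bit b)."""
    b = secrets.randbits(1)              # the oracle flips its bit once
    scheme = scheme_factory()            # one key K for the whole experiment

    def lr_oracle(i, m0_block, m1_block):
        # E^{bl,c}_K(LR(.,.,b,i)): encrypt the b-side block of query i,
        # releasing the ciphertext block immediately (on-line encryption).
        return scheme.encrypt_block(i, m0_block if b == 0 else m1_block)

    return adversary(lr_oracle), b

def estimate_advantage(scheme_factory, adversary, runs=200):
    """Empirical estimate of 2*Pr[d = b] - 1 over several runs."""
    wins = sum(d == b for d, b in
               (expt_lorc_bcpa(scheme_factory, adversary) for _ in range(runs)))
    return 2 * wins / runs - 1
```

A trivially leaky "scheme" whose encrypt_block returns the plaintext lets any adversary reach advantage 1, while a secure scheme should keep the estimate near 0.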
3
Blockwise Secure Encryption Schemes
In this section, we propose two modes of encryption that withstand blockwise adversaries. These modes are well known and simple: the CFB encryption scheme and a variant of the CBC are secure against the powerful adversaries we consider. The complete security proofs are given in the appendices; in this section we only summarize the security results and their implications for the use of these modes of encryption.

3.1
A Blockwise Secure Variant of the CBC: The Delayed CBC
Description. The CBC mode of encryption, probably the most currently used in practical applications, suffers from strong weaknesses in the blockwise adversarial model, as it has been shown in [8]. The main reason is that the security of modes of operation is closely related to the probability of collision in the inputs of the underlying block cipher. As shown by the attacks presented in [8], blockwise
Fig. 1. The Delayed CBC encryption mode.
adversaries can choose the message blocks according to the previously revealed ciphertext blocks, so that they can force such a collision. This kind of adversary is realistic if the output blocks are gradually released outside the cryptographic component. A simple countermeasure to prevent an adversary from having access to the previous ciphertext block is to delay the output by one single block. Consequently, an attacker can no longer adapt the message blocks. More precisely, we slightly modify the encryption algorithm in such a way that the encryption module delays the output by one block, i.e., instead of outputting C[i] just after the introduction of M[i], C[i] is output after the introduction of M[i + 1]. This modification in the encryption process is efficient and does not require any modification of the scheme; ciphertexts produced by a device implementing the delayed CBC mode are compatible with those produced by standard ones.

A detailed description of this scheme, called Delayed CBC or simply DCBC, is given below and is also depicted in Figure 1. We assume that each block is numbered from 1 to ℓ and that the end of the encryption is indicated by sending a special block M[ℓ + 1] = stop. If the decryption algorithm does not have to output a block, it sends, as an acknowledgment, a special block "Ack". Of course, the index i is only given to simplify the description of the algorithm; in practice this counter should be handled by the encryption module. In other words, we do not consider attacks based on false values of i, since they do not have any practical significance. In the following, E_K(.) will be denoted by E(K, .).

Function E-DCBC_E(K, M[i], i)
  If i = 1, then IV ← {0,1}^n, C[0] = IV
    C[1] = E(K, C[0] ⊕ M[1])
    Return C[0]
  Else if M[i] = stop
    Return C[i − 1]
  Else
    C[i] = E(K, C[i − 1] ⊕ M[i])
    Return C[i − 1]

Function D-DCBC_E(K, C[i], i)
  If i = 0
    Return Ack
  Else
    Return C[i − 1] ⊕ E^{−1}(K, C[i])
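A minimal sketch of the E-DCBC behaviour follows, with a toy stand-in for the block cipher E_K (illustrative names and a 16-byte block; a sketch of the delaying logic, not a secure implementation):

```python
import hashlib
import os

BLK = 16  # block length n in bytes (illustrative)

def E(key: bytes, x: bytes) -> bytes:
    # Toy deterministic stand-in for the block cipher E_K; only the
    # encryption direction is needed for this sketch.
    return hashlib.sha256(key + x).digest()[:BLK]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(u ^ v for u, v in zip(a, b))

class DelayedCBC:
    """On-line CBC that releases C[i] only once M[i+1] has arrived."""

    def __init__(self, key: bytes):
        self.key = key
        self.prev = None          # last computed block, not yet released

    def encrypt_block(self, m):
        if self.prev is None:     # i = 1: pick the IV and release C[0]
            iv = os.urandom(BLK)  # C[0] = IV
            self.prev = E(self.key, xor(iv, m))   # C[1], held back
            return iv
        if m is None:             # the "stop" block: flush the final C[l]
            return self.prev
        out, self.prev = self.prev, E(self.key, xor(self.prev, m))
        return out
```

When the adversary submits M[i], the oracle answers with C[i − 1] only, so the adaptive block-choosing strategy from the CBC attack no longer applies.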
Note that the decryption process is unchanged compared to the standard CBC encryption mode. Indeed, there is no need to delay the output block in the decryption phase, since the adversary is not given any access to a decryption oracle for chosen plaintext attacks. Furthermore, since the DCBC does not provide chosen ciphertext security, in either the standard or the blockwise model, the decryption process does not need to be modified.

Blockwise Security of the DCBC Encryption Mode. In Appendix A, we analyze the security of the DCBC against blockwise concurrent adversaries mounting chosen plaintext attacks. Intuitively, it is easy to see that a blockwise adversary cannot adapt the plaintext blocks according to the previously returned ciphertext blocks, since it does not know C[i − 1] when submitting M[i]. Furthermore, the knowledge of the previous blocks C[0], ..., C[i − 2] does not help him to predict the i-th input C[i − 1] ⊕ M[i] of the block cipher, as long as the total number µ of blocks encrypted with the same key K is not too large. The security proof shows that the advantage of an adversary is at most increased by a term µ²/2^n. In other words, DCBC is provably secure in the blockwise model, assuming the security of the underlying block cipher, while the total number of blocks encrypted with the same key is much smaller than 2^{n/2}. The security of the DCBC encryption mode is given in the following theorem:

Theorem 1. Let P be a family of pseudorandom permutations of input and output length n, where each permutation is indexed with a k-bit key. If E is drawn at random from the family P, then the DCBC encryption scheme is LORC-BCPA secure. Furthermore, for any t, q and µ ≥ 0, we have:

Adv^{lorc-bcpa}_{DCBC}(k, t, q, µ) ≤ 2 · Adv^{prp}_P(k, t, µ) + µ²/2^{n−1}

It is important to notice that this security bound is similar to the one obtained in the standard model for the CBC mode [1]. This means that the delayed CBC is as secure in the blockwise model as the classical CBC encryption scheme is in the standard model.

3.2
CFB Encryption Scheme
A review of the most classical modes of operation shows that one of them, the CFB mode [11], is naturally immune against blockwise attacks.

Description. The CFB encryption mode is based on a function F, indexed by a key K, taking n-bit blocks as input and outputting n-bit blocks. This function F does not need to be a permutation, i.e., it does not need to be implemented using a block cipher. For example the construction of Hall et al. [6], proved by Bellare and Impagliazzo in [2], can be used. In the following, F_K(.) will be denoted by f(K, .). A detailed description of this scheme is given below and is also depicted in Figure 2, using the same conventions as for DCBC.
Fig. 2. The CFB encryption mode.

Function E-CFB_f(K, M[i], i)
  If i = 1, then IV ← {0,1}^n, C[0] = IV
    C[1] = f(K, C[0]) ⊕ M[1]
    Return C[0] and C[1]
  Else
    C[i] = f(K, C[i − 1]) ⊕ M[i]
    Return C[i]

Function D-CFB_f(K, C[i], i)
  If i = 0
    Return Ack
  Else
    Return C[i] ⊕ f(K, C[i − 1])
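The E-CFB and D-CFB routines translate into a short sketch; f below is a toy keyed stand-in for the PRF F_K built from hashlib (illustrative only, not secure), and note that decryption uses f in the forward direction only:

```python
import hashlib
import os

BLK = 16  # block length n in bytes (illustrative)

def f(key: bytes, x: bytes) -> bytes:
    # Toy keyed function standing in for F_K; it need not be invertible,
    # matching the remark that F need not be a permutation.
    return hashlib.sha256(key + x).digest()[:BLK]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(u ^ v for u, v in zip(a, b))

def cfb_encrypt(key, blocks, iv=None):
    """On-line CFB: C[i] = f(K, C[i-1]) xor M[i]; C[0] = IV is released too."""
    c = [iv if iv is not None else os.urandom(BLK)]
    for m in blocks:
        c.append(xor(f(key, c[-1]), m))
    return c

def cfb_decrypt(key, c):
    """M[i] = C[i] xor f(K, C[i-1]); only the forward direction of f is used."""
    return [xor(c[i], f(key, c[i - 1])) for i in range(1, len(c))]
```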
We stress that we have not modified the original CFB mode; we only recall it here for completeness. Blockwise Security of the CFB Encryption Mode. In Appendix B, we analyze the security of the CFB mode against blockwise concurrent adversaries mounting chosen plaintext attacks. Intuitively, a blockwise adversary cannot adapt the plaintext blocks so as to force the input of the function f as long as the ciphertext blocks are all pairwise distinct. If no adaptive strategy is efficient, the inputs of f behave like random values and the system is secure until a collision occurs at the output of this function. If the total number µ of blocks encrypted with the same key K is not too large, i.e., much smaller than the square root of 2^n, this event happens only with negligible probability. The security proof formalizes these ideas and shows that the advantage of an adversary is increased by at most a term µ²/2^{n−1}, as for DCBC. In other words, the CFB mode is provably secure in the blockwise model, assuming the security of the underlying block cipher (or function), as long as the total number of blocks encrypted with the same key is much smaller than 2^{n/2}.

Theorem 2 (Security of the CFB mode of operation). Let F be a family of pseudorandom functions with input and output length n, where each function is indexed with a k-bit key. If the CFB encryption scheme is used with a function f chosen at random in the family F, then, for all integers t, q, µ ≥ 0, we have:

Adv^{lorc−bcpa}_{CFB}(k, t, q, µ) ≤ 2 · Adv^{prf}_{F}(k, t, µ) + µ²/2^{n−1}
Such a bound is tight, since practical attacks against the indistinguishability of the mode can be mounted as soon as more than 2^{n/2} blocks are encrypted. In practice, notice that with 64-bit block ciphers such as DES or triple-DES, this bound of 2^32 blocks could quickly be reached in some applications based on high-speed networks. A block cipher, rather than a pseudorandom function, can be used in the CFB mode, as specified in [11]. Indeed, a secure block cipher behaves like a pseudorandom function up to the encryption of about 2^{n/2} blocks.
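The effect of the µ²/2^{n−1} term can be checked by direct arithmetic. Taking n = 64 (the DES/triple-DES block size) for illustration, the bound becomes vacuous well before 2^32 encrypted blocks:

```python
from fractions import Fraction

def cfb_bound_term(mu: int, n: int) -> Fraction:
    # Collision term mu^2 / 2^(n-1) from the LORC-BCPA bound of Theorem 2,
    # computed exactly as a rational number.
    return Fraction(mu * mu, 2 ** (n - 1))

n = 64
print(cfb_bound_term(2 ** 20, n))  # 2^40 / 2^63 = 2^-23: still negligible
print(cfb_bound_term(2 ** 31, n))  # 2^62 / 2^63 = 1/2: security is essentially gone
print(cfb_bound_term(2 ** 32, n))  # 2^64 / 2^63 = 2: the bound is vacuous
```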
References

1. M. Bellare, A. Desai, E. Jokipii, and P. Rogaway. A Concrete Security Treatment of Symmetric Encryption. In Proceedings of the 38th Symposium on Foundations of Computer Science. IEEE, 1997.
2. M. Bellare and R. Impagliazzo. A tool for obtaining tighter security analysis of pseudorandom function based constructions, with applications to PRP → PRF conversion. Manuscript available at http://www-cse.ucsd.edu/users/russell, February 1999.
3. R. Gennaro and P. Rohatgi. How to Sign Digital Streams. In B. Kaliski, editor, Advances in Cryptology – Crypto'97, volume 1294 of LNCS, pages 180–197. Springer-Verlag, 1997.
4. V.D. Gligor and P. Donescu. Fast Encryption and Authentication: XCBC and XECB Authentication Modes. In M. Matsui, editor, Fast Software Encryption 2001, volume 2355 of LNCS, pages 92–108. Springer-Verlag, 2001.
5. S. Goldwasser and S. Micali. Probabilistic Encryption. Journal of Computer and System Sciences, 28:270–299, 1984.
6. C. Hall, D. Wagner, J. Kelsey, and B. Schneier. Building PRFs from PRPs. In H. Krawczyk, editor, Advances in Cryptology – Crypto'98, volume 1462 of LNCS, pages 370–389. Springer-Verlag, 1998.
7. M. E. Hellman. A Cryptanalytic Time-Memory Trade-Off. IEEE Transactions on Information Theory, IT-26(4):401–406, 1980.
8. A. Joux, G. Martinet, and F. Valette. Blockwise-Adaptive Attackers. Revisiting the (In)security of Some Provably Secure Encryption Modes: CBC, GEM, IACBC. In M. Yung, editor, Advances in Cryptology – Crypto'02, volume 2442 of LNCS, pages 17–30. Springer-Verlag, 2002.
9. C. Jutla. Encryption Modes with Almost Free Message Integrity. In B. Pfitzmann, editor, Advances in Cryptology – Eurocrypt'01, volume 2045 of LNCS, pages 529–544. Springer-Verlag, 2001.
10. A. Menezes, P. van Oorschot, and S. Vanstone. Handbook of Applied Cryptography. CRC Press, 1996.
11. NIST. FIPS PUB 81 – DES Modes of Operation, December 1980.
12. P. Rogaway, M. Bellare, J. Black, and T. Krovetz. OCB: A Block-Cipher Mode of Operation for Efficient Authenticated Encryption. In Proceedings of the 8th ACM Conference on Computer and Communications Security. ACM Press, 2001.
A   Security Proof for the DCBC Encryption Scheme
We recall the following theorem giving the security bound for the DCBC encryption scheme, in the security model defined in section 3.1.
Theorem 3. Let P be a family of pseudorandom permutations with input and output length n, where each permutation is indexed with a k-bit key. If E is drawn at random in the family P, then the DCBC encryption scheme is LORC-BCPA secure. Furthermore, for any t, q and µ ≥ 0, we have:

Adv^{lorc−bcpa}_{DCBC}(k, t, q, µ) ≤ 2 · Adv^{prp}_{P}(k, t, µ) + µ²/2^{n−1}

Proof. The proof goes by contradiction. Assume that there exists an adversary A against the DCBC encryption scheme with non-negligible advantage. From this adversary, we construct an attacker B that distinguishes the block cipher E_K used in DCBC, and randomly chosen in the family P, from a random permutation with non-negligible advantage. More precisely, the attacker B interacts with a permutation oracle that chooses a bit b and, if b = 1, chooses f as a permutation drawn from the set Perm_n of all permutations. Otherwise, if b = 0, it runs the key generation algorithm K(1^k), obtains a key K and sets f = E_K. The goal of B is to guess the bit b with non-negligible advantage. To this end, B uses the adversary A, and consequently B has to simulate the environment of A. First, B chooses a bit b' at random and runs A. B has to concurrently answer the block encryption queries of the LORC game. When A submits a pair of input blocks (M_0^i[j], M_1^i[j]), B always encrypts the block M_{b'}^i[j] ⊕ C_{b'}^i[j−1] under the DCBC encryption mode thanks to the permutation oracle, yielding C_{b'}^i[j], and returns C_{b'}^i[j−1] to A. Finally, A returns a bit b'' and, if b'' = b', then B returns b* = 0; otherwise, B returns b* = 1 to the oracle. The advantage of A in winning the LORC game is defined as:

Adv^{lorc−bcpa}_{DCBC,A}(k) = 2 · Pr[Expt^{lorc−bcpa(b')}_{DCBC,A}(k) = 1] − 1
                            = 2 · Pr[b'' = b' | K ← K(1^k), f = E_K] − 1

It is easy to verify that the attacker B can simulate the concurrent lor-encryption oracle for the adversary A, since B has access to a permutation f and can run the DCBC encryption mode. The advantage of B in winning his game is defined as:

Adv^{prp}_{P,B}(k) = | Pr[b* = 0 | b = 0] − Pr[b* = 0 | b = 1] |
                   = | Pr[b'' = b' | b = 0] − Pr[b'' = b' | b = 1] |
                   = Pr[b'' = b' | K ← K(1^k), f = E_K] − Pr[b'' = b' | f ← Perm_n]
                   ≥ (1 + Adv^{lorc−bcpa}_{DCBC,A}(k))/2 − Pr[b'' = b' | f ← Perm_n]

Let us now analyze Pr[b'' = b' | f ← Perm_n]. We denote by D the event that all the inputs to the permutation f are distinct. Thus we have:

Pr[b'' = b' | f ← Perm_n] = Pr[b'' = b' | f ← Perm_n ∧ D] · Pr[D] + Pr[b'' = b' | f ← Perm_n ∧ ¬D] · Pr[¬D]
                          ≤ 1/2 · (1 − Pr[¬D]) + (1 − 1/2^n) · Pr[¬D]
This last inequality comes from the fact that if f is a permutation chosen at random from the set of all permutations and no collision occurs, the outputs of f are independent of the input blocks M_0^i[j] and M_1^i[j], and the adversary A has no advantage in winning the LORC game. Therefore, Pr[b'' = b' | f ← Perm_n ∧ D] = 1/2. Otherwise, if a collision occurs, there exist i, i', j, j' with (i, j) ≠ (i', j') such that C_{b'}^i[j] = C_{b'}^{i'}[j'], and then, since A knows all the plaintext blocks (M_0^i, M_1^i) and the corresponding ciphertext blocks C_{b'}^i, he can decide whether M_0^i[j] ⊕ M_0^{i'}[j'] = C_{b'}^i[j−1] ⊕ C_{b'}^{i'}[j'−1] or whether M_1^i[j] ⊕ M_1^{i'}[j'] = C_{b'}^i[j−1] ⊕ C_{b'}^{i'}[j'−1]. However, with probability 1/2^n, we have M_0^i[j] ⊕ M_0^{i'}[j'] = M_1^i[j] ⊕ M_1^{i'}[j'] if (M_0^i, M_1^i) are chosen at random. Thus, in any way, A wins his game in this case and we have Pr[b'' = b' | f ← Perm_n ∧ ¬D] ≤ 1 − 1/2^n. So, we get:

Pr[b'' = b' | f ← Perm_n] ≤ 1/2 + (1/2 − 1/2^n) · Pr[¬D]

Now, let us bound the probability that a collision occurs. The lemma below shows that if µ is the number of encrypted blocks, then Pr[¬D] ≤ µ(µ−1)/2^{n−1}. Consequently, the advantage of the attacker B is related to the advantage of the adversary A:

Adv^{prp}_{P,B}(k) ≥ (1 + Adv^{lorc−bcpa}_{DCBC,A}(k))/2 − 1/2 − (1/2 − 1/2^n) · Pr[¬D]
                   = Adv^{lorc−bcpa}_{DCBC,A}(k)/2 − (1/2 − 1/2^n) · Pr[¬D]

Consequently, we obtain

Adv^{lorc−bcpa}_{DCBC,A}(k) ≤ 2 · Adv^{prp}_{P,B}(k) + (1 − 1/2^{n−1}) · Pr[¬D]
                            ≤ 2 · Adv^{prp}_{P,B}(k) + (1 − 1/2^{n−1}) · µ(µ−1)/2^{n−1}
and the theorem follows. To conclude the proof, we have to prove the following lemma.

Lemma 1. Pr[¬D] ≤ µ(µ−1)/2^{n−1}.
Proof. We note that Pr[¬D] = Pr[Coll_µ], where Coll_µ denotes the event that a collision occurs on the inputs of the function f during the encryption of the µ blocks. Consequently,

Pr[Coll_µ] = Pr[Coll_µ ∧ ¬Coll_{µ−1}] + Pr[Coll_µ ∧ Coll_{µ−1}]
           = Pr[Coll_µ | ¬Coll_{µ−1}] · Pr[¬Coll_{µ−1}] + Pr[Coll_{µ−1}]
           ≤ Pr[Coll_µ | ¬Coll_{µ−1}] + Pr[Coll_{µ−1}]
           ≤ Σ_{k=1}^{µ} Pr[Coll_k | ¬Coll_{k−1}]
We now prove that Pr[Coll_k | ¬Coll_{k−1}] = 2(k−1)/(2^n − (k−1)). This is the probability that a collision occurs on the input of the function f at the k-th block, given that no collision appeared before. We have Pr[Coll_k ∧ ¬Coll_{k−1}] = 2(k−1)/2^n, since there are 2(k−1) previous, pairwise distinct values of M_{b'}^i[j] ⊕ C^i[j−1] that the k-th input can hit (as no collision occurred before the (k−1)-th step). The factor 2 comes from the fact that there are two messages, M_0 and M_1: if a collision occurs for either of them, the adversary wins the game. The adversary cannot force a collision at the k-th block: indeed, he does not know the output of the (k−1)-th block, and this output of the function f is independent of the (k−1) inputs known by the adversary. Furthermore, there are 2^n possible values of M^i[j] ⊕ C^i[j−1].

We also have Pr[¬Coll_{k−1}] = (2^n − (k−1))/2^n, since there remain 2^n − (k−1) available values for M^i[j] ⊕ C^i[j−1] out of the 2^n choices (f is a permutation). Consequently, for k = 1, …, µ, we get:

Pr[Coll_k | ¬Coll_{k−1}] = (2(k−1)/2^n) / ((2^n − (k−1))/2^n) = 2(k−1)/(2^n − (k−1))

Thus, if µ ≤ 2^{n−1},

Pr[Coll_µ] ≤ Σ_{k=1}^{µ} Pr[Coll_k | ¬Coll_{k−1}] = Σ_{k=1}^{µ} 2(k−1)/(2^n − (k−1)) = Σ_{k=0}^{µ−1} 2k/(2^n − k)
           ≤ Σ_{k=0}^{µ−1} 2k/(2^n − 2^{n−1}) = Σ_{k=0}^{µ−1} 2k/2^{n−1} = µ(µ−1)/2^{n−1}
and the lemma is proved.
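A quick Monte Carlo check with toy parameters (n = 16 and µ = 64, our choices for illustration; the proof argues that the µ inputs of f behave like uniform values) confirms that the empirical collision probability indeed stays below the µ(µ−1)/2^{n−1} bound of Lemma 1:

```python
import random

def collision_prob_estimate(n: int, mu: int, trials: int, seed: int = 1) -> float:
    # Estimate Pr[some collision among mu uniform n-bit values] by repeated
    # sampling: a trial has a collision iff the sampled set has < mu elements.
    rng = random.Random(seed)
    hits = sum(
        len({rng.randrange(2 ** n) for _ in range(mu)}) < mu
        for _ in range(trials)
    )
    return hits / trials

n, mu = 16, 64
bound = mu * (mu - 1) / 2 ** (n - 1)               # Lemma 1 bound, ~0.123
estimate = collision_prob_estimate(n, mu, trials=2000)
assert 0 < estimate < bound
```

The true collision probability here is roughly 1 − exp(−µ²/2^{n+1}) ≈ 0.03, so the lemma's bound is loose by a factor of about four, as expected from the birthday heuristic.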
B   Security Proof for the CFB Encryption Mode
The following theorem gives the security bound for the CFB encryption scheme against concurrent blockwise-adaptive adversaries.

Theorem 4 (Security of the CFB mode of operation). Let F be a family of pseudorandom functions with input and output length n, where each function is indexed with a k-bit key. If the CFB encryption scheme is used with a function f chosen at random in the family F, then, for all integers t, q, µ ≥ 0, we have:

Adv^{lorc−bcpa}_{CFB}(k, t, q, µ) ≤ 2 · Adv^{prf}_{F}(k, t, µ) + µ²/2^{n−1}
Proof. We consider an adversary A against the CFB mode, trying to win the LORC-BCPA security game. We show that this adversary can be turned into an adversary B trying to distinguish the function F_K from a random function chosen in R_{n→n}. The attack scenario for A is as defined in section 2.2. B has to simulate the environment of A by using his own oracle. Indeed, B has access
to an oracle O_f, defined as follows: at the beginning of the game, O_f picks a bit b at random. If b = 0, then he chooses at random a key K for the function F ∈ F and lets f = F_K. Otherwise, if b = 1, then f is a random function chosen in the set R_{n→n} of all functions from {0,1}^n into {0,1}^n. B has to guess the bit b with non-negligible advantage. We now describe precisely how the adversary B answers the encryption queries made by A. First, B picks a bit b' at random. A feeds his encryption oracle with queries of the form (M_0^i[j], M_1^i[j]), where M_{b'}^i[j] is the j-th block of the i-th query. Note that queries can be interleaved, so that some of the previous queries are not necessarily finished at this step. When B receives such a query, if j = 1, then B picks a value R^i at random, sends it to O_f and receives f(R^i). If j ≠ 1, then B transmits C_{b'}^i[j−1] to O_f and receives f(C_{b'}^i[j−1]). Finally, according to the value of j, B returns to A either C_{b'}^i[j] = M_{b'}^i[j] ⊕ f(C_{b'}^i[j−1]), or R^i along with C_{b'}^i[1] = M_{b'}^i[1] ⊕ f(R^i). At the end of the game, A returns a bit b'' representing his guess for the bit b'. Then B outputs a bit b* representing his guess for the bit b chosen by O_f, such that b* = 0 if b'' = b', and b* = 1 otherwise. We have to evaluate Adv^{prf}_{F,B}(k). We have:

Adv^{prf}_{F,B}(k) = | Pr[b* = 0 | b = 0] − Pr[b* = 0 | b = 1] |
                   = | Pr[b'' = b' | f ← F] − Pr[b'' = b' | f ← R_{n→n}] |
                   ≥ (1 + Adv^{lorc−bcpa}_{CFB,A}(k))/2 − Pr[b'' = b' | f ← R_{n→n}]    (1)
Thus, Adv^{lorc−bcpa}_{CFB,A}(k) ≤ 2 · Adv^{prf}_{F}(k) + 2 · Pr[b'' = b' | f ← R_{n→n}] − 1, and it remains to upper-bound Pr[b'' = b' | f ← R_{n→n}]. As in the previous proof, for the security of the DCBC encryption scheme, we look at the collisions that can occur on the inputs of the function f. Indeed, if no such collision appears, then the advantage of the adversary A in winning his game equals 0. However, if such a collision occurs, then the adversary can easily detect it and consequently adapt the following plaintext block so as to distinguish which of the messages is encrypted. Thus, in this case, the adversary wins the game. We denote by Coll the event that some collision appears on the inputs of the function f. So we have:
Pr[b'' = b' | f ← R_{n→n}] = Pr[b'' = b' | f ← R_{n→n} ∧ Coll] · Pr[Coll] + Pr[b'' = b' | f ← R_{n→n} ∧ ¬Coll] · Pr[¬Coll]
                           ≤ Pr[Coll] + Pr[b'' = b' | f ← R_{n→n} ∧ ¬Coll]
                           ≤ Pr[Coll] + 1/2    (2)
The last inequality comes from the fact that if no collision occurs on the inputs of the function f, where f is a function chosen at random in R_{n→n}, then the outputs of this function are random values, uniformly distributed in {0,1}^n and independent of the previous values. Thus, the adversary cannot adapt the next message block according to the previous ciphertext blocks, and guessing at random is his only strategy for finding the bit b'.
We now have to evaluate Pr[Coll]. As before, we denote by Coll_k the event that the k-th value C_{b'}^i[k] collides with a previous input of the function f. We have: Pr[Coll_k] = Pr[∃ 0 ≤ ℓ < k s.t. C_{b'}^i[ℓ] = C_{b'}^i[k]], where C_{b'}^i[0] = R^i. Thus, we have:

Pr[Coll] ≤ Σ_{k=1}^{µ} Pr[Coll_k | ¬Coll_{k−1}]
For the sake of clarity, in the following we omit the bit b' and the index i representing the number of the query. We remark that C[ℓ] = C[k] if and only if M[ℓ] ⊕ f(C[ℓ−1]) = M[k] ⊕ f(C[k−1]). This last equation can be satisfied either by chance, or because the adversary chooses M[k] so that M[k] = M[ℓ] ⊕ f(C[ℓ−1]) ⊕ f(C[k−1]). However, since by assumption C[k−1] does not collide with any of the previous ciphertext blocks, f(C[k−1]) has never been computed and is thus a random value, uniformly distributed in {0,1}^n and independent of the previously computed values. Thus, the adversary cannot guess it in order to adapt M[k] accordingly, except with negligible probability. Finally, we can write that for all 1 ≤ k ≤ µ: Pr[∃ 0 ≤ ℓ < k s.t. C[ℓ] = C[k] | ¬Coll_{k−1}] ≤ 2(k−1)/2^n. Indeed, there are at most k−1 choices for the value ℓ, and two messages are queried. Thus, by summing over all values of k, we have:

Pr[Coll] ≤ µ²/2^{n−1}
Finally, by replacing all the probabilities involved in equations (1) and (2), we obtain:

Adv^{prf}_{F}(k, t, µ) ≥ Adv^{lorc−bcpa}_{CFB,A}(k, t, q, µ)/2 − µ²/2^{n−1}

and the theorem follows.
The Security of "One-Block-to-Many" Modes of Operation

Henri Gilbert
France Télécom R&D
[email protected]

Abstract. In this paper, we investigate the security, in the Luby-Rackoff security paradigm, of blockcipher modes of operation that allow a one-block input to be expanded into a longer t-block output under the control of a secret key K. Such "one-block-to-many" modes of operation are of frequent use in cryptology. They can be used for stream cipher encryption purposes, and for authentication and key distribution purposes in contexts such as mobile communications. We show that although the expansion functions resulting from modes of operation of blockciphers such as the counter mode or the output feedback mode are not pseudorandom, slight modifications of these two modes provide pseudorandom expansion functions. The main result of this paper is a detailed proof, in the Luby-Rackoff security model, that the expansion function used in the construction of the third generation mobile (UMTS) example authentication and key agreement algorithm MILENAGE is pseudorandom.
1   Introduction
In this paper, we investigate the security of modes of operation of blockciphers that allow the construction of a length-increasing function, i.e. the expansion of a 1-block input value x into a longer t-block output (z_1, z_2, …, z_t) (where t ≥ 2), under the control of a secret key K. Such length-increasing modes of operation of blockciphers, associated with a one-block-to-t-blocks expansion function, are of extremely frequent use in cryptology, mainly for pseudo-random generation purposes. They can be considered as a kind of dual of the length-decreasing modes of operation associated with a t-blocks-to-one-block compression function used for message authentication purposes (e.g. CBC-MAC). In both cases, the essential security requirement is that the resulting one-block-to-t-blocks (respectively t-blocks-to-one-block) function be pseudorandom, i.e. (informally speaking) indistinguishable, by any reasonable adversary, from a perfect random function with the same input and output sizes. Thus the Luby and Rackoff security paradigm [LR88], which relates the pseudo-randomness of a function resulting from a cryptographic construction to the pseudorandomness of the elementary function(s) encountered at the lower level of the same construction, represents a suitable tool for analysing the security of both kinds of modes of operation. However, the security and the efficiency of length-increasing modes of operation have so far been much less investigated than those of length-decreasing modes of operation such as CBC-MAC

T. Johansson (Ed.): FSE 2003, LNCS 2887, pp. 376–395, 2003.
© International Association for Cryptologic Research 2003
[BKR94,PR00], R-MAC [JJV02], etc., or than constructions of length-preserving functions or permutations such as the Feistel scheme [LR88,Pa91]. The practical significance of length-increasing modes of operation of blockciphers comes from the fact that they provide the two following kinds of pseudorandom generation functions, which both represent essential ingredients for applications such as mobile communications security.

Example 1: Stream cipher modes of operation of blockciphers. It has become usual for stream ciphers (whether or not they are derived from a mode of operation of a blockcipher) to require that the generated pseudo-random sequences used to encrypt data be not only dependent upon a secret key, but also upon an additional (non-secret) input value x, sometimes referred to as an initialization vector or an initial value (IV). This holds for most recently proposed stream ciphers, e.g. SEAL [RC98], SCREAM [HCCJ02], SNOW [EJ02], BMGL [HN00], and for the stream cipher mode of operation of the KASUMI blockcipher used in the third generation mobile system UMTS [Ka00]. As a consequence, stream ciphers are more conveniently modelled as a length-increasing pseudo-random function F_K : {0,1}^n → {0,1}^{nt}; x → F_K(x) = (z_1, z_2, …, z_t) than as a mere pseudo-random number generator deriving a pseudo-random sequence (z_1, z_2, …, z_t) of nt bits from a secret seed K. The advantage of modelling a stream cipher as a length-increasing function generator rather than as a number generator is that it reflects the security conditions on the dependence of the pseudo-random sequence on the input value, by requiring that F_K be a pseudo-random function, indistinguishable from a perfect random function with the same input and output sizes by any reasonable adversary.

Example 2: Combined authentication and key distribution. In mobile communication systems (GSM, UMTS, etc.)
and more generally in most secret key security architectures where authentication and encryption are provided, protected communications are initiated with a kind of “handshake” where authentication or mutual authentication between the user’s device and the network and session key(s) distribution are performed. Such an initial handshake is followed by a protected communication, where the session key(s) resulting from the handshake phase are used to encrypt and/or to authenticate the data exchanges. In order for the handshake protocol not to delay the actual protected communication phase, it is essential to restrict it to two passes and to minimize the amount of data exchanged. For that purpose one of the parties (typically the network in the case of mobile communications) sends a random challenge (accompanied by additional data such as a message authenticated counter value if mutual authentication is needed), and this random challenge serves as an input to a secret key function allowing to derive an authentication response and one or several session key(s). In recent mobile communication systems such as UMTS, the length of the outputs to be produced (measured in 128-bit blocks) far exceeds the 1-block length of the random challenge. Thus, one single operation of a blockcipher does not suffice to produce the various outputs needed. In order to base the security of the cryptologic computations performed during the handshake upon the security
of a trusted blockcipher, a suitable one-block-to-many mode of operation of the underlying blockcipher has to be defined. The security requirement is not only that each of the output blocks be unpredictable by an adversary. In addition, information on one subset of the outputs (say, for instance, an authentication response) should not help an adversary to derive any information about the rest of the outputs (say, for instance, the session key used to encrypt the subsequent exchanges). These various security requirements can again be reflected, as in the example of stream cipher modes of operation, by saying that the one-to-t-blocks function F_K : {0,1}^n → {0,1}^{nt}; x → F_K(x) = (z_1, z_2, …, z_t) used to derive the various output values must be indistinguishable from a perfect random function with the same input and output sizes. In this paper, we show that although the one-block-to-t-blocks functions associated with well-known modes of operation of blockciphers such as the Output Feedback mode (OFB) or the so-called counter mode are not pseudorandom, slightly modified modes of operation, in which the one-block input is first "prewhitened" before being subject to an expansion process, are pseudorandom in a formally provable manner. The main result of this paper is a detailed pseudorandomness proof, in the Luby and Rackoff security model, for the one-to-t-blocks mode of operation of a blockcipher used in the UMTS example authentication and key distribution algorithm MILENAGE [Mi00], which can be considered as a modified counter mode. We also provide pseudorandomness proofs for a modified version of the OFB mode.

Related work. The study of pseudorandomness properties of cryptographic constructions initiated by Luby and Rackoff's seminal paper [LR88] has represented a very active research area for the last decade.
In particular, Patarin clarified the link between the best advantage of a q-query distinguisher and the q-ary transition probabilities associated with f, and proved indistinguishability bounds for numerous r-round Feistel constructions [Pa91]; Maurer showed how to generalise indistinguishability results related to perfect random functions to indistinguishability results related to nearly perfect random functions [Ma92]; Bellare, Kilian and Rogaway [BKR94], and later on several other authors [PR00,JJV02,BR00], investigated the application of similar techniques to various message authentication modes of operation; Vaudenay embedded techniques for deriving indistinguishability bounds into a broader framework named decorrelation theory [Va98,Va99]. In this paper, we apply general indistinguishability proof techniques due to Patarin [Pa91] in an essential manner. Our approach to expansion function constructions based on blockcipher modes of operation has some connections, but also significant differences, with the following recently proposed blockcipher-based expansion function constructions: – in [DHY02], Desai, Hevia and Yin provide security proofs, in the Luby-Rackoff paradigm, for the ANSI X9.17 pseudo-random sequence generation mode of operation of a blockcipher, and for an improved version of this mode which is essentially the same as the modified OFB mode considered in this paper. However, the security model considered in [DHY02] is quite distinct (and somewhat
complementary): we consider the pseudorandomness properties of the one to t blocks expansion function resulting from the considered mode of operation, whereas [DHY02] models a PRG mode of operation as the iteration a “smaller” keyed state transition and keystream output function, and consider the pseudorandomness properties of such state transition functions. – in [HN00], Hastad and N¨ aslund propose a pseudorandom numbers generator named BMGL. BGML is based on a “key feedback” mode of operation of a blockcipher. The security paradigm underlying BMGL (namely the indistinguishability of pseudorandom numbers sequences from truly random sequences, based upon a combination of the Blum-Micali PRG construction [BM84] and a variant of the Goldreich Levin hard core bits construction [GL89], in which the conjectured onewayness of the key dependance of the blockcipher is used to construct PR sequences of numbers) is quite different from the one considered here (namely the indistinguishability of the constructed expansion function from a perfect random function, assuming that the underlying blockcipher is indistinguishable from a perfect random one-block permutation). The advantage of the BGML approach it that it relies upon less demanding security assumptions for the underlying blockcipher than in our approach, but the disadvantage is that it leads to less efficient constructions in terms of the number of blockcipher invocations per output block. – in [BDJR97], Bellare, Desai, Jokipii and Rogaway provide security proofs for stream cipher modes of operation, namely the XOR scheme and a stateful variant named CTR schemes. These two modes have some connections with the insecure one block to t blocks mode of operation referred to as the counter mode in this paper. 
However, a major difference is that in the XOR and CTR schemes, an adversary has no control at all over the inputs to the underlying blockcipher f (she can only control the plaintext), whereas in all the one-to-many-blocks modes we consider in this paper, an adversary can control the one-block input value. Thus, there is no contradiction between the facts that the XOR and CTR encryption schemes are shown to be secure in [BDJR97] and that the counter mode of operation can easily be shown to be totally insecure. This paper is organized as follows: Section 2 introduces basic definitions and results on random functions and security proof techniques in the Luby-Rackoff security model. Section 3 describes various "one-block-to-many" modes of operation of blockciphers, and introduces the modified variant of the counter mode used in MILENAGE and an improved variant of the OFB mode. Sections 4 and 5 present pseudorandomness proofs for the two latter modes.
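To make the contrast concrete, here is a toy sketch of a plain counter mode next to a "prewhitened" variant in the spirit described above. This is not the actual MILENAGE schedule; HMAC-SHA256 truncated to 64 bits is our assumed stand-in for the blockcipher:

```python
import hmac, hashlib

N = 8  # toy block size of n = 64 bits, for illustration only

def f(key: bytes, block: bytes) -> bytes:
    # Assumed stand-in for the keyed blockcipher.
    return hmac.new(key, block, hashlib.sha256).digest()[:N]

def counter_mode(key: bytes, x: int, t: int) -> list[bytes]:
    # Plain counter mode: z_i = f(K, x + i). Not pseudorandom: the outputs
    # for the related inputs x and x+1 overlap in t-1 blocks.
    return [f(key, (x + i).to_bytes(N, "big")) for i in range(t)]

def prewhitened_counter(key: bytes, x: int, t: int) -> list[bytes]:
    # Prewhitened variant: y = f(K, x) is secret, then z_i = f(K, y xor i).
    # The hidden prewhitening destroys the exploitable input relation.
    y = int.from_bytes(f(key, x.to_bytes(N, "big")), "big")
    return [f(key, (y ^ i).to_bytes(N, "big")) for i in range(t)]

key = b"secret-key"
a, b = counter_mode(key, 5, 4), counter_mode(key, 6, 4)
assert a[1:] == b[:-1]          # a trivial 2-query distinguisher succeeds

c, d = prewhitened_counter(key, 5, 4), prewhitened_counter(key, 6, 4)
assert not set(c) & set(d)      # no visible overlap (w.h.p.)
```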
2   Security Framework

2.1   The Luby-Rackoff Security Paradigm
A key dependent cryptographic function such as a blockcipher or a mode of operation of a blockcipher can be viewed as a random function associated with a randomly selected key value. It is generally defined using a recursive construction
process. Each step of the recursion consists of deriving a random function (or permutation) F from r previously defined random functions (or permutations) f_1, …, f_r, and can be represented by a relation of the form F = Φ(f_1, …, f_r). One of the strongest security requirements one can put on such a random function or permutation F is that F be impossible to distinguish, with a non-negligible success probability, from a perfect random function or permutation F* uniformly drawn from the set of all functions (or permutations) with the same input and output sizes, even if a probabilistic testing algorithm A of unlimited power is used for that purpose and if the number q of adaptively chosen queries of A to the random instance of F or F* to be tested is large. It is generally not possible to prove indistinguishability properties for "real life" cryptologic random functions and large numbers of queries, because this would require a far too long key length. However, it is often possible to prove or disprove the following: if a random function F encountered at a given level of a cryptologic function construction is related to random functions encountered at the lower recursion level by a relation of the form F = Φ(f_1, …, f_r), and if we replace the actual random functions f_1 to f_r of the cipher by independent perfect random functions or permutations f_1* to f_r* (or, in a more sophisticated version of the same approach, by f_1 to f_r functions which are sufficiently indistinguishable from f_1* to f_r*), then the resulting modified random function is indistinguishable from a perfect random function (or permutation). This provides a useful method for assessing the soundness of blockcipher constructions.
For instance, in the case of a three-round Feistel construction, a well-known theorem first proved by Luby and Rackoff [LR88] provides upper bounds on the advantage |p − p*| of any testing algorithm A in distinguishing the 2n-bit random permutation F = Ψ(f_1*, f_2*, f_3*), deduced from three independent perfect random functions f_1*, f_2* and f_3*, from a perfect random 2n-bit permutation F*, with q adaptively chosen queries to the tested instance of F or F*. This advantage is less than q²/2^n. Another example is the F = Φ_CBCMAC(f) CBC-MAC construction, allowing a tn-bit to n-bit message authentication function to be derived from chained invocations of an n-bit to n-bit function f. It was shown by Bellare, Kilian and Rogaway in [BKR94] that if q²t² ≤ 2^{n+1}, then the advantage of any testing algorithm A in distinguishing the random function F = Φ_CBCMAC(f*), derived from a perfect n-bit to n-bit random function f*, using q adaptively chosen queries, is less than 3q²t²/2^{n+1}. In this paper, we will consider constructions of the form F = Φ(f), allowing an n-bit to nt-bit function to be derived from several invocations of the same instance of an n-bit permutation f, representing a blockcipher of blocksize n. We will show that for suitable modes of operation Φ, the random function F = Φ(f*) derived from a perfect n-bit random permutation is indistinguishable from a perfect n-bit to nt-bit random function F*.
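The three-round Feistel permutation Ψ(f_1, f_2, f_3) mentioned above can be sketched as follows. The round functions are instantiated with HMAC-SHA256 purely for illustration (the Luby-Rackoff theorem assumes independent perfect random functions); the point is that three rounds turn arbitrary n-bit round functions into a 2n-bit permutation:

```python
import hmac, hashlib

N = 16  # half-block size n in bytes; Psi permutes 2n-bit blocks

def round_fn(key: bytes):
    # Illustrative n-bit to n-bit round function (assumed PRF stand-in).
    def f(x: bytes) -> bytes:
        return hmac.new(key, x, hashlib.sha256).digest()[:N]
    return f

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def feistel3(f1, f2, f3, block: bytes) -> bytes:
    # Psi(f1, f2, f3): one Feistel round maps (L, R) to (R, L xor f(R)).
    l, r = block[:N], block[N:]
    for f in (f1, f2, f3):
        l, r = r, xor(l, f(r))
    return l + r

def feistel3_inv(f1, f2, f3, block: bytes) -> bytes:
    # Inverse rounds in reverse order: Psi is a permutation for ANY f1, f2, f3.
    l, r = block[:N], block[N:]
    for f in (f3, f2, f1):
        l, r = xor(r, f(l)), l
    return l + r
```

The invertibility for arbitrary round functions is what makes the Feistel scheme the standard way to build a (pseudorandom) permutation out of (pseudorandom) functions.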
2.2   Random Functions
Throughout the rest of this paper we use the following notation:
– I_n denotes the set {0,1}^n.
– F_{n,m} denotes the set I_m^{I_n} of functions from I_n into I_m. Thus |F_{n,m}| = 2^{m·2^n}.
– P_n denotes the set of permutations on I_n. Thus |P_n| = (2^n)!.

A random function of F_{n,m} is defined as a random variable F of F_{n,m}, and can be viewed as a probability distribution (Pr[F = ϕ])_{ϕ∈F_{n,m}} over F_{n,m}, or equivalently as a family (F_ω)_{ω∈Ω} of F_{n,m} elements. In particular:
– An n-bit to m-bit key-dependent cryptographic function is determined by a randomly selected key value K ∈ K, and can thus be represented by the random function F = (f_K)_{K∈K} of F_{n,m}.
– A cryptographic construction of the form F = Φ(f_1, f_2, …, f_r) can be viewed as a random function of F_{n,m} determined by r random functions f_i ∈ F_{n_i,m_i}, i = 1 … r.

Definition 1. We define a perfect random function F* of F_{n,m} as a uniformly drawn element of F_{n,m}. In other words, F* is associated with the uniform probability distribution over F_{n,m}. We define a perfect random permutation f* on I_n as a uniformly drawn element of P_n. In other words, f* is associated with the uniform probability distribution over P_n.

Definition 2 (q-ary transition probabilities associated with F). Given a random function F of F_{n,m}, we define the transition probability Pr[x →_F y] associated with a q-tuple x of I_n inputs and a q-tuple y of I_m outputs as:

Pr[x →_F y] = Pr[F(x^1) = y^1 ∧ F(x^2) = y^2 ∧ … ∧ F(x^q) = y^q]
            = Pr_{ω∈Ω}[F_ω(x^1) = y^1 ∧ F_ω(x^2) = y^2 ∧ … ∧ F_ω(x^q) = y^q]

In the sequel we will use the following simple properties:

Property 1. Let f* be a perfect random permutation on I_n. If x = (x^1, …, x^q) is a q-tuple of pairwise distinct I_n values and y = (y^1, …, y^q) is a q-tuple of pairwise distinct I_n values, then Pr[x →_{f*} y] = (|I_n| − q)!/|I_n|! = (2^n − q)!/(2^n)!.

Property 2. Let f* be a perfect random permutation on I_n. If x and x' are two distinct elements of I_n and δ is any fixed value of I_n, then Pr[f*(x) ⊕ f*(x') = δ] ≤ 2/2^n.
Proof: Pr[f*(x) ⊕ f*(x') = 0] = 0, since x ≠ x'. If δ ≠ 0, then Pr[f*(x) ⊕ f*(x') = δ] = 2^n · (2^n − 2)!/(2^n)! = 1/(2^n − 1) ≤ 2/2^n. So, Pr[f*(x) ⊕ f*(x') = δ] ≤ 2/2^n.

2.3   Distinguishing Two Random Functions
In proofs of security such as the one presented in this paper, we want to upper bound the probability that any algorithm manages to distinguish whether a given fixed ϕ
382
Henri Gilbert
function is an instance of a random function F = Φ(f_1*, f_2*, ···, f_r*) of F_{n,m} or an instance of the perfect random function F*, using fewer than q queries to ϕ.
Let A be any distinguishing algorithm of unlimited power that, when given a function ϕ of F_{n,m} (which can be modelled as an "oracle tape" in the probabilistic Turing machine associated with A), selects a fixed number q of distinct chosen or adaptively chosen input values x^i (the queries), obtains the q corresponding output values y^i = ϕ(x^i), and based on these results outputs 0 or 1. Denote by p (resp. by p*) the probability that A answers 1 when applied to a random instance of F (resp. of F*). We want to find upper bounds on the advantage Adv_A(F, F*) = |p − p*| of A in distinguishing F from F* with q queries.
As first noticed by Patarin [Pa91], the best advantage Adv_A(F, F*) of any distinguishing algorithm A in distinguishing F from F* is entirely determined by the q-ary transition probabilities Pr[x →_F y] associated with each q-tuple x = (x^1, ···, x^q) of pairwise distinct I_n values and each q-tuple y = (y^1, ···, y^q) of I_m values. The following theorem, which was first proved in [Pa91] and an equivalent version of which is stated in [Va99], is a very useful tool for deriving upper bounds on Adv_A(F, F*) based on properties of the Pr[x →_F y] q-ary transition probabilities.
Theorem 1. Let F be a random function of F_{n,m} and F* be a perfect random function representing a uniformly drawn random element of F_{n,m}. Let q be an integer. Denote by X the subset of I_n^q containing all the q-tuples x = (x^1, ···, x^q) of pairwise distinct elements. If there exists a subset Y of I_m^q and two positive real numbers ε_1 and ε_2 such that
(i) |Y| ≥ (1 − ε_1) · |I_m|^q
(ii) ∀x ∈ X, ∀y ∈ Y: Pr[x →_F y] ≥ (1 − ε_2) · 1/|I_m|^q
then for any distinguishing algorithm A using q queries, Adv_A(F, F*) ≤ ε_1 + ε_2.
In order to improve the self-readability of this paper, a short proof of Theorem 1, whose structure is close to that of the proof given in [Pa91], is provided in an appendix at the end of this paper.
3
Description of Length Increasing Modes of Operation of Blockciphers
We now describe a few natural length increasing modes of operation of a blockcipher. Let us denote the blocksize (in bits) by n, and let us denote by t a fixed integer such that t ≥ 2. The purpose of one block to t blocks modes of operation is to derive an n-bit to tn-bit random function F from an n-bit to n-bit random function f (representing a blockcipher associated with a random key value K), in such a way that F is indistinguishable from a perfect n-bit to tn-bit random function if f is indistinguishable from a perfect random permutation f*. We
The Security of “One-Block-to-Many” Modes of Operation
383
show that the functions associated with the well-known OFB mode and with the so-called counter mode of operation are not pseudorandom, and we introduce enhanced modes of operation, in particular the variant of the counter mode encountered in MILENAGE, the example UMTS authentication and key distribution algorithm.

3.1
The Expansion Functions Associated with the Counter and OFB Modes of Operation Are Not Pseudorandom
Definition 3. Given any t fixed distinct one-block values c_1, ···, c_t ∈ {0,1}^n and any random permutation f over {0,1}^n, the one block to t blocks function F_CNT associated with the counter mode of operation of f is defined as follows:
F_CNT(f): {0,1}^n → {0,1}^{nt}
x → (z_1, ···, z_t) = (f(x ⊕ c_1), ···, f(x ⊕ c_t))
Given any random permutation f over {0,1}^n, the one block to t blocks function F_OFB associated with the output feedback mode of operation of f is defined as follows:
F_OFB(f): {0,1}^n → {0,1}^{nt}
x → (z_1, ···, z_t)
where the z_i are recursively given by z_1 = f(x); z_2 = f(z_1); ···; z_t = f(z_{t−1})
Fig. 1. The counter and OFB modes of operation.
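For concreteness, the two expansion functions of Definition 3, together with the two-query distinguishers discussed in the text below, can be sketched as follows. The block size n, the constants, and the fixed shuffled table standing in for the keyed permutation f are all illustrative:

```python
import random

n, t = 16, 4
N = 1 << n
C = [1, 2, 3, 4]                 # t fixed pairwise distinct constants

rng = random.Random(2003)
table = list(range(N))
rng.shuffle(table)               # a fixed permutation standing in for f
f = table.__getitem__

def F_CNT(x):
    """Counter mode: z_k = f(x XOR c_k), k = 1..t."""
    return [f(x ^ c) for c in C]

def F_OFB(x):
    """Output feedback mode: z_1 = f(x), z_k = f(z_{k-1})."""
    out, z = [], x
    for _ in range(t):
        z = f(z)
        out.append(z)
    return out

# Two-query distinguisher against F_CNT: for x' = x XOR c_i XOR c_j,
# the i-th and j-th output blocks are swapped.
x, i, j = 0x1234, 0, 1
z, zp = F_CNT(x), F_CNT(x ^ C[i] ^ C[j])
assert zp[i] == z[j] and zp[j] == z[i]

# Two-query distinguisher against F_OFB: for x' = z_1, the output
# sequence is the previous one shifted by one block.
z, zp = F_OFB(x), F_OFB(F_OFB(x)[0])
assert zp[:t - 1] == z[1:]
```

A perfect random function would exhibit neither relation between the two answers, which is exactly the distinguishing property exploited in the text.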
It is straightforward that F_CNT and F_OFB are not pseudorandom. As a matter of fact, let us consider the case where F_CNT and F_OFB are derived
from a perfect random permutation f*. Let x denote any arbitrary value of {0,1}^n, and (z_1, ···, z_t) denote the F_CNT(x) value. For any fixed pair (i,j) of distinct elements of {1,2,...,t}, let us denote by (z′_1, ···, z′_t) the F_CNT output value corresponding to the modified input value x′ = x ⊕ c_i ⊕ c_j. The obvious property that z′_i = z_j and z′_j = z_i provides a distinguisher of F_CNT from a perfect one block to t blocks random function F* which requires only two oracle queries.
Similarly, to prove that F_OFB is not pseudorandom, let us denote by x any arbitrary value of {0,1}^n and by (z_1, ···, z_t) the F_OFB(x) value. With overwhelming probability f*(x) ≠ x, so that z_1 ≠ x. Let us denote by x′ the modified input value given by x′ = z_1, and by (z′_1, ···, z′_t) the corresponding F_OFB output value. It directly follows from the definition of F_OFB that for i = 1, ···, t−1, z′_i = z_{i+1}. This provides a distinguisher of F_OFB from a perfect one block to t blocks random function F* which requires only two oracle queries.
The above distinguishers indeed represent serious weaknesses in operational contexts where the input value of F_CNT or F_OFB can be controlled by an adversary. For instance, if F_CNT or F_OFB is used for authentication and key distribution purposes, these distinguishers result in a lack of cryptographic separation between the output values z_i: for certain pairs (i,j) of distinct values of {1, ···, t}, an adversary knows how to modify the input x to the data expansion function in order for the i-th output corresponding to the modified input value x′ (which may for instance represent a publicly available authentication response) to provide her with the j-th output corresponding to the input value x (which may for instance represent an encryption key).

3.2
Modified Counter Mode: The MILENAGE Construction
Figure 2 represents the example UMTS authentication and key distribution algorithm MILENAGE [Mi00]. Its overall structure consists of 6 invocations of a 128-bit blockcipher E_K (e.g. AES) associated with a 128-bit subscriber key K. In Figure 2, c0 to c4 represent constant 128-bit values, and r0 to r5 represent rotation amounts (between 0 and 127) of left circular shifts applied to intermediate 128-bit words. OP_C represents a 128-bit auxiliary (operator customisation) key. MILENAGE allows the derivation of four output blocks z1 to z4 (which respectively provide an authentication response, an encryption key, a message authentication key, and a one-time key used for masking plaintext data contained in the authentication exchange) from an input block x representing a random authentication challenge. It also allows the derivation of a message authentication tag z0 from the challenge x and a 64-bit input word y (which contains an authentication sequence number and some additional authentication management data) using a close variant of the CBC MAC mode of E_K. The security of the MAC function providing z0 and the independence between z0 and the other output values are outside the scope of this paper; some analysis of these features can be found in the MILENAGE design and evaluation report [Mi00]. Let us also ignore the involvement of the OP_C constant, and let us focus on the structure of the one block to t blocks construction allowing the derivation of the output blocks z1 to z4 from
Fig. 2. Milenage.
the input block x. This construction consists of a prewhitening computation, using E_K, of an intermediate block y, followed by the application to y of a slight variant (involving some circular rotations) of the counter mode construction. More formally, given any random permutation f over {0,1}^n, the one block to t blocks function F_MIL(f) associated with the MILENAGE construction is defined as follows (cf. Figure 3):
F_MIL(f): {0,1}^n → {0,1}^{nt}
x → (z_1, ···, z_t)
where z_k = f(rot(f(x), r_k) ⊕ c_k) for k = 1 to t.
A detailed statement and proof of the pseudorandomness of the MILENAGE construction are given in Theorem 2 in the next section. Theorem 2 confirms, with slightly tighter indistinguishability bounds, the claim concerning the pseudorandomness of this construction stated (without the underlying proof) in the MILENAGE design and evaluation report [Mi00].

3.3
Modified OFB Construction
Figure 4 represents a one block to t blocks mode of operation of an n-bit permutation f whose structure consists of a prewhitening computation of f providing an intermediate value y, followed by an OFB expansion of y. More formally, the F_MOFB(f) expansion function associated with the modified OFB construction of Figure 4 is defined as follows:
F_MOFB(f): {0,1}^n → {0,1}^{nt}
x → (z1 , · · · , zt )
Fig. 3. The MILENAGE modified counter mode construction.
where z_1 = f(f(x)) and z_k = f(f(x) ⊕ z_{k−1}) for k = 2 to t.
A short proof of the pseudorandomness of this modified OFB construction is given in Section 5 hereafter. It is worth noticing that the construction of the above modified OFB mode of operation is identical to that of the ANSI X9.17 PRG mode of operation introduced by Desai et al. in [DHY02], so that the pseudorandomness proof (related to the associated expansion function) provided in Section 5 is to some extent complementary to the pseudorandomness proof (related to the associated state transition function) established in [DHY02]. The modified OFB mode of operation is also similar to the keystream generation mode of operation of the KASUMI blockcipher used in the UMTS encryption function f8 [Ka00], up to the fact that in the f8 mode two additional precautions are taken: the key used in the prewhitening computation differs from the one used in the rest of the computations, and, in order to prevent collisions between two output blocks from resulting in short cycles in the produced keystream sequence, a mixture of the OFB and counter techniques is applied.
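A side-by-side sketch of the two whitened expansions (the MILENAGE modified counter mode of Figure 3 and the modified OFB mode of Figure 4). The toy parameters, constants, rotation amounts and the stand-in permutation below are illustrative only, not the MILENAGE ones:

```python
import random

n, t = 16, 4
N = 1 << n
C = [1, 2, 3, 4]                 # illustrative constants c_1..c_t
R = [0, 1, 2, 3]                 # illustrative rotation amounts r_1..r_t

rng = random.Random(42)
table = list(range(N))
rng.shuffle(table)               # a fixed permutation standing in for f
f = table.__getitem__

def rot(w, r):
    """Left circular rotation of the n-bit word w by r bits."""
    r %= n
    return ((w << r) | (w >> (n - r))) & (N - 1)

def F_MIL(x):
    """Modified counter mode: z_k = f(rot(f(x), r_k) XOR c_k)."""
    y = f(x)                     # prewhitening
    return [f(rot(y, R[k]) ^ C[k]) for k in range(t)]

def F_MOFB(x):
    """Modified OFB: z_1 = f(f(x)); z_k = f(f(x) XOR z_{k-1})."""
    y = f(x)                     # prewhitening
    out = [f(y)]
    for _ in range(t - 1):
        out.append(f(y ^ out[-1]))
    return out

# Since f and the maps rot(., r_k) XOR c_k are all bijective, two distinct
# inputs to F_MIL are guaranteed to differ in every output block position:
assert all(a != b for a, b in zip(F_MIL(0), F_MIL(1)))
assert len(F_MIL(7)) == len(F_MOFB(7)) == t
```

Note that the counter-mode swap trick no longer applies: the adversary can no longer steer the inputs of the second f layer, because they now depend on the unknown prewhitened value f(x).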
4
Analysis of the Modified Counter Mode Used in MILENAGE
In this section we prove that if some conditions on the constants c_k, k ∈ {1···t} and r_k, k ∈ {1···t} encountered in the MILENAGE construction of Section 3 are satisfied, then the one block to t blocks expansion function F_MIL(f*) resulting from applying this construction to the perfect random one-block permutation f*
Fig. 4. The modified OFB mode of operation.
is indistinguishable from a perfect random function of F_{n,tn}, even if the product of t and the number of queries q is large. In order to formulate conditions on the constants c_k and r_k, we need to introduce some notation:
– the left circular rotation of an n-bit word w by r bits is denoted by rot(w, r). Rotation amounts (parameter r) are implicitly taken modulo n.
– for any GF(2)-linear function L : {0,1}^n → {0,1}^n, Ker(L) and Im(L) respectively denote the kernel and image vector spaces of L.
With the above notation, these conditions can be expressed as follows:
∀k, l ∈ {1···t}: k ≠ l ⇒ (c_k ⊕ c_l) ∉ Im(L), where L = rot(·, r_k) ⊕ rot(·, r_l)
(C)
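Condition (C) can be checked mechanically for small block sizes by enumerating the image of the linear map L = rot(·, r_k) ⊕ rot(·, r_l) for every pair (k, l). A sketch (the block size n and the constants below are illustrative, not the MILENAGE parameters):

```python
def rot(w, r, n):
    """Left circular rotation of the n-bit word w by r bits."""
    r %= n
    mask = (1 << n) - 1
    return ((w << r) | (w >> (n - r))) & mask

def image_of_L(rk, rl, n):
    """Image of the GF(2)-linear map L(w) = rot(w, rk) XOR rot(w, rl)."""
    return {rot(w, rk, n) ^ rot(w, rl, n) for w in range(1 << n)}

def condition_C(cs, rs, n):
    """True iff for all k != l, c_k XOR c_l lies outside Im(L)."""
    t = len(cs)
    for k in range(t):
        for l in range(t):
            if k != l and cs[k] ^ cs[l] in image_of_L(rs[k], rs[l], n):
                return False
    return True

# Illustrative check with n = 8 and all rotations zero: L is then the
# zero map, Im(L) = {0}, and (C) reduces to the constants being
# pairwise distinct, as noted in the text.
print(condition_C([0, 1, 2, 4], [0, 0, 0, 0], 8))   # True
print(condition_C([0, 1, 1, 4], [0, 0, 0, 0], 8))   # False: c_2 = c_3
```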
The purpose of the above condition is to ensure that for any y ∈ {0,1}^n and any two distinct integers k and l in {1···t}, the values rot(y, r_k) ⊕ c_k and rot(y, r_l) ⊕ c_l are distinct. If t is less than 2^n, it is easy to find constants c_k and r_k satisfying condition (C) above. In particular, if one takes all r_k equal to zero, condition (C) boils down to requiring that the c_k constants be pairwise distinct.
Theorem 2. Let n be a fixed integer. Denote by f* a perfect random permutation of I_n. Let F = F_MIL(f*) denote the random function of F_{n,tn} obtained by applying the MILENAGE construction of Figure 3 to f*, and F* denote a perfect random function of F_{n,tn}. If the constants c_k and r_k (k = 1···t) of the construction satisfy condition (C) above, then for any distinguishing algorithm
A using any fixed number q of queries such that t²q²/2^n ≤ 1/6, we have
Adv_A(F, F*) ≤ t²q²/2^{n+1}
Proof. Let us denote by X the set of q-tuples x = (x^1, ···, x^q) of pairwise distinct I_n values, and by Z the set of q-tuples z = (z^1 = (z_1^1, ···, z_t^1), z^2 = (z_1^2, ···, z_t^2), ···, z^q = (z_1^q, ···, z_t^q)) of I_nt values such that the tq values z_1^1, ···, z_t^1, ···, z_1^q, ···, z_t^q are pairwise distinct. We want to show that there exist positive real numbers ε_1 and ε_2 such that:
(i) |Z| ≥ (1 − ε_1)|I_nt|^q
(ii) ∀x ∈ X, ∀z ∈ Z: Pr[x →_F z] ≥ (1 − ε_2) · 1/|I_nt|^q
so that Theorem 1 can be applied. We have
|Z| / |I_nt|^q = 2^n · (2^n − 1) ··· (2^n − tq + 1) / 2^{ntq}
= 1 · (1 − 1/2^n) ··· (1 − (qt − 1)/2^n)
≥ 1 − (1 + 2 + ··· + (qt − 1)) / 2^n
= 1 − qt(qt − 1)/2^{n+1}
Since qt(qt − 1)/2^{n+1} ≤ q²t²/2^{n+1}, we have |Z| ≥ (1 − ε_1)|I_nt|^q,
with ε_1 = q²t²/2^{n+1}.
Let us now show that for any fixed q-tuple of I_n values x ∈ X and any q-tuple of I_nt values z ∈ Z, we have Pr[x →_F z] ≥ 1/2^{ntq}. For that purpose, let us consider from now on any two fixed q-tuples x ∈ X and z ∈ Z. Let us denote by Y the set of q-tuples y = (y^1, ..., y^q) of pairwise distinct I_n values. We can partition all the possible computations x →_F z according to the intermediate value y = (f*(x^1), ···, f*(x^q)) in the F computation:
Pr[x →_F z] = Σ_{y∈Y} Pr[x →_{f*} y ∧ ∀i ∈ {1..q} ∀k ∈ {1..t}: (rot(y^i, r_k) ⊕ c_k) →_{f*} z_k^i]
Let us denote by Y′ the subset of Y of those values y satisfying the three following additional conditions, which respectively express the requirement that all the f* input values encountered in the q F computations be pairwise distinct (first and second condition), and that all the f* output values encountered in the same computations be also pairwise distinct (third condition):
(I) ∀i ∈ {1..q} ∀j ∈ {1..q} ∀k ∈ {1..t}: x^i ≠ rot(y^j, r_k) ⊕ c_k
(II) ∀i, j ∈ {1..q} ∀k, l ∈ {1..t}: (i,k) ≠ (j,l) ⇒ rot(y^i, r_k) ⊕ c_k ≠ rot(y^j, r_l) ⊕ c_l
(III) ∀i ∈ {1..q} ∀j ∈ {1..q} ∀k ∈ {1..t}: y^i ≠ z_k^j
We have
Pr[x →_F z] ≥ Σ_{y∈Y′} Pr[x →_{f*} y ∧ ∀i ∈ {1..q} ∀k ∈ {1..t}: (rot(y^i, r_k) ⊕ c_k) →_{f*} z_k^i]
However, if y ∈ Y′, Property 1 of Section 2 can be applied to the (t+1)q pairwise distinct f* input values x^i, i ∈ {1..q} and rot(y^i, r_k) ⊕ c_k, i ∈ {1..q}, k ∈ {1..t}, and to the (t+1)q pairwise distinct output values y^i, i ∈ {1..q} and z_k^i, i ∈ {1..q}, k ∈ {1..t}, so that
Pr[x →_{f*} y ∧ ∀i ∈ {1..q} ∀k ∈ {1..t}: (rot(y^i, r_k) ⊕ c_k) →_{f*} z_k^i] = (|I_n| − (t+1)q)! / |I_n|! = (2^n − (t+1)q)! / (2^n)!
Therefore,
Pr[x →_F z] ≥ |Y′| · (2^n − (t+1)q)! / (2^n)!   (1)
A lower bound on |Y′| can be established, based on the fact that
|Y| = (2^n)! / (2^n − q)!   (2)
and on the following properties: – The fraction of y vectors of Y such that condition (I) is not satisfied is less 2 than q2nt since for any fixed i ∈ {1..q}, j ∈ {1..q} and k ∈ {1..t} the number of | y ∈ Y q-tuples such that xi = rot(y j , rk ) ⊕ ck is (2n − 1) · · · (2n − q + 1) = |Y 2n and the set of the y vectors of Y such that condition (I) is not satisfied is the union set of these q 2 t sets. – The fraction of y vectors of Y such that condition (III) is not satisfied is less 2 than q2nt , by a similar argument. – The fraction of y vectors of Y such such that condition (II) is not satisfied · t(t−1) · 2n1−1 . As a matter of fact, given any is upper bounded by q(q−1) 2 2 two distinct pairs (i, k) = (j, l) of {1 · · · q} × {1 · · · t}, we can upper bound the number of y vectors of Y such that rot(y i , rk ) ⊕ ck = rot(y j , rl ) ⊕ cl by distinguishing the three following cases: case 1: i = j and k = l. Since condition (C) on the constants involved in F is satisfied, there exists no y vector of Y such that rot(y i , rk ) ⊕ ck = rot(y i , rl ) ⊕ cl . So case 1 does never occur. case 2: i = j and k = l. For any y vector of Y , y i = y j . But the rot(·, rk )⊕ck GF(2)-affine mapping of In is one to one. Thus, rot(y i , rk )⊕ ck = rot(y j , rk ) ⊕ ck . In other words, case 2 does never occur. case 3: i = j and k = l The number of Y q-tuples such that rot(y i , rk ) ⊕ | ck = rot(y j , rl )⊕cl is 2n ·(2n −2)·(2n −2)·(2n −3) · · · (2n −q +1) = 2|Y n −1 . Consequently, the set of y vectors of Y such such that condition (II) is not | satisfied is the union set of the q(q−1) · t(t−1) sets of cardinal 2|Y n −1 considered 2 2 in case 3, so that the fraction of y vectors of Y such such that condition (II) is not satisfied is upper bounded by q(q−1) · t(t−1) · 2n1−1 , as claimed before. 2 2
As a consequence of the above properties, the overall fraction of the Y vectors which do not belong to Y′ is less than 2q²t/2^n + (q(q−1)/2) · (t(t−1)/2) · 1/(2^n − 1), i.e.
|Y′| ≥ (1 − (2q²t/2^n + (q(q−1)/2) · (t(t−1)/2) · 1/(2^n − 1))) · |Y|   (3)
Now (1), (2) and (3) result in the following inequality:
Pr[x →_F z] ≥ (1 − (2q²t/2^n + (q(q−1)/2) · (t(t−1)/2) · 1/(2^n − 1))) · (2^n − (t+1)q)! / (2^n − q)!
The term (2^n − (t+1)q)! / (2^n − q)! of the above expression can be lower bounded as follows:
(2^n − (t+1)q)! / (2^n − q)! = 1 / ((2^n − q)(2^n − q − 1) ··· (2^n − ((t+1)q − 1)))
= (1/2^{ntq}) · 1 / ((1 − q/2^n) · (1 − (q+1)/2^n) ··· (1 − ((t+1)q − 1)/2^n))
≥ (1/2^{ntq}) · (1 + q/2^n) · (1 + (q+1)/2^n) ··· (1 + ((t+1)q − 1)/2^n)   (due to the fact that if u < 1, 1/(1−u) ≥ 1 + u)
≥ (1/2^{ntq}) · (1 + q/2^n + (q+1)/2^n + ··· + ((t+1)q − 1)/2^n)
= (1/2^{ntq}) · (1 + tq((t+2)q − 1)/2^n)
Thus we have
Pr[x →_F z] ≥ (1/2^{ntq}) · (1 + tq((t+2)q − 1)/2^n) · (1 − (2q²t/2^n + (q(q−1)/2) · (t(t−1)/2) · 1/(2^n − 1)))
= (1/2^{ntq}) · (1 + ε)(1 − ε′)
where ε := tq((t+2)q − 1)/2^n and ε′ := 2q²t/2^n + (q(q−1)/2) · (t(t−1)/2) · 1/(2^n − 1).
Let us show that ε ≥ (4/3)ε′. Due to the inequality 1/(2^n − 1) ≤ 2/2^n, we have
ε′ ≤ qt(qt + 3q − t + 1)/2^{n+1}
On the other hand, ε can be rewritten as
ε = qt(2qt + 4q − 2)/2^{n+1}
Therefore
ε − (4/3)ε′ ≥ (qt/2^{n+1}) · ((2/3)qt + (4/3)t − 10/3) ≥ 0, since t ≥ 2 and q ≥ 1 imply (2/3)qt + (4/3)t − 10/3 ≥ 0.
Moreover, it is easy to see (by going back to the definition of ε and using the fact that t ≥ 2) that ε ≤ 2t²q²/2^n, so that the condition t²q²/2^n ≤ 1/6 implies ε ≤ 1/3. The relations ε ≥ (4/3)ε′ and ε ≤ 1/3 imply (1 + ε)(1 − ε′) ≥ 1. As a matter of fact,
(1 + ε)(1 − ε′) = 1 + ε − ε′ − εε′ ≥ 1 + ε − ε′ − ε′/3 = 1 + ε − (4/3)ε′ ≥ 1
Thus we have shown that Pr[x →_F z] ≥ 1/2^{ntq}. We can now apply Theorem 1 with ε_1 = q²t²/2^{n+1} and ε_2 = 0, so that we obtain the upper bound
Adv_A(F, F*) ≤ q²t²/2^{n+1}    QED
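The first step of the proof of Theorem 2 (lower bounding the exact ratio |Z|/|I_nt|^q by 1 − q²t²/2^{n+1}) can be sanity-checked with exact rational arithmetic; the (n, q, t) triples below are purely illustrative:

```python
from fractions import Fraction
from math import prod

def Z_ratio(n, q, t):
    """Exact |Z| / |I_nt|^q = prod_{i=0}^{tq-1} (2^n - i) / 2^{ntq}."""
    N = 1 << n
    return Fraction(prod(N - i for i in range(t * q)), N ** (t * q))

for n, q, t in [(8, 2, 3), (10, 4, 2), (12, 3, 4)]:
    eps1 = Fraction(q * q * t * t, 2 ** (n + 1))
    assert Z_ratio(n, q, t) >= 1 - eps1
print("epsilon_1 bound holds for all tested parameters")
```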
The unconditional security result of Theorem 2 is easy to convert (using a standard argument) into a computational security analogue.
Theorem 3. Let f denote any random permutation of I_n. Let F = F_MIL(f) denote the random function of F_{n,tn} obtained by applying to f the MILENAGE construction of Figure 3 (where the constants c_k and r_k (k = 1···t) are assumed to satisfy condition (C)). Let F* denote a perfect random function of F_{n,tn}. For any number of queries q such that t²q²/2^n ≤ 1/6, if there exists ε > 0 such that for any testing algorithm T with q(t+1) queries and less computational resources (e.g. time, memory, etc.) than any fixed finite or infinite bound R the advantage Adv_T(f, f*) of T in distinguishing f from a perfect n-bit random permutation f* is such that Adv_T(f, f*) < ε, then for any distinguishing algorithm A using q queries and less computational resources than R,
Adv_A(F, F*) < ε + t²q²/2^{n+1}
Proof. Let us show that if there existed a testing algorithm A capable of distinguishing F_MIL(f) from a perfect random function F* of F_{n,nt} with an advantage |p − p*| better than ε + q²t²/2^{n+1} using less computational resources than R, then there would exist a testing algorithm T allowing to distinguish f from a perfect random permutation with q(t+1) queries and less computational resources than R, with a distinguishing advantage better than ε. The test T of a permutation ϕ would just consist in performing the test A on F_MIL(ϕ). The success probability p′ of the algorithm A applied to F_MIL(f*) would be such that |p′ − p*| ≤ q²t²/2^{n+1} (due to Theorem 2), and therefore, due to the triangular inequality |p − p′| + |p′ − p*| ≥ |p − p*|, one would have |p − p′| ≥ ε, so that the advantage of T in distinguishing f from f* would be at least ε. QED
The following heuristic estimate of the success probability of some simple distinguishing attacks against the MILENAGE mode of operation indicates that the q²t²/2^{n+1} bound obtained in Theorem 2 is very tight, at least in the case where the r_i rotation amounts are equal to zero. Let us restrict ourselves to this case. Let us consider a q-tuple z = (z^1, ···, z^q) of F_MIL output values, where each z^i represents a t-tuple of distinct I_n values z_1^i, ···, z_t^i. Given any two distinct indexes i and j, the occurrence probability of a collision of the form z_k^i = z_l^j can be approximated (under heuristic assumptions) by 2/2^n, so that the overall collision probability among the qt output blocks of F_MIL is about q(q−1)t²/2^{n+1}. Moreover, each collision represents a distinguishing event with an overwhelming probability, due to the fact that z_k^i = z_l^j implies z_k^j = z_l^i. Thus the distinguishing probability given by this "attack" is less than (but close to) q²t²/2^{n+1}. This does not hold in the particular case where q = 1, but in this case another statistical bias, namely the fact that no collision ever occurs among the t output blocks, provides a distinguishing property of probability about t(t−1)/2^{n+1}, which is again close to q²t²/2^{n+1}.

5
Analysis of the Modified OFB Mode of Operation
The following analogue of Theorem 2 can be established for the modified OFB mode of operation (cf. Figure 4) introduced in Section 3.
Theorem 4. Let n be a fixed integer. Denote by f* a perfect random permutation of I_n. Let F = F_MOFB(f*) denote the random function of F_{n,tn} obtained by applying the modified OFB construction of Figure 4 to f*, and F* denote a perfect random function of F_{n,tn}. For any distinguishing algorithm A using any fixed number of queries q such that t²q²/2^n ≤ 1, we have
Adv_A(F, F*) ≤ 7t²q²/2^{n+1}
Proof sketch: the structure of the proof is the same as for the MILENAGE construction. We consider the same sets X and Z of q-tuples as in Section 4. As established in Section 4, |Z| ≥ (1 − ε_1)|I_nt|^q, where ε_1 = q²t²/2^{n+1}. For any fixed q-tuples x ∈ X and z ∈ Z of input and output values, it can be shown that
Pr[x →_{F_MOFB(f*)} z] ≥ (1/2^{ntq}) · (1 − ε_2), with ε_2 = 3q²t²/2^n
We can now apply Theorem 1 with ε_1 = q²t²/2^{n+1} and ε_2 = 3q²t²/2^n, so that we obtain the upper bound
Adv_A(F, F*) ≤ 7q²t²/2^{n+1}    QED
6
Conclusion
We have given some evidence that although "one-block-to-many" modes of operation of blockciphers have so far not been as well known and systematically studied as "many-blocks-to-one" MAC modes, both kinds of modes are of equal significance for applications such as mobile communications security. We have given security proofs, in the Luby-Rackoff security paradigm, of two simple one to many blocks modes, in which all invocations of the underlying blockcipher involve the same key. We believe that the following topics would deserve some further research:
– systematic investigation of alternative one to many blocks modes, e.g. modes involving more than one key, or modes providing security "beyond the birthday paradox";
– formal proofs of security for hybrid modes of operation including an expansion function, for instance for the combination of the expansion function x → (z_1, z_2, z_3, z_4) and the message authentication function (x, y) → z_0 provided by the complete MILENAGE construction.
Acknowledgements I would like to thank Steve Babbage, Diane Godsave and Kaisa Nyberg for helpful comments on a preliminary version of the proof of Theorem 2. I would also like to thank Marine Minier for useful discussions at the beginning of this work.
References

[BDJR97] M. Bellare, A. Desai, E. Jokipii, P. Rogaway, "A Concrete Security Treatment of Symmetric Encryption: Analysis of the DES Modes of Operation", Proceedings of the 38th Annual Symposium on Foundations of Computer Science, IEEE, 1997.
[BKR94] M. Bellare, J. Kilian, P. Rogaway, "The Security of Cipher Block Chaining", Advances in Cryptology – CRYPTO'94, LNCS 839, p. 341, Springer-Verlag, Santa Barbara, USA, 1994.
[BM84] M. Blum, S. Micali, "How to Generate Cryptographically Strong Sequences of Pseudo-Random Bits", SIAM J. Comput. 13(4), p. 850–864, 1984.
[BR00] J. Black, P. Rogaway, "A Block-Cipher Mode of Operation for Parallelizable Message Authentication", Advances in Cryptology – Eurocrypt 2002, LNCS 2332, p. 384–397, Springer-Verlag, 2002.
[DHY02] A. Desai, A. Hevia, Y. Yin, "A Practice-Oriented Treatment of Pseudorandom Number Generators", Advances in Cryptology – Eurocrypt 2002, LNCS 2332, Springer-Verlag, 2002.
[EJ02] P. Ekdahl, T. Johansson, "A New Version of the Stream Cipher SNOW", Proceedings of SAC'02.
[GL89] O. Goldreich, L. Levin, "A Hard-Core Predicate for All One-Way Functions", Proc. ACM Symp. on Theory of Computing, p. 25–32, 1989.
[HCCJ02] S. Halevi, D. Coppersmith, C.S. Jutla, "Scream: A Software-Efficient Stream Cipher", FSE 2002, p. 195–209, Springer-Verlag, 2002.
[HN00] J. Håstad, M. Näslund, "BMGL: Synchronous Key-stream Generator with Provable Security", Revision 1, March 6, 2001, and "A Generalized Interface for the NESSIE Submission BMGL", March 15, 2002, available at http://www.cosic.esat.kuleuven.ac.be/nessie/
[JJV02] E. Jaulmes, A. Joux, F. Valette, "On the Security of Randomized CBC-MAC Beyond the Birthday Paradox Limit: A New Construction", FSE 2002, p. 237–251, Springer-Verlag, 2002, and IACR ePrint archive 2001/074.
[Ka00] 3rd Generation Partnership Project – Specification of the 3GPP Confidentiality and Integrity Algorithms; Document 2 (TS 35.202): KASUMI algorithm specification; Document 1 (TS 35.201): f8 and f9 specifications; Document TR 33.904: Report on the Evaluation of 3GPP Standard Confidentiality and Integrity Algorithms, available at http://www.3gpp.org
[LR88] M. Luby, C. Rackoff, "How to Construct Pseudorandom Permutations from Pseudorandom Functions", SIAM Journal on Computing, vol. 17, p. 373, 1988.
[Ma92] U. Maurer, "A Simplified and Generalised Treatment of Luby-Rackoff Pseudo-random Permutation Generators", Advances in Cryptology – Eurocrypt'92, LNCS 658, p. 239, Springer-Verlag, 1992.
[Mi00] 3rd Generation Partnership Project – Specification of the MILENAGE Algorithm Set: An example algorithm set for the 3GPP authentication and key generation functions f1, f1*, f2, f3, f4, f5 and f5*; Document 2 (TS 35.206): Algorithm specification; Document 5 (TR 35.909): Summary and results of design and evaluation, available at http://www.3gpp.org
[Pa91] J. Patarin, "Etude de Générateurs de Permutations Basés sur le Schéma du D.E.S.", PhD thesis, University of Paris VI, 1991.
[Pa92] J. Patarin, "How to Construct Pseudorandom and Super Pseudorandom Permutations from One Single Pseudorandom Function", Advances in Cryptology – Eurocrypt'92, LNCS 658, p. 256, Springer-Verlag, 1992.
[PR00] E. Petrank, C. Rackoff, "CBC MAC for Real-Time Data Sources", Journal of Cryptology 13(3), p. 315–338, 2000.
[RC98] P. Rogaway, D. Coppersmith, "A Software-Optimized Encryption Algorithm", Journal of Cryptology 11(4), p. 273–287, 1998.
[Va98] S. Vaudenay, "Provable Security for Block Ciphers by Decorrelation", STACS'98, LNCS 1373, p. 249–275, Springer-Verlag, 1998.
[Va99] S. Vaudenay, "On Provable Security for Conventional Cryptography", Proc. ICISC'99, invited lecture.
Appendix: A Short Proof of Theorem 1

Let us restrict ourselves to the case of any fixed deterministic algorithm A which uses q adaptively chosen queries (the generalization to the case of a probabilistic algorithm is easy). A has the property that if the q-tuple of outputs encountered during an A computation is y = (y^1, ···, y^q), the value of the q-tuple x = (x^1, ···, x^q) of
query inputs encountered during this computation is entirely determined. This is easy to prove by induction: the initial query input x^1 is fixed; if for a given A computation the first query output is y^1, then x^2 is determined, etc. We denote by x(y) the single q-tuple of query inputs corresponding to any possible q-tuple y of query outputs, and we denote by S_A the subset of those values y ∈ I_m^q such that if the q-tuples x(y) and y of query inputs and outputs are encountered in an A computation, then A outputs the answer 1. The probabilities p and p* can be expressed using S_A as
p = Σ_{y∈S_A} Pr[x(y) →_F y]  and  p* = Σ_{y∈S_A} Pr[x(y) →_{F*} y]
We can now lower bound p using the following inequalities:
p ≥ Σ_{y∈S_A∩Y} (1 − ε_2) · Pr[x(y) →_{F*} y]   (due to inequality (ii))
≥ Σ_{y∈S_A} (1 − ε_2) · Pr[x(y) →_{F*} y] − Σ_{y∈I_m^q−Y} (1 − ε_2) · Pr[x(y) →_{F*} y]
But Σ_{y∈S_A} (1 − ε_2) · Pr[x(y) →_{F*} y] = (1 − ε_2) · p* and
Σ_{y∈I_m^q−Y} (1 − ε_2) · Pr[x(y) →_{F*} y] = (1 − ε_2) · (|I_m|^q − |Y|)/|I_m|^q ≤ (1 − ε_2) · ε_1   (due to inequality (i)).
Therefore, p ≥ (1 − ε_2)(p* − ε_1) = p* − ε_1 − ε_2 · p* + ε_1 · ε_2, thus finally (using p* ≤ 1 and ε_1 · ε_2 ≥ 0)
p ≥ p* − ε_1 − ε_2   (a)
If we now consider the distinguisher A′ whose outputs are the inverse of those of A (i.e., A′ answers 0 iff A answers 1), we obtain an inequality involving this time 1 − p and 1 − p*:
(1 − p) ≥ (1 − p*) − ε_1 − ε_2   (b)
Combining inequalities (a) and (b), we obtain |p − p*| ≤ ε_1 + ε_2. QED