On Mon, Nov 23, 2009 at 03:29:51PM +0100, Milan Broz wrote:
> On 11/23/2009 02:45 PM, Roscoe wrote:
> > On Thu, Nov 19, 2009 at 5:41 PM, Arno Wagner <arno@xxxxxxxxxxx> wrote:
> >> If I understand this correctly, this is the "iteration-count"
> >> parameter to PBKDF2. If so, then RFC 2898 recommends a minimum
> >> count of 1000 anyways. This is however not protection against
> >> a broken hash, as even a very weak hash should be extremely
> >> hard to break when iterated 10 times. The main purpose of this
> >> parameter is to make exhaustive search more expensive. I think
> >> this should definitely go up to 1000.
> >
> > I'd like to point out that this use of PBKDF2 is not as a KDF but
> > as a hash function. The recommendations in RFC 2898 will be from a
> > KDF perspective. The idea of someone doing an exhaustive search in
> > the LUKS mk context seems silly (RNG quality aside).
>
> You are right, but note that for a normal exhaustive key search
> (brute force, because the key is random here) you need to know some
> plain text on the disk (not a problem with a known FS signature,
> though).
>
> Here you always have the digest of the hashed key (+ known salt).
> If the PBKDF2 with its iterations is very quick and cheap, the
> exhaustive search for key candidates is much simpler here than
> with other techniques.
>
> Adding iterations for the mk digest is quite cheap and almost makes
> this search unusable (even if it is just a theoretical or silly
> attack).
>
> Or am I missing anything?

Well, it depends on the size of the key and its entropy content. If it
is a >= 128 bit high-entropy key, then exhaustive search will not work,
no matter what. If the entropy is lower (bad PRNG), a high iteration
count will make the search a lot harder.

As we have seen in the past, messing up randomness for crypto does
happen. Example: the last Debian problem with OpenSSL, although the
resulting key space was so small (16 bit) that requiring a second of
hashing would only have led to a day or so of attack time. But if a key
has only, say, 40 bits of entropy, a second of hashing gives something
like 17'000 years of CPU time for exhaustive search. Doable, but
expensive. (A short back-of-the-envelope calculation is sketched at the
end of this message.)

Bottom line: This should not be needed, but it does make the whole
construct more resilient against a bad PRNG, and it has no real cost as
it is a one-time effort on volume mount.

Arno
--
Arno Wagner, Dr. sc. techn., Dipl. Inform., CISSP -- Email: arno@xxxxxxxxxxx
GnuPG: ID: 1E25338F  FP: 0C30 5782 9D93 F785 E79C 0296 797F 6B50 1E25 338F
----
Cuddly UI's are the manifestation of wishful thinking. -- Dylan Evans

If it's in the news, don't worry about it. The very definition of
"news" is "something that hardly ever happens." -- Bruce Schneier
_______________________________________________
dm-crypt mailing list
dm-crypt@xxxxxxxx
http://www.saout.de/mailman/listinfo/dm-crypt
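
A back-of-the-envelope sketch (in Python) of the numbers discussed
above: the roughly 16-bit Debian/OpenSSL key space, the hypothetical
40-bit key, and a per-candidate cost of about one second of PBKDF2
hashing. The helper name and the concrete timings are illustrative
assumptions only; the point is that the search time scales with both
the size of the key space and the cost of one PBKDF2 digest
computation.

    # Rough cost model for exhaustively searching a low-entropy master key
    # against a stored PBKDF2 digest (+ known salt). All timings here are
    # illustrative assumptions, not measurements.

    SECONDS_PER_YEAR = 365.25 * 24 * 3600

    def average_search_time_years(entropy_bits, seconds_per_candidate):
        # On average the attacker has to try half of the 2^entropy_bits
        # candidates, one PBKDF2 digest computation per candidate.
        candidates = 2 ** (entropy_bits - 1)
        return candidates * seconds_per_candidate / SECONDS_PER_YEAR

    # 40 bits of entropy, iteration count tuned so one try costs ~1 second:
    print(average_search_time_years(40, 1.0))            # about 17'000 years

    # Same 40 bits, but a cheap digest (say 10 microseconds per try):
    print(average_search_time_years(40, 10e-6))          # about two months

    # ~16-bit key space (Debian OpenSSL case), ~1 second per try, in days:
    print(average_search_time_years(16, 1.0) * 365.25)   # well under a day

Raising the iteration count is what moves the per-candidate cost from
microseconds towards a second, which is why it helps exactly when the
key has less entropy than intended.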