On 14/05/2021 20:10, Clemens Fruhwirth wrote:
> On Fri, 14 May 2021 at 17:51, Milan Broz <gmazyland@xxxxxxxxx> wrote:
>>
>> On 14/05/2021 17:22, Clemens Fruhwirth wrote:
>>> On Fri, 14 May 2021 at 15:44, Milan Broz <gmazyland@xxxxxxxxx> wrote:
>>
>>>> From key file: The complete keyfile is read up to the compiled-in
>>>> maximum size. Newline characters do not terminate the input. The
>>>> --keyfile-size option can be used to limit what is read.
>>
>>> Did I choose this "up to the compiled-in maximum size" either
>>> explicitly or implicitly back in the day? Checking get_key inside
>>> lib/utils.c in the ancient release 1.0.6 from some time in 2007 looks
>>> as if there was no such limit.
>>
>> The hard limit was a patch I added later (in 1.3.0), and the default
>> is 8MB for keyfiles.
>>
>> If you use /dev/urandom or something like that as a keyfile, it eats
>> all your memory after some time and crashes.
>
> I am not sure why anyone would ever read keys from /dev/urandom. Maybe
> to create a throw-away encrypted swap partition, but supplying
> --key-size should resolve this. In the LUKS case, I think that
> /dev/urandom never makes sense as key material. I don't have strong
> opinions on whether a tool should protect you against OOMs when you
> give it an infinitely large file to read. Probably not.

There was some real issue with that, but it was introduced in 2011 and
I have already forgotten the details. Limiting the maximal keyfile size
to some reasonable value makes sense to me. (You can change the limit
at compile time anyway.)

>> Also, PBKDF2 has a nasty property: you can hash the input in advance,
>> and the output is the same, so super-large keyfiles do not make much
>> sense. (With Argon2 this is no longer the case.)
>
> I think that precomputability is a desired property of an HMAC, and as
> PBKDF2 uses it as a building block, we kind of inherit that.
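(For illustration, the inherited HMAC property can be checked directly with Python's hashlib; this is a standalone sketch with made-up keyfile contents, not cryptsetup code. Any key longer than SHA1's 64-byte HMAC block size is replaced by its hash, so a pre-hashed keyfile derives the identical PBKDF2 key, and the full key never needs to sit in memory at once.)

```python
import hashlib

# Hypothetical stand-in for a large keyfile: anything longer than
# SHA1's 64-byte HMAC block size triggers the key pre-hashing step.
keyfile = b"K" * (8 * 1024)
salt = b"0123456789abcdef"
iterations = 1000

dk1 = hashlib.pbkdf2_hmac("sha1", keyfile, salt, iterations)

# HMAC replaces an over-long key with its hash, so hashing the
# keyfile in advance yields the same derived key.
dk2 = hashlib.pbkdf2_hmac("sha1", hashlib.sha1(keyfile).digest(),
                          salt, iterations)
assert dk1 == dk2

# The pre-hash can be computed incrementally, chunk by chunk, so the
# keyfile never has to be held in memory in full.
h = hashlib.sha1()
for i in range(0, len(keyfile), 1024):
    h.update(keyfile[i:i + 1024])
dk3 = hashlib.pbkdf2_hmac("sha1", h.digest(), salt, iterations)
assert dk1 == dk3
print("all equal")  # prints "all equal"
```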
> Interestingly, with HMAC-SHA1 we could support keys larger than our
> memory, as with HMAC-SHA1(K,m) we don't need to ever keep K in memory
> in full.[1] But that's really not a worthwhile goal to begin with.
> Argon2 is certainly the better choice.
>
> But I don't think that you can produce output that is "the same"
> regardless of key size with PBKDF2 -- at least in theory -- as the
> derived key is a concatenation of xor-ed PRF outputs. If you disagree,
> maybe you want to elaborate on that :).

What I meant is the HMAC property in PBKDF2-HMAC: if the input is
longer than the internal block size of the underlying hash (64 bytes
for SHA1), the hash of the input is used instead. So both input and
SHA1(input) in PBKDF2-HMAC-SHA1 will produce the same derived key.

https://en.wikipedia.org/wiki/PBKDF2#HMAC_collisions

It was only a problem when we did not apply this optimization
ourselves; the PBKDF2 code then actually did a lot of unneeded work
(while an attacker running a brute force could just hash the input in
advance). (Older cryptsetup releases had this problem; now it uses the
crypto library for PBKDF2 anyway.)

Milan
_______________________________________________
dm-crypt mailing list -- dm-crypt@xxxxxxxx
To unsubscribe send an email to dm-crypt-leave@xxxxxxxx