Re: Several issues with cryptsetup 2.0.0

Hi,

Thanks for asking, and please send whatever other questions you have here.

We need feedback, and it seems the community is the only way to get it.

(After the third rejection of a paper, I am sure that storage conferences are
not a good place to expect any relevant feedback on storage security.
It seems only performance matters in reviews these days...
OK, that is just my problem 8-] )

Answers are inline below. Because some of the decisions are based on my
pragmatic approach, I am of course open to changing some LUKS2 defaults
if it makes sense and we have enough arguments.

The whole goal of LUKS2 is to improve security. There will be mistakes
and some corrections later, but it seems nobody else cares, so I tried
to push it forward...


We should also collect these comments somewhere (maybe the FAQ). I plan to do it,
but I am currently completely saturated by other things... sorry for that.


On 01/14/2018 02:21 AM, curve25519@xxxxxxxxxxx wrote:
> Hi everyone,
 
> But first I have a question related to the mailing list:
> How is it possible that the dm-crypt mailing list web interface and admin
> panel can't be accessed via a secure TLS or at least some broken old
> SSL connection? As in: can somebody please fix this?

Jana Saout is running this. Jana, do you have a plan to fix it, please?

I do not want to move the list to another provider (it has worked here for years),
but the TLS issue really concerns me as well (and I am not the admin).

...

> 1. The output of "sudo cryptsetup luksDump /dev/sdd1" looks pretty ok, except that
> Argon2i uses only the absolute minimum amount of RAM, 131072 KiB/128 MiB.
> I read in one of Milan's last posts on this list that --iteration-time
> takes precedence over the memory setting. But as I used the default
> options with 8GB of RAM I would have expected something way more in line
> with the examples on parameter choice from the IRTF Argon Draft paper
> (p. 12f).

Do you use the optimized system libargon2, or the cryptsetup compiled-in fallback
(which uses the unoptimized reference code)? The optimized code behaves better,
but it is not available in many distros (it is available in Debian).
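
(A quick way to check, if you are not sure: the system library should show up in the
binary's dynamic dependencies, so something like

  ldd $(which cryptsetup) | grep argon

should print a libargon2 line; if it prints nothing, you are most likely running the
compiled-in reference code.)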

But really, that is how the benchmark is designed: it prefers time.
If you really want to use more memory here, you have to increase the time cost as well...

But definitely expect that the benchmark will get some tweaks once we have
more data on how it behaves on users' systems.
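
Just as an illustration (the exact values depend on your system and on how long a wait
you can tolerate), raising both costs explicitly should get you past the benchmarked
default, for example:

  cryptsetup luksFormat --pbkdf argon2i --pbkdf-memory 524288 --iter-time 4000 /dev/sdd1

(device name taken from your dump example above).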

> 2. luksFormat fails with -1 if aes-xts-essiv is used
> This is probably expected, but I wanted to mention it anyway, as a poor
> man's version of the ESSIV+random IV Milan mused about in
> http://www.saout.de/pipermail/dm-crypt/2017-December/005778.html
> Anyways: is there something I can do to use aes-xts-essiv as a slight
> improvement over aes-xts-plain64? Or is this a stupid idea altogether?

Do not use ESSIV with XTS. Never.

First, there is a nasty bug with LUKS2 and ESSIV that I just posted info about.

But mainly, using ESSIV instead of plain64 in XTS mode only slows down
IV generation; it has no effect on security. The IV is just encrypted,
but because XTS runs it through an encryption block internally anyway,
I do not see any reason to use ESSIV here.

For the integrity-protected (authenticated) modes it is a different story,
but there I would prefer to go with a random IV, using modes that allow 128-bit nonces/IVs, in the future.
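
Just to make the cipher specifications explicit (the first is the current default and what
you should use with XTS; the second is the kind of mode where ESSIV actually has a purpose):

  aes-xts-plain64        # XTS, sector number as IV
  aes-cbc-essiv:sha256   # CBC, where a predictable IV would be a real problem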

 
> The following comments refer to using  --pbkdf argon2id --iter-time 2500
> --pbkdf-memory 1048576 --pbkdf-parallel 4 in addition to the settings
> from the example above:
> 
> 
> 3. Why does cryptsetup luksFormat allow at max 1048576kb (~1GB) of
> memory usage for Argon2? This seems incredibly low compared to the
> parameter choice recommendations from the Argon2 RFC draft. Even if you
> don't use those as defaults, why would you ever set an upper limit which
> is way below the recommendations?

This is a very good question. It was my decision to limit it internally until
we are sure how Argon2 behaves. The limit was introduced before the physical-memory
checks and perhaps should be increased now.

My major concern here is maintenance - there are a lot of systems that just will
not have >1GB of memory available. Unfortunately, in reality we see that Linux systems
kill the process here (OOM killer) instead of returning -ENOMEM, so we cannot even print
a useful message. People will complain.

I would also expect some strange process limits imposed by various container or systemd
services (and discussing these with the people developing them is not something I really enjoy, TBH).

I know it limits the use of Argon2, and I plan to increase it. Perhaps we should make
it a configure-time option.

I am not sure how the parameters in the RFC draft were calculated, but IMO they are not
generally usable on many of today's systems.

But you are right, this is something we should change.
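
(To be concrete about the current behaviour: with the internal limit in place, anything up
to --pbkdf-memory 1048576 is accepted, while a request above it, for example

  cryptsetup luksFormat --pbkdf argon2id --pbkdf-memory 2097152 /dev/sdd1

is rejected with an error rather than silently clamped.)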

> 4. Similar to the memory setting the thread count seems to be capped at
> the number of processor cores, even though the IRTF Argon Draft paper
> explicitly uses twice the amount of cores in ALL examples on parameter
> choice. Again, this might be acceptable as a default, but why is it a
> hard limit? Even if there is a good reason to do so (I can't imagine
> one), why is the user input silently ignored instead of throwing an
> error as with the memory setting?

The reason is similar to the previous one - we set the limits to be sane, and if that
works out, we can adjust them later.
The user input should not be ignored if you have enough CPUs.
(And I do not think it is a good idea to use twice as many threads (parallel cost) as
physical CPUs; that would make our benchmark calculate something completely different on
systems that have enough CPUs versus ones where it causes thread-switching craziness.)

Also, it will behave differently on different architectures (for example, the CPUs
in Power systems are completely virtualized).

So the honest answer: these were sane defaults I set; perhaps they are wrong and need
to be updated. We just need more people using it to be sure we did not break anything.
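
(A concrete, hypothetical example: on a machine where nproc reports 4, --pbkdf-parallel 4
is used as requested, while --pbkdf-parallel 8 is reduced back to 4 instead of being
rejected with an error.)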

Maybe I am just too careful, dunno.

> 5. No matter if I explicitly force the use of Argon2i or Argon2id,
> digests are still hardcoded to use pbkdf2.

Yes, and it is intentional. Using Argon2 for the digest does not make any sense.

The input to the PBKDF2 digest should be a random key; the digest should only verify
that the key is correct and must not allow any brute-force speed-up.

Moreover, for a brute-force search for the volume key (for example, if some flawed RNG is found),
an attacker will not use the digest comparison at all - he will try to decrypt one sector that
contains known plaintext (the ext4 magic, for example), and this will always be much faster than
any digest check.

So we do have the possibility to upgrade the digest algorithm in LUKS2 in the future, but I think
it really does not make any sense now. So I kept it compatible with LUKS1.


> 6. However it seems much more concerning to me that only 1000 rounds of
> pbkdf2 are applied to digests when --pbkdf-force-iterations is used
> (independent of --pbkdf parameter).

Yes, see above. This will not help an attacker even if it uses only 1 iteration.

The keyslot iterations are what matter. I later made the digest iteration count dynamic as well
(Clemens used a fixed 10 iterations in the early LUKS headers), but because of the attack mentioned
above (decrypting a sector with known plaintext) it does not actually help anything.

More info about this attack is in the paper by Bossi and Visconti,
"What Users Should Know About Full Disk Encryption Based on LUKS";
I think it should be available as a free preprint somewhere.

> 7. I did encounter some silent failures in my first tests (luksFormat
> finished fine, but device couldn't be mounted later) when using high
> values for --iter-time (working: up to ~2500, definately failing:
> 5000-10000) or --pbkdf-force-iterations (working: up to ~4, definately
> failing: 10, 12). However this was before I swapped the existing 6 GB of
> normal RAM with 8GB of ECC RAM. This might either be explained by the
> RAM being faulty or the lower amount. But as I couldn't verify this
> anymore today, I won't bother you with more details on this.

Well, I need a --debug log for these issues. But what you describe is strange;
if it was a HW fault, then anything can happen...
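
If you can reproduce it, a full log from something like

  cryptsetup --debug luksFormat --pbkdf argon2id --iter-time 5000 /dev/sdd1

(your original options plus one of the failing --iter-time values) would be very useful.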


> Other comments:
> 
> 8. From the ML:
> I [Milan Broz] disagree with Argon2id as a default [decision by Argon2
> authors]. [...] So final parameter set was decision after some test runs
> on real systems.
> 
> Could you please elaborate on your decision? I couldn't notice any
> performance issues with Argon2id in my (admittedly few) tests and I
> personally don't feel that the default values should be so far removed
> from the recommendations of the original authors who explicitly favor
> Argon2id in an FDE scenario.

This is just my (and some other people's) opinion, but I am a practitioner,
so I can easily be wrong.

Argon2id was added after the PHC ended because of some TMTO attacks
on Argon2i. Argon2d is data-dependent, so it possibly contains side channels,
while Argon2i is data-independent, but with possible memory-computation trade-offs.

So the authors split it 50/50 for Argon2id. I think that side-channel attacks
are worse in the LUKS context (imagine a container being opened while another process watches it).
To oversimplify it:
if an attacker can use a side channel against the data-dependent half, we are left with only
half of the Argon2i cost, which makes it even worse.
So I decided to make Argon2i the default, despite the TMTO issue. (And it is a configurable option.)
For more info see for example
 https://crypto.stackexchange.com/questions/48935/why-use-argon2i-or-argon2d-if-argon2id-exists
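
(And if you prefer the authors' recommendation, nothing stops you from selecting it
explicitly, for example

  cryptsetup luksFormat --pbkdf argon2id /dev/sdd1

The PBKDF type is stored per keyslot in the LUKS2 header, so opening works as usual.)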

Actually, I asked at a cryptography workshop what we should do, and the answer was
that we should wait for some better memory-hard algorithm and that the PHC should in fact be re-run.
That is not an option. I understand the reasoning, but we are better off preparing to change later
than waiting - PBKDF2 is so perfectly optimized on GPUs that we already have a problem now.

The PBKDF in the LUKS2 header can later be changed by the cryptsetup-reencrypt tool without touching
the data (or rather, this function will be there soon), so we will have a way to change it in-place later.

> Together with the low defaults and especially the upper limits for
> --pbkdf-memory and --pbkdf-parallel this doesn't really inspire
> confidence. At the very least the reasoning which lead to the choice of
> those defaults requires a coherent explanation in the man page.

The man page is not a good place to explain this. But I would welcome some advice:
if you have strong arguments about what the default should be, I am listening and we could
change it later.

> The upper limits are a mystery to me though and make me feel patronized.
> Even if choosing too big values would make cryptsetup crash this would
> at least be the consequence of a free decision of a (hopefully) informed
> user. Nobody prevents me from rm -rf / either.

OK, I understand.
Anyway, thanks for this opinion; there really does seem to be a contradiction with the
basic philosophy of cryptsetup (it will let you use ECB if you want, for example).

Maybe we should just print a warning and let the user shoot themselves in the foot.

> 9. On luksDump output format:
> - "Time:" below PBKDF algorithm seems to match the PBKDF iterations
> instead of --iter-time...this should probably be renamed. This only
> applies to Argon2, it's already called Iterations when using PBKDF2.

The current code should display "Time cost"; from the Argon2 library we use:

@param t_cost Number of iterations

While this is the number of iterations for Argon2, it can mean something different in a
later-added PBKDF, so my intention was not to name it separately for every algorithm
(PBKDF2 is the exception). (There is also the existing usability problem that we have the
-i / --iter-time CLI option for it...)

I am not sure it is worth changing now; these fields are used in scripts,
and a change would break them.

> 10. This is obviously a minor nitpick but it seems cryptsetup benchmark
> still uses the 800ms default which afair was bumped up to 2000 as OOM
> killer prevention. It would be nice if the default values were also used
> for the benchmark.

Yes, it is wrong. There will be more such small fixes.

> 11. The default numbers for --pbkdf-memory compared with the
> defaults/minimums claimed on this list (e.g. 131072 kB vs 128MB) don't
> match up, which seems to indicate Kibibytes, and Mebibytes were meant.
> However kilobytes is used in the manpage and output. Could you enlighten
> me if the base unit is actually kB or KiB?

Every time I use the term kibibytes, someone starts to scream :-)

I think all printed numbers are in 2^x (1024-based) units and not SI (10^x) units, so only
the unit description is inconsistent. We should unify it in the text.
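
(Spelled out: the printed 131072 is KiB, and 131072 KiB * 1024 = 134217728 bytes = 128 MiB,
which matches the 128MB minimum mentioned on the list; read as SI kB it would be only ~131 MB.)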

Thanks!
Milan
_______________________________________________
dm-crypt mailing list
dm-crypt@xxxxxxxx
http://www.saout.de/mailman/listinfo/dm-crypt


