On 5/28/24 12:12 AM, Eric Wheeler wrote:
> On Wed, 15 May 2024, Mikulas Patocka wrote:
>> Hi
>>
>> Some NVMe devices may be formatted with an extra 64 bytes of metadata
>> per sector.
>>
>> Here I'm submitting for review dm-crypt patches that make it possible
>> to use this per-sector metadata for authenticated encryption. With
>> these patches, dm-crypt can run directly on top of an NVMe device,
>> without using dm-integrity. These patches double write throughput,
>> because there is no longer a write to the dm-integrity journal.
>>
>> An example of how to use it (so far, there is no support in the
>> userspace cryptsetup tool):
>>
>> # nvme format /dev/nvme1 -n 1 -lbaf=4
>> # dmsetup create cr --table '0 1048576 crypt
>> capi:authenc(hmac(sha256),cbc(aes))-essiv:sha256
>> 01b11af6b55f76424fd53fb66667c301466b2eeaf0f39fd36d26e7fc4f52ade2de4228e996f5ae2fe817ce178e77079d28e4baaebffbcd3e16ae4f36ef217298
>> 0 /dev/nvme1n1 0 2 integrity:32:aead sector_size:4096'
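
(A quick sanity check after the format step, assuming the drive exposes a
4096+64 LBA format at index 4 - the index is device-specific - would look
something like this:

# nvme id-ns /dev/nvme1n1 | grep 'in use'
lbaf  4 : ms:64  lbads:12 rp:0 (in use)

i.e. 64 bytes of metadata per 4096-byte (2^12) data sector, which is what
the integrity:32:aead + sector_size:4096 table line above relies on.)
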
> That's really an amazing feature, and I think your implementation is
> simple and elegant. It somehow reminds me of the 520/528-byte sectors
> that big commercial filers use, but done in a way that Linux can use.
>
> Questions:
>
> - I see you are using 32 bytes of AEAD data (out of 64). Is the AEAD
>   data always 32 bytes, or can it vary by crypto mechanism?
Hi Eric,

I'll try to answer this question, as this is where we have been heading
with dm-integrity+dm-crypt since the beginning - replacing it with HW
atomic sector+metadata handling once suitable HW becomes available.
Currently, dm-integrity allocates the exact space for whatever AEAD you
want to construct (cipher-xts/hctr2 + hmac) or for a native AEAD (my
favourite here is AEGIS).

So it depends on the configuration; the only difference from dm-integrity
is that the HW allocates a fixed 64 bytes and the crypto can use up to
that space, but it should be completely configurable in dm-crypt. IOW,
the space really used can vary by crypto mechanism.

Definitely, it is now enough for a real AEAD, compared to the legacy
512+8 DIF :)

Also, it opens a way to store something more (a per-sector context) in
the metadata, but that's an idea for the future (usable in fs encryption
as well, I guess).
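
To make that concrete (the byte counts below are the usual tag/IV sizes
for these algorithms, not something taken from the patches themselves),
the integrity: parameter simply has to match what the chosen algorithm
stores per sector:

  capi:authenc(hmac(sha256),cbc(aes))-essiv:sha256 ... integrity:32:aead
      (32-byte HMAC-SHA256 tag; ESSIV is derived, so no IV is stored)
  capi:authenc(hmac(sha512),cbc(aes))-essiv:sha256 ... integrity:64:aead
      (64-byte HMAC-SHA512 tag - this would just fill the 64-byte area)
  capi:gcm(aes)-random ... integrity:28:aead
      (16-byte GCM tag + 12-byte random IV stored in the metadata)
  capi:aegis128-random ... integrity:32:aead
      (16-byte AEGIS tag + 16-byte random IV stored in the metadata)

Anything that needs more than 64 bytes per sector would not fit this
LBA format.
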
> - What drive are you using? I am curious what your `nvme id-ns` output
>   looks like. Do you have 64 in the `ms` value?
# nvme id-ns /dev/nvme0n1 | grep lbaf
nlbaf   : 0
nulbaf  : 0
lbaf  0 : ms:0  lbads:9  rp:0 (in use)
          ^ ms:0 = no metadata space; lbads:9 = 512-byte sectors
This is still the major issue - I think only enterprisey NVMe drives can
do this so far.
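
For anyone checking their own drives: nvme-cli can list all supported LBA
formats in a human-readable form (the exact output formatting varies a
bit between nvme-cli versions), e.g.:

# nvme id-ns -H /dev/nvme0n1 | grep 'LBA Format'
LBA Format  0 : Metadata Size: 0   bytes - Data Size: 512 bytes - Relative Performance: 0 Best (in use)

A drive usable for this feature needs at least one format with a non-zero
Metadata Size (64 bytes for the dm-crypt example above).
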
Milan