Thank you all for your advice.
Meanwhile, I have read "Practical Cryptographic Data Integrity Protection with Full Disk Encryption, Extended Version" (1 July 2018). Very interesting publication. :-)
1) Performance seems pretty good for linear-access 4k block writes with NO JOURNAL. However, you don't indicate how you ran this test (cp, dd, fio...?).
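For what it's worth, here is how I would try to reproduce a linear 4k write test with fio (just my guess at the method; the device name is from my own stack, and this destroys any data on it):

fio --name=linear-write --filename=/dev/mapper/raid0luks2 --rw=write --bs=4k --direct=1 --ioengine=libaio --iodepth=32 --runtime=60 --time_based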
2) I will reconsider building my stack again (there is no data on it). Do you have any advice regarding the fs, RAID0, and luksFormat (type luks2) creation? Indeed, I don't clearly understand this point:
"dm-integrity interleaves data and metadata and consequently data are not aligned on raid stripe boundary."
"dm-integrity interleaves data and metadata and consequently data are not aligned on raid stripe boundary."
From what I understand, this will defeat the main goal of my RAID0, which is fast performance.
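For reference, here is roughly how I plan to recreate the stack (hypothetical device names; 4096-byte sectors assumed, see my question 4 below):

mdadm --create /dev/md127 --level=0 --raid-devices=3 --chunk=32 /dev/nvme0n1p1 /dev/nvme1n1p1 /dev/nvme2n1p1
cryptsetup -v luksFormat --type luks2 --cipher aes-gcm-random --integrity aead --sector-size 4096 /dev/md127
cryptsetup open /dev/md127 raid0luks2
mkfs.ext4 /dev/mapper/raid0luks2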
3) I want the integrity features and I want to try the no-journal option for the performance gain. Is the following the right way to use it?
cryptsetup -v luksFormat --type luks2 /dev/md127 --cipher aes-gcm-random --integrity aead --integrity-no-journal
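If I read the man page correctly, the same flag also has to be passed at activation time, or stored in the LUKS2 header with --persistent (my understanding, please correct me if I am wrong):

cryptsetup open --integrity-no-journal --persistent /dev/md127 raid0luks2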
4) I am not sure I understand the relation between the sector sizes of each layer of the stack, since we introduce a layer (dif) with metadata below the dm-crypt layer and above the storage devices. Must 4096 bytes be chosen for all layers?
--sector-size <bytes>
Set sector size for use with disk encryption. It must be power of two and in range 512 - 4096 bytes. The default is 512 bytes sectors. This option is available only in the LUKS2 mode. Note that if sector size is higher than underlying device hardware sector and there is not integrity protection that uses data journal, using this option can increase risk of incomplete sector writes during a power fail. If used together with --integrity option and dm-integrity journal, the atomicity of writes is guaranteed in all cases (but it costs write performance - data has to be written twice). Increasing sector size from 512 bytes to 4096 bytes can provide better performance on most of the modern storage devices and also with some hw encryption accelerators.
Thank you very much for the explanations. Any links, articles, or documents answering my questions are welcome. (I have already read https://manpages.debian.org/unstable/cryptsetup-bin/cryptsetup.8.en.html and your publication.)
I will try to build the stack again with the --debug option.
Kind regards
On Fri, 19 Oct 2018 at 08:47, Milan Broz <gmazyland@xxxxxxxxx> wrote:
On 18/10/2018 19:51, Mikulas Patocka wrote:
> On Fri, 12 Oct 2018, laurent cop wrote:
>
>> Hello,
>>
>> I have trouble while in top of the stack :
>> mkfs.ext4 /dev/mapper/raid0luks2
>> /dev/mapper/raid0luks2 alignment is offset by 147456 bytes.
>> This may result in very poor performance; (re-)partitioning is suggested.
>> I have the same problem with mkfs.xfs.
>
> Just ignore this warning. dm-integrity interleaves data and metadata and
> consequently data are not aligned on raid stripe boundary.
>
>> I am using following scheme:
>>
>> LUKS2 => created with cryptsetup -v luksFormat --type luks2 /dev/md127 --cipher aes-gcm-random --integrity aead
>> RAID 0 => created with mdadm chunk=32
>> 3 NVMe disks => partitioned with gdisk (first sector: 2048)
>>
>> I got this information with lsblk -t; for each of the 3 disks, the output is the same:
>> nvmeXn1
>> -> nvmeXn1p1 ALIGN = 0
>> -> md127 ALIGN = 0
>> -> raid0luks2_dif ALIGN = 147456
>> -> raid0luks2 ALIGN = 147456
>> 1) How can I solve my alignment issue?
>>
>> 2) Is it normal to have low performance while writing with dd? I was
>> using LUKS previously and I got only one dev-mapper device. Now I have 2. Does
>> this have a big impact on performance?
>>
>> Kind regards.
>
> dm-integrity already slows down writes about 3 times. In this situation,
> trying to align accesses to the raid stripe size doesn't make much sense.
Actually, I think that the alignment reported there does not make any
sense for dm-integrity (there is not a simple offset; data and metadata are interleaved,
there is a journal...). dm-integrity here behaves more like a filesystem - what would a single
offset mean?
Anyway, Mikulas disagrees with me about simply removing it :-)
> If you want to improve performance, you may try to put two dm-integrity
> images directly on the SSDs and create raid-0 on top of them. It may
> perform better, but you'll have to benchmark it with the workload you are
> optimizing for.
You can do this with LUKS2 authenticated encryption as well, but I am not sure
it is a good idea - it will eat CPU time for encryption for each raid 0 member.
Milan
--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel