On Fri, 12 Oct 2018, laurent cop wrote:
> Hello,
>
> I am having trouble at the top of the stack:
> mkfs.ext4 /dev/mapper/raid0luks2
> /dev/mapper/raid0luks2 alignment is offset by 147456 bytes
> This may result in very poor performance, (re)-partitioning suggested.
> I have the same problem with mkfs.xfs.
Just ignore this warning. dm-integrity interleaves data and metadata, so
the data is not aligned on a RAID stripe boundary.
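If you still want the filesystem to know the RAID geometry, you can pass
it explicitly instead of letting mkfs guess from the (now shifted)
topology. A minimal sketch, assuming the default 4 KiB ext4 block size:
with a 32 KiB chunk, stride = 32 KiB / 4 KiB = 8, and with 3 data disks
in RAID 0, stripe_width = 8 * 3 = 24:

  mkfs.ext4 -E stride=8,stripe_width=24 /dev/mapper/raid0luks2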
> I am using the following scheme:
>
> LUKS2 => created with cryptsetup -v luksFormat --type luks2 /dev/md127 --cipher aes-gcm-random --integrity aead
> RAID 0 => created with mdadm, chunk=32
> 3 NVMe disks => partitioned with gdisk (first sector: 2048)
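
For reference, a minimal sketch of how such a stack is typically
assembled (device names are illustrative, taken from your description):

  # partition each disk, first sector 2048 (sgdisk shown; gdisk is interactive)
  sgdisk -n 1:2048:0 /dev/nvme0n1    # repeat for nvme1n1 and nvme2n1

  # RAID 0 across the three partitions, 32 KiB chunk
  mdadm --create /dev/md127 --level=0 --raid-devices=3 --chunk=32 \
      /dev/nvme0n1p1 /dev/nvme1n1p1 /dev/nvme2n1p1

  # LUKS2 with authenticated encryption (dm-integrity created underneath)
  cryptsetup -v luksFormat --type luks2 --cipher aes-gcm-random \
      --integrity aead /dev/md127
  cryptsetup open /dev/md127 raid0luks2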
>
> I got this information with lsblk -t; all 3 disks show the same values:
> nvmeXn1
> -> nvmeXn1p1 ALIGN = 0
> -> md127 ALIGN = 0
> -> raid0luks2_dif ALIGN = 147456
> -> raid0luks2 ALIGN = 147456
> 1) How can I solve my alignment issue?
>
> 2) Is it normal to get low performance when writing with dd? I was
> using LUKS before and had only one device-mapper target; now I have
> two. Does that have a big impact on performance?
>
> Kind regards.
dm-integrity already slows down writes by a factor of about 3. In this
situation, trying to align accesses to the RAID stripe size doesn't make
much sense.
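
To see how much of the slowdown comes from the integrity layer on your
hardware, you can compare direct writes at different levels of the
stack. A rough sketch (illustrative sizes; oflag=direct bypasses the
page cache, and writing to these devices destroys any data on them,
including the LUKS header on /dev/md127):

  # raw RAID device (destructive!)
  dd if=/dev/zero of=/dev/md127 bs=1M count=4096 oflag=direct status=progress
  # full stack: encryption + integrity
  dd if=/dev/zero of=/dev/mapper/raid0luks2 bs=1M count=4096 oflag=direct status=progress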
If you want to improve performance, you may try to put the dm-integrity
images directly on the SSDs (one per disk) and create RAID 0 on top of
them. It may perform better, but you'll have to benchmark it with the
workload you are optimizing for.
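
A minimal sketch of that alternative layout, using standalone
dm-integrity (integritysetup) under the RAID. Note that standalone
dm-integrity defaults to crc32 block checksums rather than the AEAD tags
you get from LUKS2 --integrity, so it is not an exact functional
equivalent:

  # dm-integrity directly on each partition (destroys existing data)
  integritysetup format /dev/nvme0n1p1
  integritysetup open /dev/nvme0n1p1 int0
  # ...same for nvme1n1p1 -> int1 and nvme2n1p1 -> int2

  # RAID 0 on top of the integrity devices
  mdadm --create /dev/md0 --level=0 --raid-devices=3 --chunk=32 \
      /dev/mapper/int0 /dev/mapper/int1 /dev/mapper/int2

  # plain LUKS2 on top (no --integrity; integrity is already below the RAID)
  cryptsetup luksFormat --type luks2 /dev/md0
  cryptsetup open /dev/md0 raid0luks2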
Mikulas