On Thu, Nov 30, 2023 at 04:40:00PM -0800, Eric Wheeler wrote:
> integritysetup format $idev0 --integrity xxhash64 --batch-mode
> integritysetup format $idev1 --integrity xxhash64 --batch-mode
>
> integritysetup open --integrity xxhash64 --allow-discards $idev0 dm-integrity0
> integritysetup open --integrity xxhash64 --allow-discards $idev1 dm-integrity1
>
> mdadm --create $MD_DEV --metadata=1.2 --assume-clean --level=1 --raid-devices=2 /dev/mapper/dm-integrity[01]
>
> # 1. This should be enough to trigger it:
> SSD_DEV=$MD_DEV
>
> # 2. If not, then wrap /dev/md9 in a linear target:
> #linear_add ssd $MD_DEV
> #SSD_DEV=/dev/mapper/ssd

Interesting that just adding a linear layer there would have some
effect.

> # Create a writable header for the PV meta:
> dd if=/dev/zero bs=1M count=16 oflag=direct of=/tmp/pvheader
> loop=`losetup -f --show /tmp/pvheader`
> linear_add pv $loop /dev/nullb0
>
> # Create the VG
> lvmdevices --adddev $SSD_DEV
> lvmdevices --adddev /dev/mapper/pv
> vgcreate $VGNAME /dev/mapper/pv $SSD_DEV
>
> # Create the pool:
> lvcreate -n pool0 -L 1T $VGNAME /dev/mapper/pv
> lvcreate -n meta0 -L 512m $VGNAME $SSD_DEV
>
> # Make sure the meta volume is on the SSD (it should be already from above):
> pvmove -n meta0 /dev/mapper/pv

I'd omit that pvmove if possible, just in case it makes some unexpected
change. You have more than enough layers to complicate things as it is
without pvmove adding dm-mirror to the mix.

> lvconvert -y --force --force --chunksize 64k --type thin-pool --poolmetadata $VGNAME/meta0 $VGNAME/pool0

It's not a bad idea to use mdraid over dm-integrity, but it would be
interesting to know if doing raid+integrity in lvm would have the same
problems, e.g.

  lvcreate --type raid1 --raidintegrity y -m1 -L 512m -n meta0 $vg /dev/ram[01]
  lvcreate -n pool0 -L 1T $vg /dev/nullb0
  lvconvert --type thin-pool --poolmetadata meta0 $vg/pool0

Dave
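
The linear_add helper used in the reproduction script above is not
defined in this excerpt. A minimal sketch of what such a helper might
look like, assuming it simply concatenates its block-device arguments
into a single dm-linear target named after its first argument:

  # Hypothetical reconstruction of linear_add (the real helper is not
  # shown above): build a dm-linear table that concatenates the given
  # devices and create /dev/mapper/<name> from it.
  linear_add() {
      local name=$1; shift
      local table="" start=0 dev sz
      for dev in "$@"; do
          sz=$(blockdev --getsz "$dev")   # device size in 512-byte sectors
          table+="$start $sz linear $dev 0"$'\n'
          start=$((start + sz))
      done
      printf '%s' "$table" | dmsetup create "$name"
  }

With a single device argument, as in "linear_add ssd $MD_DEV", this is
just a 1:1 pass-through mapping over the device, which is how step 2 of
the script uses it.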
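
On the pvmove point: since the lvcreate for meta0 already names
$SSD_DEV as its PV, the placement can be confirmed without bringing
pvmove's dm-mirror into the stack by inspecting the allocation, e.g.:

  # Show which PV backs each LV; meta0 should list $SSD_DEV in the
  # Devices column, making the pvmove step unnecessary.
  lvs -o name,size,devices $VGNAME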