Re: move dm-integrity metadata to another PV

On Mon, Apr 05, 2021 at 10:08:22AM -0500, David Teigland wrote:
> On Sun, Apr 04, 2021 at 08:18:56PM +0200, Sebastian Bachmann wrote:
> > Hello,
> > 
> > I played around with the new dm-integrity integration in LVM. Unfortunately,
> > it is very slow, since the checksums have to be written and read as well -
> > which is the price to pay, obviously.
> > 
> > I thought that it might be a good idea to move the metadata to a fast disk,
> > i.e., a SSD, however that is not possible and I get an error message on pvmove:
> > 
> > # pvmove -n r10_int_rimage_0_imeta /dev/sdd /dev/sda2
> >   Unable to pvmove device used for raid with integrity.
> > 
> > I could not find a reason why this should not be done in theory, thus I guess
> > that this is simply not supported by LVM right now?
> > Or is there another reason why you should keep the metadata always on the same
> > device?
> 
> The original implementation allowed a specific device, e.g. an ssd, to
> hold all the integrity metadata.  Integrity metadata for all raid images
> lived on one device, so there was some doubt that anyone would want to use
> it, and the option was dropped.  That option could also be used with
> linear+integrity.  Without raid, corrupt data found by integrity couldn't
> be recovered, so linear+integrity was also disabled.  Given those
> limitations, I'm curious how useful you'd find a dedicated integrity
> metadata device, and/or using integrity with a linear LV?

Ah okay, I did not know that this had already been considered.
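
(For reference, I created the LV with the standard raid+integrity
syntax, roughly like this - the names and sizes here are approximate:

# lvcreate --type raid10 -i 2 --raidintegrity y -L 1T -n r10_int vg

which is why every _imeta sub-LV ended up on the same PV as its data
image.)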

So the worst case would be that the SSD and an HDD get corrupted at the
same time: then the integrity data could not be recovered and the
corruption on the disk would go undetected. With RAID you would still
see that there is corruption, but with linear you would not know that
anything was corrupted, right?
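
(As far as I can tell from the docs, mismatches that integrity does
detect can be inspected per image with something like:

# lvs -o+integritymismatches vg/r10_int_rimage_0

so at least the RAID case leaves a visible trace.)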

Right now, I have a RAID without any integrity, so enabling it with the
metadata on another disk would be an improvement in any case. I think
losing the SSD with the integrity data would be less bad than the slow
reads and writes.
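
(For an existing RaidLV the conversion route should work just as well,
e.g.:

# lvconvert --raidintegrity y vg/r10_int

it is only relocating the resulting _imeta sub-LVs that pvmove refuses.)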

For the linear case, I would say it would still be useful. For example,
you would at least know that something is wrong and could start to
investigate. I guess it would be a bad idea to put too much faith in
that setup, though. How is this handled on btrfs, for example?

Maybe putting the integrity metadata itself on a RAID would be an option...

But I have not thought that through fully, so maybe there is something that I
missed.

Best,
Sebastian

_______________________________________________
linux-lvm mailing list
linux-lvm@xxxxxxxxxx
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/



