On 2022-09-27 12:10 Roberto Fastec wrote:
> Questions:
> 1. Given premise 3: are the corresponding LVM2 metadata/tables just (allow me the term) a "grid" mapping that space in an ordered sequence, so that subsequent use (and filling) of the RAID space simply marks cells as used or free? Or can those grid cells end up in a scrambled order?
Classical linear LVM volumes (read: not lvmthin) are mostly concatenations of 4 MB-sized chunks of space, but this is not a given (especially if some volumes have changed in size).
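You can inspect that mapping yourself; as a minimal sketch (VG/LV and device names below are hypothetical):

  # show each segment of the LV and the physical extent ranges backing it
  lvs -o lv_name,seg_start_pe,seg_size_pe,seg_pe_ranges vg0/mylv

  # show, per PV, which extent ranges belong to which LV
  pvdisplay --maps /dev/sdb1

A freshly created volume usually shows one contiguous segment; volumes that were grown or shrunk over time can be scattered across several ranges, which is exactly the "messed order" case.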
> To be explicit: in case of metadata corruption (again with respect to premise 3), could we just generate a dummy metadata table with all the extents marked as "used", so that we can still access them?
For linear volumes, one can try to set up a dmtable (or dummy metadata) to linearly read the data but, as stated above, this is far from reliable.
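As a minimal sketch of that approach (device name, offset, and sector count below are hypothetical; the real values must come from your PV layout, and the mapping should only be used read-only):

  # map the PV payload (which usually starts 1 MiB = 2048 sectors into the
  # device, after the LVM label/metadata area) as a single linear target
  dmsetup create rescued --readonly --table "0 1953521664 linear /dev/sdb1 2048"

  # then probe the result without writing to it
  mount -o ro /dev/mapper/rescued /mnt/rescue

This only works if the volume really was one contiguous run of extents; with multiple segments you would need one table line per segment, in the right order.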
> 2. Does a sort of "fsck" for the LVM2 metadata exist? We do technical assistance and recently, specifically with those NAS devices that use LVM2, we have experienced very easy metadata corruption after essentially nothing, or after an electric power interruption (which is really astonishing). We mean no drive failures, no bad SMART data. Just corruption from "nowhere" with "no cause".
For classical LVM, the metadata are actually backed up in ASCII format under /etc/lvm. While LVM itself keeps a binary metadata representation, it also accepts/stores the textual form, so you can use the latter to restore your volumes.
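For example, restoring from the textual backup boils down to (VG name hypothetical):

  # list the textual backups/archives LVM kept for this VG
  vgcfgrestore --list vg0

  # write the chosen backup back to the PV metadata areas
  vgcfgrestore -f /etc/lvm/backup/vg0 vg0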
Did you notice how I explicitly talked about *classical* volumes? This is because thin volumes (man lvmthin) use completely different, and much more complex, allocation strategies. Losing such metadata would kill the entire thin pool, and this is the reason a backup metadata volume is required for some operations. thin_check is effectively a sort of "lvmthin fsck", but if you ever need to use it, be prepared for data loss (ranging from small to massive).
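A minimal sketch of such a check/repair, assuming the pool is inactive and its metadata device is accessible (names hypothetical):

  # check the thin-pool metadata device for consistency
  thin_check /dev/mapper/vg0-pool_tmeta

  # if broken, try rebuilding the metadata into a spare LV of equal size
  thin_repair -i /dev/mapper/vg0-pool_tmeta -o /dev/mapper/vg0-meta_repaired

  # the higher-level LVM wrapper that drives roughly the same process:
  lvconvert --repair vg0/pool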
I have seen various NAS devices that use custom-patched lvmthin volumes, and I suspect this is the root of your issues. If it is acceptable for your workload, try using classical LVM on these NAS devices.
Regards.

--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@xxxxxxxxxx - info@xxxxxxxxxx
GPG public key ID: FF5F32A8