Higher than expected metadata usage?

Hi all,
I can't wrap my head around the following reported data vs. metadata usage before/after a snapshot deletion.

The system is an up-to-date CentOS 7.4 x86_64 box.

BEFORE SNAP DEL:
[root@ ~]# lvs
  LV           VG         Attr       LSize Pool         Origin  Data%  Meta%  Move Log Cpy%Sync Convert
  000-ThinPool vg_storage twi-aot---  7.21t                     80.26  56.88
  Storage      vg_storage Vwi-aot---  7.10t 000-ThinPool        76.13
  ZZZSnap      vg_storage Vwi---t--k  7.10t 000-ThinPool Storage

As you can see, an ~80% full data pool resulted in ~57% metadata usage.

AFTER SNAP DEL:
[root@ ~]# lvremove vg_storage/ZZZSnap
  Logical volume "ZZZSnap" successfully removed
[root@ ~]# lvs
  LV           VG         Attr       LSize Pool         Origin Data%  Meta%  Move Log Cpy%Sync Convert
  000-ThinPool vg_storage twi-aot---  7.21t                    74.95  36.94
  Storage      vg_storage Vwi-aot---  7.10t 000-ThinPool        76.13

Now data is at ~75% (5 points lower), but metadata is at only ~37%: a whopping 20-point metadata drop for a mere 5 points of data freed.

This was unexpected: I thought there was a more or less linear relation between data and metadata usage since, after all, the former counts allocated chunks and the latter tracks them. I know that snapshots put additional overhead on metadata tracking, but based on previous tests I expected this overhead to be much smaller. In this case we are talking about a 4x amplification for a single snapshot. This concerns me because I want to *never* run out of metadata space.
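For reference, here is a quick back-of-envelope from the numbers above. It is only a rough sketch: it assumes Meta% scales linearly with metadata bytes used and ignores any fixed btree/node overhead, but it shows that the apparent metadata cost per allocated pool chunk is not constant across the two readings:

awk 'BEGIN {
    chunks = 7.21 * 1024 * 1024 / 4;   # 7.21t pool / 4.00m chunks -> ~1.89M mappable chunks
    tmeta  = 116 * 1024 * 1024;        # 116.00m tmeta LV, in bytes
    printf "metadata bytes per chunk if fully mapped: %.1f\n", tmeta / chunks;
    printf "bytes per allocated chunk, before del:    %.1f\n", (0.5688 * tmeta) / (0.8026 * chunks);
    printf "bytes per allocated chunk, after del:     %.1f\n", (0.3694 * tmeta) / (0.7495 * chunks);
}'

This prints roughly 64.4, 45.6 and 31.7 bytes respectively, i.e. the per-allocated-chunk cost dropped by about a third once the snapshot was gone.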

If it can help: just after taking the snapshot I sparsified some files on the mounted filesystem, *without* fstrimming it (so, from the lvmthin standpoint, nothing changed in chunk allocation).

What am I missing? Is the "Data%" field a measure of how many data chunks are allocated, or does it also track how full these chunks are? The latter would benignly explain the observed discrepancy, as a partially-full data chunk can store additional data without any new metadata allocation.
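For completeness, I can also pull the raw kernel counters straight from device-mapper, since thin-pool status reports used/total metadata blocks and used/total data blocks directly. The device name below is just my guess at the dm name for vg_storage/000-ThinPool (dashes in LV/VG names are doubled and the active pool usually gets a -tpool suffix), so something like:

dmsetup status vg_storage-000--ThinPool-tpool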

Full LVM information:

[root@ ~]# lvs -a -o +chunk_size
  LV                   VG         Attr       LSize   Pool         Origin Data%  Meta%  Move Log Cpy%Sync Convert Chunk
  000-ThinPool         vg_storage twi-aot---   7.21t                     74.95  36.94                           4.00m
  [000-ThinPool_tdata] vg_storage Twi-ao----   7.21t                                                                0
  [000-ThinPool_tmeta] vg_storage ewi-ao---- 116.00m                                                                0
  Storage              vg_storage Vwi-aot---   7.10t 000-ThinPool        76.13                                      0
  [lvol0_pmspare]      vg_storage ewi------- 116.00m                                                                0

Thanks.

--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8



