On 27.3.2018 at 09:44, Gionatan Danti wrote:
> What am I missing? Is the "data%" field a measure of how many data chunks are
> allocated, or does it even track "how full" these data chunks are? This would
> benignly explain the observed discrepancy, as partially-full data chunks can
> be used to store other data without any new metadata allocation.
Hi
I forgot to mention there is a "thin_ls" tool (it comes with the
device-mapper-persistent-data package, together with thin_check) - for those
who want to know the precise amount of allocation, and how many blocks are
owned exclusively by a single thinLV versus shared with others.
It's worth noting that the numbers printed by 'lvs' are *JUST* rough
estimates of data usage, for both thin pools and thin volumes.
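For example (hypothetical VG and pool names - the data% column below is
exactly the rough estimate being discussed):

  # quick overview; data%/meta% are coarse, kernel-reported estimates
  lvs -o lv_name,data_percent,metadata_percent vg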
The kernel does not maintain the full data set - only the portion it needs -
and since a detailed, precise evaluation is expensive, it is deferred to the
thin_ls tool...
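A minimal sketch of running it against a live pool (assuming a pool vg/pool;
the device names under /dev/mapper depend on your setup, and the exact output
field names are per the thin_ls man page):

  # reserve a metadata snapshot so thin_ls can safely read a live pool
  dmsetup message /dev/mapper/vg-pool-tpool 0 reserve_metadata_snap

  # per-thinLV mapped/exclusive/shared usage, read from the snapshot
  thin_ls --metadata-snap -o DEV,MAPPED,EXCLUSIVE,SHARED /dev/mapper/vg-pool_tmeta

  # release the metadata snapshot when done
  dmsetup message /dev/mapper/vg-pool-tpool 0 release_metadata_snap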
And one last but not least comment - the 4MB extent usage you pointed out is
a relatively huge chunk - for 'fstrim' to succeed, each such 4MB thin-pool
chunk needs to be fully released.
So, for example, if some 'sparse' filesystem metadata blocks are placed
inside a chunk, they may prevent TRIM from succeeding - so while your
filesystem may have a lot of free space for its data, the actual amount of
physically trimmed space can be much, much smaller.
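You can observe this directly by comparing the filesystem's idea of free
space with what fstrim actually discards (hypothetical mount point /mnt):

  df -h /mnt       # free space as the filesystem sees it
  fstrim -v /mnt   # prints how many bytes were really discarded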
So beware whether the 4MB chunk-size for a thin-pool is a good fit here....
The smaller the chunk, the better the chance of a successful TRIM...
For a heavily fragmented XFS, even 64K chunks might be a challenge....
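Chunk size is fixed when the pool is created, so it has to be chosen up
front - a sketch with a hypothetical VG 'vg':

  # smaller chunks raise the odds that a whole chunk becomes free
  # and can therefore be returned by TRIM
  lvcreate --type thin-pool -L 100G --chunksize 64k -n pool vg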
Regards
Zdenek