On 20/03/2017 10:51, Zdenek Kabelac wrote:
> Please check upstream behavior (git HEAD)
> It will still take a while before the final release, so do not use it
> regularly yet (as a few things may still change).
I will surely try with git HEAD and report back here.

> Not sure which other comment you are looking for.
> Zdenek

I was referring to these two points:
1. You suggested that a 128 MB metadata volume is "quite good" for a
512 GB volume with 128 KB chunks. However, my tests show that a nearly
full data volume (with *no* overprovisioning and no snapshots) will
exhaust its metadata *before* actually becoming 100% full.
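The numbers seem to back this up: if I read the kernel's
Documentation/device-mapper/thin-provisioning.txt correctly, it suggests
sizing the metadata device at roughly 48 bytes per data chunk. My
back-of-envelope arithmetic for this pool (not measured figures):

  512 GiB / 128 KiB = 4,194,304 chunks
  4,194,304 chunks * 48 bytes ~= 192 MiB of metadata

which is already above 128 MB before any snapshot enters the picture.
thin_metadata_size (from device-mapper-persistent-data) should give a
more precise estimate; assuming a single thin volume, something like:

  [root@gdanti-laptop test]# thin_metadata_size -b 128k -s 512g -m 1 -u m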
2. On an MD RAID array with a 64 KB chunk size, things get much worse:
[root@gdanti-laptop test]# lvs -a -o +chunk_size
  LV               VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Chunk
  [lvol0_pmspare]  vg_kvm    ewi------- 128.00m                                                         0
  thinpool         vg_kvm    twi-a-tz-- 500.00g             0.00   1.58                            64.00k
  [thinpool_tdata] vg_kvm    Twi-ao---- 500.00g                                                         0
  [thinpool_tmeta] vg_kvm    ewi-ao---- 128.00m                                                         0
  root             vg_system -wi-ao----  50.00g                                                         0
  swap             vg_system -wi-ao----   3.75g                                                         0
The thin pool chunk size is now 64 KB (apparently inherited from the MD
chunk size) - with the *same* 128 MB metadata volume size. With chunks
half as big, twice as many mappings must be stored, so the metadata can
now only address ~50% of the thin volume space it covered before.
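Again back-of-envelope, using the same ~48 bytes/chunk guideline (my
arithmetic, not measured figures):

  500 GiB / 128 KiB = 4,096,000 chunks -> ~188 MiB of metadata
  500 GiB /  64 KiB = 8,192,000 chunks -> ~375 MiB of metadata

Halving the chunk size doubles the number of mappings, so the same
128 MB tmeta volume can only cover about half the data it could with
128 KB chunks.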
So, am I missing something, or does the RHEL 7.3-provided LVM have a
serious problem picking the correct metadata volume size when running
on top of an MD RAID device?
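In the meantime, if I am reading the man pages right, the sizing can be
pinned by hand at pool creation time (the values below are only
illustrative):

  [root@gdanti-laptop test]# lvcreate -L 500g -c 128k --poolmetadatasize 256m -T vg_kvm/thinpool

and an existing pool's metadata LV can be grown with:

  [root@gdanti-laptop test]# lvextend --poolmetadatasize +256m vg_kvm/thinpool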
Thanks.
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8