Re: Possible bug in thin metadata size with Linux MDRAID

On 20.3.2017 at 11:45, Gionatan Danti wrote:
On 20/03/2017 10:51, Zdenek Kabelac wrote:

Please check upstream behavior (git HEAD).
It will still take a while before the final release, so do not use it
regularly yet (as a few things may still change).
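
If you want to test it, a minimal sketch for building git HEAD and running it
in-tree without replacing the packaged lvm2 (the repository URL and the path
of the built binary are my assumptions - adjust to wherever upstream lives):

  # sketch: build lvm2 from git HEAD and run it without installing
  # (repository URL is an assumption - adjust to the current upstream)
  git clone git://sourceware.org/git/lvm2.git
  cd lvm2
  ./configure && make
  # run the freshly built binary in-tree instead of the distro one
  # (the exact path of the built binary may differ between versions)
  ./tools/lvm lvs -a -o +chunk_size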

I will surely try with git HEAD and report back here.


Not sure which other comment you are looking for.

Zdenek




1. You suggested that a 128 MB metadata volume is "quite good" for a 512 GB
volume with 128 KB chunks. However, my tests show that a nearly full data volume
(with *no* overprovisioning and no snapshots) will exhaust its metadata *before*
actually becoming 100% full (a minimal reproduction sketch is at the end of this
message).

2. On an MD RAID with a 64 KB chunk size, things get much worse:
[root@gdanti-laptop test]# lvs -a -o +chunk_size
  LV               VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Chunk
  [lvol0_pmspare]  vg_kvm    ewi------- 128.00m                                                        0
  thinpool         vg_kvm    twi-a-tz-- 500.00g             0.00   1.58                                64.00k
  [thinpool_tdata] vg_kvm    Twi-ao---- 500.00g                                                        0
  [thinpool_tmeta] vg_kvm    ewi-ao---- 128.00m                                                        0
  root             vg_system -wi-ao----  50.00g                                                        0
  swap             vg_system -wi-ao----   3.75g                                                        0

The thin pool chunk size is now 64 KB - with the *same* 128 MB metadata
volume size. With twice as many chunks to map, the metadata can now only
address ~50% of the thin volume space.
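
A rough back-of-the-envelope (assuming ~16 bytes per mapping in the btree
leaves and roughly 2x overhead for internal nodes and space maps - my own
estimate, not an official formula):

  chunks to map:  500 GiB / 64 KiB   = 8,192,000
  leaf entries:   8,192,000 * 16 B  ~= 125 MiB
  with overhead:  ~2x               ~= 250 MiB needed
  available:      128 MiB           -> only about half the pool addressable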

So, am I missing something, or does the RHEL 7.3-provided LVM have serious
problems identifying the correct metadata volume size when running on top of
an MD RAID device?
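
For reference, a minimal sketch of how I reproduce the behaviour from point 1
(VG/LV names are illustrative):

  # 512G thin pool, 128K chunks, fixed 128M metadata LV
  lvcreate --type thin-pool -L 512G --chunksize 128k \
           --poolmetadatasize 128m -n thinpool vg_kvm
  # one fully-sized thin volume: no overprovisioning, no snapshots
  lvcreate --thin -V 512G -n thinvol vg_kvm/thinpool
  # fill it and watch Data% vs Meta%
  dd if=/dev/zero of=/dev/vg_kvm/thinvol bs=1M oflag=direct
  lvs -a -o lv_name,lv_size,data_percent,metadata_percent vg_kvm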


As said - please try with git HEAD - and report back if you still see a problem.
A couple of issues were fixed along this path.

In my test it seems 500G needs at least 258M with a 64K chunk size.
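
This can be cross-checked with the thin_metadata_size tool from
thin-provisioning-tools, which estimates the metadata area needed for a
given pool size, chunk size and number of thin devices:

  # estimate metadata for a 500G pool with 64K chunks and one thin LV
  thin_metadata_size --block-size=64k --pool-size=500g --max-thins=1 --unit=m

The estimate it prints should land in the same ballpark as the 258M above.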

On the other hand - AFAIK it has never been documented that a thin pool without monitoring is guaranteed to fit a single LV. It is basically expected that users know what they are doing when they use thin provisioning - but of course we continuously try to make things more usable.
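
For the monitoring side, the usual safety net is dmeventd-driven
autoextension configured in lvm.conf - a minimal sketch, with illustrative
threshold values:

  # /etc/lvm/lvm.conf - let dmeventd grow the thin pool before it fills
  activation {
      monitoring = 1
      thin_pool_autoextend_threshold = 80   # extend when 80% full
      thin_pool_autoextend_percent = 20     # grow by 20% each time
  }

Whether a pool is being monitored can be checked with 'lvs -o+seg_monitor'.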

Zdenek

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


