Why does thinpool take 2*poolmetadatasize space?

When I create an LVM thin pool with size S and poolmetadatasize P,
it reduces the available free space by S+2P. I expected the reduction
to be S+P. Where did the extra poolmetadatasize get used?

See the example below.
Before lvcreate we had 255868 MiB free; after, 254588 MiB.
The difference is 1280 MiB (1024 + 2*128).
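The accounting can be checked directly. As a sketch (not part of the original report): the hidden [lvol0_pmspare] volume in the lvs listing further down is lvm2's spare metadata LV, sized to match the pool's metadata LV and kept for lvconvert --repair, which is where the second P goes.

```shell
# Values taken from the transcript below (all in MiB)
S=1024            # --size (thin pool data LV)
P=128             # --poolmetadatasize (tmeta LV)
free_before=255868
free_after=254588

# Space actually consumed by the lvcreate
used=$((free_before - free_after))
echo "used: $used MiB"                 # 1280

# Data LV + metadata LV + the hidden [lvol0_pmspare] spare
# metadata LV of the same size P
expected=$((S + 2 * P))
echo "expected: $expected MiB"         # 1280

[ "$used" -eq "$expected" ] && echo "accounting matches"
```

If the spare is not wanted, lvcreate accepts --poolmetadataspare n, at the cost of lvconvert --repair needing to find free space in the VG later.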

# lvm vgcreate --metadatasize=128m myvg0 /dev/vda
  Physical volume "/dev/vda" successfully created.
  Volume group "myvg0" successfully created

# pvs --unit=m
  PV         VG    Fmt  Attr PSize      PFree
  /dev/vda   myvg0 lvm2 a--  255868.00m 255868.00m

# vgs --unit=m
  VG    #PV #LV #SN Attr   VSize      VFree
  myvg0   1   0   0 wz--n- 255868.00m 255868.00m

# lvm lvcreate --ignoremonitoring --yes --activate=y \
   --setactivationskip=n --size=1024m --poolmetadatasize=128m \
   --thinpool=mythinpool myvg0
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  Logical volume "mythinpool" created.

# vgs --unit=m
  VG    #PV #LV #SN Attr   VSize      VFree
  myvg0   1   1   0 wz--n- 255868.00m 254588.00m

# lvs --all --unit=m
  LV                 VG    Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  [lvol0_pmspare]    myvg0 ewi-------  128.00m
  mythinpool         myvg0 twi-a-tz-- 1024.00m             0.00   10.03
  [mythinpool_tdata] myvg0 Twi-ao---- 1024.00m
  [mythinpool_tmeta] myvg0 ewi-ao----  128.00m

_______________________________________________
linux-lvm mailing list
linux-lvm@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/