Re: strange usage stats for thin LV


 



On 31.10.2012 01:04, Andres Toomsalu wrote:
Hi,

I'm a bit puzzled by some thin LV usage stats - hope that someone can shed some light on this.
lvs shows that the thin_backup LV is 94% used - but df shows only 16% - where does the difference come from?

lvs -a -o+metadata_percent
   LV                       VG         Attr     LSize   Pool Origin       Data%  Move Log Copy%  Convert Meta%
   pool                     VolGroupL0 twi-a-tz   1,95t                    35,17                           2,79
   [pool_tdata]             VolGroupL0 Twi-aot-   1,95t
   [pool_tmeta]             VolGroupL0 ewi-aot-  14,00g
   root                     VolGroupL0 -wi-ao--  10,00g
   swap                     VolGroupL0 -wi-ao--  16,00g
   thin_backup              VolGroupL0 Vwi-aotz 600,00g pool               94,51
   thin_storage             VolGroupL0 Vwi-aotz 600,00g pool               20,98


df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroupL0-root
                       9,9G  1,3G  8,1G  14% /
tmpfs                  16G     0   16G   0% /dev/shm
/dev/sda1            1008M  122M  835M  13% /boot
/dev/mapper/VolGroupL0-thin_storage
                       591G   39G  523G   7% /storage
/dev/mapper/VolGroupL0-thin_backup
                       591G   90G  472G  16% /backup

Thanks in advance,



As Stuart posted, the values are not closely related.
But there are a few things which are visible:

~35% tells you the amount of used space in the pool - around ~700GB
~3% is taken by the metadata - ~400MB

thin_backup has ~95% provisioned   ->  ~570GB
thin_storage has ~21% provisioned  ->  ~130GB

which approximately matches the number of used blocks in the pool
(~570 + ~130 = ~700)
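
For reference, here is a rough way to re-check those numbers from the lvs
report above - a sketch only, assuming the data_percent/metadata_percent
report fields are available in your lvm2 version:

lvs -a -o lv_name,lv_size,data_percent,metadata_percent VolGroupL0

# Back-of-the-envelope arithmetic with the values shown above:
#   1.95 TiB * 0.3517 ~= 700 GiB   (data used in the pool)
#   600 GiB  * 0.9451 ~= 567 GiB   (blocks provisioned for thin_backup)
#   600 GiB  * 0.2098 ~= 126 GiB   (blocks provisioned for thin_storage)
#   567 GiB + 126 GiB ~= 693 GiB, which roughly matches the pool usage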

===

Now to interpret your 'df' stats:

thin_storage uses 39GB, stored in ~130GB of provisioned space
thin_backup  uses 90GB, stored in ~570GB of provisioned space

and there could be multiple reasons for this (see the example commands after this list):

- use of a large chunk size - the filesystem spreads a lot of data throughout
the device, either for its internal maintenance, or because a lot of files are
located across the whole provisioned space.
- you have deleted lots of files and have not used discard for the deleted areas
(e.g. for ext4 there is the 'fstrim' command, which will discard them)
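
As a sketch of how to check both of those (this assumes ext4 on the thin LVs,
a util-linux that ships 'fstrim', and that your lvm2 knows the chunksize
report field - older versions may not):

lvs -o+chunksize VolGroupL0/pool     # thin pool chunk size
fstrim -v /backup                    # discard unused ext4 blocks so the pool
fstrim -v /storage                   #  can reclaim them; -v reports the amount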


So here you need to provide more information: which filesystem is in use,
and what the overall usage of your devices has been. Also, are you using
discard support or not? What kernel version is in use? (A few commands that
collect this information are sketched below.)
(It's always worth using the latest version of lvm2, since its discard
support configurability has been improved.)
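
A few commands that would collect this information - again just a sketch, and
the 'discards' report field is only present in newer lvm2 releases:

uname -r                            # kernel version
lvm version                         # lvm2 / device-mapper versions
mount | grep -E 'storage|backup'    # filesystem type and mount options
lvs -o+discards VolGroupL0/pool     # discards mode of the thin pool
                                    #  (ignore / nopassdown / passdown)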

Zdenek




