Single lvmcache device for multiple LVs

I have multiple LVs in a VG:

  LV               VG       Attr       LSize    Pool     Origin  Data%  Meta%  Move Log Cpy%Sync Convert
  cyrus_spool      centos   Vwi-aotz--   29.80g pool00           95.84                                  
  cyrus_spool.old  centos   Vwi-a-tz--   22.18g pool00           98.91                                  
  data             centos   Vwi-aotz--   25.00g pool00           77.88                                  
  debuginfo        centos   -wi-ao----    4.73g                                                         
  home             centos   Vwi-aotz--   10.00g pool00           86.77                                  
  home.old         centos   Vwi-a-tz--   10.00g pool00           99.92                                  
  http_cache       centos   Vwi-aotz--   50.00g pool00           72.10                                  
  mp3              centos   Vwi-aotz--  150.00g pool00           84.37                                  
  nextcloud_data   centos   Vwi-aotz--  250.00g pool00           71.25                                  
  photos           centos   -wi-ao----   64.49g                                                         
  pkg-cache        centos   Vwi-aotz--   47.68g pool00           81.41                                  
  pool00           centos   twi-aotz--    1.26t                  75.81  35.35                           
  root             centos   Vwi-aotz--   28.81g pool00           16.68                                  
  snaptest         centos   Vwi---tz-k    1.46g pool00   root                                           
  source           centos   Vwi-aotz--   92.00g pool00           92.31                                  
  swap             centos   Vwi-aotz--    8.00g pool00           100.00                                 
  swap2            centos   -wi-a-----   16.00g                                                         
  test             centos   -wi-a-----  100.00m                                                         
  usr              centos   Vwi-aotz--    9.25g pool00           77.97                                  
  var              centos   Vwi-aotz--  265.45g pool00           92.25                                  
  windows10        centos   -wi-a-----    6.00g

that I want to cache with a faster 120GB SSD.  Not all of those LVs are
actively used, and some are rarely used, but I would expect any decent
caching algorithm to work out the most-used LVs and their hotspots on
its own, so cherry-picking which LVs to cache and which not to
shouldn't even be an issue.

But as I understand it, I need a separate caching LV (for cachevol) or
two (for cachepool) for each LV that I want to cache.
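That is, for each LV something like the following (a rough sketch of
the cachevol method as I read lvmcache(7); /dev/sdb and the size are
just placeholders):

  # carve out part of the SSD as a cachevol for one LV
  lvcreate -n photos_cache -L 10g centos /dev/sdb
  # attach it as a cache to that one LV only
  lvconvert --type cache --cachevol photos_cache centos/photos

repeated, with a separately guessed size, for every LV I want cached.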

This seems rather sub-optimal: I would have to guess what portion of
the 120GB SSD to dedicate to each LV I want to cache, and also guess
which LVs would even be worth caching in the first place.

Is there no better way to do this than 1:1 cache{pool|vol} per LV?

I suppose I could take a WAG at the initial portioning, evaluate the
stats after a while, decide (manually) whether some re-balancing of
the portions would be beneficial, and then keep doing that
periodically, but that seems rather tedious and manual.  Are there any
solutions to automate this, as a workaround to the 1:1 LV requirement?
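By "evaluate the stats" I mean something like the following (the
report fields come from lvs -o help; the detach/grow/reattach dance is
my assumption of how re-portioning would have to work, since I don't
believe an attached cachevol can be resized in place):

  # inspect hit/miss counters for a cached LV
  lvs -o+cache_read_hits,cache_read_misses,cache_write_hits,cache_write_misses centos/photos
  # to re-size: detach the cache (flushes it, keeps both LVs) ...
  lvconvert --splitcache centos/photos
  # ... grow the now-standalone cachevol, then re-attach it
  lvextend -L +5g centos/photos_cache
  lvconvert --type cache --cachevol photos_cache centos/photos

Doing that across a dozen LVs every few weeks is exactly the tedium
I'd like to avoid.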

Alternatively, if it must be 1:1, would attaching the cache to the
thin pool device, pool00, achieve what I am trying to do?  Is that
even possible?
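I.e. something along these lines (a sketch of what I'm imagining,
assuming /dev/sdb is the SSD; whether lvconvert accepts a thin pool as
the cache target is exactly my question):

  # one big cachevol spanning most of the SSD
  lvcreate -n pool00_cache -L 110g centos /dev/sdb
  # attach it to the thin pool itself, caching its data sub-LV
  lvconvert --type cache --cachevol pool00_cache centos/pool00

If that works, the one cache would cover all the thin LVs at once,
leaving only the handful of non-thin LVs (photos, debuginfo, swap2,
...) uncached.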

Is lvmcache maybe just not the right solution in this case?

Cheers,
b.