On 17. 07. 24 at 13:42, Pickles Creator wrote:
Hi,
I'm currently evaluating lvmcache with a local disk as a cache for a
slower network-attached disk; both are SSDs. The default value for
`cache_pool_max_chunks` is 1,000,000, and I'm wondering about the
reasoning behind it. Is there degradation if the value is increased,
or has a higher value simply not been tested?
With the current limit, a larger cache becomes fairly inefficient for
small random reads, because a large chunk size has to be allocated,
and warming up such a cache would take a long time.
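To make the trade-off concrete, here is a rough sketch (my own
arithmetic, not taken from the lvm documentation) of the smallest
chunk size a given cache size forces under the 1,000,000-chunk cap,
assuming dm-cache's 32 KiB minimum chunk size:

```python
def min_chunk_size_kib(cache_size_gib, max_chunks=1_000_000, min_kib=32):
    """Smallest chunk size (KiB) that fits cache_size_gib within
    max_chunks, rounded up to a multiple of the assumed 32 KiB
    dm-cache minimum chunk size."""
    total_kib = cache_size_gib * 1024 * 1024   # cache size in KiB
    per_chunk = -(-total_kib // max_chunks)    # ceiling division
    # round up to the next multiple of the minimum chunk size
    return -(-per_chunk // min_kib) * min_kib

# A 1 TiB cache under the default cap needs roughly 1 MiB chunks,
# so a 4 KiB random read promotes far more data than it touches.
print(min_chunk_size_kib(1024))  # -> 1088
print(min_chunk_size_kib(32))    # -> 64
```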
Hi,
The basic idea is that users should not really create caches that
large, since manipulating the metadata of the cached volume can become
less efficient.
That said, if you have a larger machine with a lot of RAM, you can
increase this maximum, e.g. to 8 times the default.
Another option is to use bigger chunks (e.g. 256K), which also reduces
the burden of metadata manipulation.
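For example, both knobs live in the allocation section of lvm.conf (a
sketch only; verify the section layout and defaults against the
lvm.conf shipped by your distribution):

```
# /etc/lvm/lvm.conf (excerpt)
allocation {
    # Default chunk size for new cache pools, in KiB.
    cache_pool_chunk_size = 256

    # Raise the chunk-count cap, e.g. 8x the 1,000,000 default.
    cache_pool_max_chunks = 8000000
}
```

A chunk size can also be set per pool at creation time with
`lvconvert`/`lvcreate --chunksize`, so a one-off cache does not
require editing lvm.conf.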
Another thing to note: dm-cache is a hot-spot cache. If you really
have 1,000,000 hot spots on your drive, you may want to consider
switching the whole drive to NVMe instead of caching it.
With all that said: if you have a lot of RAM, raising
cache_pool_max_chunks is the easy way forward, but it is most likely
not the best use of the caching resource (i.e. chunks sitting in the
cache, occupying space for days or months without any use...).
Regards
Zdenek