Re: lvmcache not promoting blocks when there's free RAM

On 19. 12. 23 at 23:28, Wolf480pl wrote:
> Hello,
>
> I have an x86-64 PC acting as a NAS; it has 2x 3TB HDDs in md(4) RAID1,
> with LVM on top, and LVs for various types of data. It also has
> an NVMe SSD with the rootfs, and 16GB of RAM. I spin down the HDDs to
> minimize idle power draw, and to make them spin up less often I tried
> to use lvmcache for one of the LVs, with the cache volume on the SSD.
>
> However, barely anything gets cached by lvmcache. Blocks aren't
> getting promoted (based on dmsetup status), despite the cache volume
> being mostly empty.

Hi


The dm-cache target currently focuses on identifying 'hotspot' areas of your disk reads.

So it does not cache data that is currently 'present' in the page cache and satisfied from there.

So normally, for a disk area (a cache chunk) to be promoted to the cache, it needs to be repeatedly and physically read from your origin device.

Take a 'naive' example: if you read file A, the file ends up held by the page cache; as long as you just read its data while the page cache still keeps it in RAM, the access counter at the block level will never rise.

There have been some requests to accelerate the promotion of blocks into an empty cache, and some of them have gone upstream - but overall, dm-cache is not designed as a page-cache-like layer on top of your HDD. It really focuses on getting frequently accessed blocks in there, and it takes some time to populate the cache with 'worthy' content.
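
For illustration - a minimal userspace sketch (the device path below is just a placeholder): a read opened with O_DIRECT bypasses the page cache, so every access physically reaches the block layer, where dm-cache can count it.

/* direct_read.c - re-read data with O_DIRECT so every access bypasses
 * the page cache and physically reaches the block layer, where dm-cache
 * can count it.  Build: gcc -O2 -o direct_read direct_read.c
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define BUF_SIZE (1 << 20)              /* 1 MiB per read */

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "/dev/vg0/lv_data"; /* placeholder */
    void *buf;

    /* O_DIRECT needs a buffer aligned to the logical block size;
     * 4096 bytes is safe for typical devices. */
    if (posix_memalign(&buf, 4096, BUF_SIZE) != 0) {
        perror("posix_memalign");
        return 1;
    }

    int fd = open(path, O_RDONLY | O_DIRECT);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Unlike a buffered read, which the page cache satisfies from RAM
     * without the block device ever seeing it, each of these reads hits
     * the origin device and can raise dm-cache's hotspot counters. */
    for (int i = 0; i < 64; i++) {      /* first 64 MiB only */
        ssize_t n = read(fd, buf, BUF_SIZE);
        if (n <= 0) {
            if (n < 0)
                perror("read");
            break;
        }
    }

    close(fd);
    free(buf);
    return 0;
}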


> When a file is read for the first time, it gets cached in RAM by the
> page cache, but not on the SSD by lvmcache. Subsequent reads never hit
> the block layer, because the data is already in RAM, until I need to
> reboot the machine. After a reboot, when something tries to read that
> file again, the HDDs have to be spun up, because the data never got
> promoted to the SSD by lvmcache.
>
> I looked into dm-cache's smq policy code[1], and it looks like, for a
> block to be promoted to the cache volume, it needs to be read at least
> twice:
>
> - first read: the block gets added to the bottom of the hotspot queue
> - some time passes; a queue tick triggers a redistribution, and some
>   blocks get moved to the top of the queue
> - second read: the block is found at the top of the queue and gets
>   promoted
>
> Unfortunately, in my case the second read never comes.

As said, dm-cache is not designed to solve this problem.
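
The two-read behaviour you traced can be modelled very roughly like this - a toy sketch only, not the actual policy code in drivers/md/dm-cache-policy-smq.c, which tracks many blocks across multiple queue levels:

/* Toy model of the two-read promotion behaviour described above.
 * NOT the kernel smq code - it only illustrates why a single
 * physical read never promotes a block. */
#include <stdio.h>

enum level { NOT_SEEN, QUEUE_BOTTOM, QUEUE_TOP, PROMOTED };

/* Periodic tick: the queue is redistributed and a block already
 * in it drifts towards the top. */
static void tick(enum level *lvl)
{
    if (*lvl == QUEUE_BOTTOM)
        *lvl = QUEUE_TOP;
}

/* A physical read of the block - page-cache hits never get this far. */
static void block_read(enum level *lvl)
{
    if (*lvl == NOT_SEEN)
        *lvl = QUEUE_BOTTOM;    /* first read: enqueue at the bottom   */
    else if (*lvl == QUEUE_TOP)
        *lvl = PROMOTED;        /* repeated read near the top: promote */
}

int main(void)
{
    enum level lvl = NOT_SEEN;

    block_read(&lvl);   /* first read */
    tick(&lvl);         /* time passes, queue redistributes */
    block_read(&lvl);   /* second read -> promotion; if the page cache
                         * serves it from RAM instead, this call never
                         * happens and the block stays unpromoted */

    printf("promoted: %s\n", lvl == PROMOTED ? "yes" : "no");
    return 0;
}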

> I have a few ideas on how this could be fixed in the kernel, but
> I don't know if such patches would be welcome, or where to reach
> the device-mapper developers to ask them about it.

Such a fix would be more like a new target.

It might be worth checking dm-writecache, which solves the opposite problem: how to efficiently write back dirty page cache.

Regards

Zdenek




