On Wed, Mar 1, 2023 at 4:50 PM Roy Sigurd Karlsbakk <roy@xxxxxxxxxxxxx> wrote:
>
> Hi all
>
> Working with a friend's machine: it has lvmcache turned on with writeback. This has worked well, but now it's uncaching and it takes *hours*. The amount of cache was chosen to be 100GB on an SSD not used for much else, and the dataset being cached is a RAID-6 set of 10x2TB with XFS on top. The system mainly does file serving, but also hosts some VMs that benefit from the caching quite a bit. But then, I wonder, how can it spend hours emptying the cache like this? Most write caching I know of lasts only seconds, or perhaps minutes in a really worst-case scenario. Since this is taking hours, it looks to me like something should have been flushed ages ago.
>
> Have I (or we) done something very stupid here, or is this really how it's supposed to work?
>
> Vennlig hilsen
>
> roy

A spinning RAID-6 array is slow on writes (see the RAID-6 write penalty): each small random write turns into extra reads plus parity writes, so an array like this can only do about 100 write operations/sec. If the disks are doing other work at the same time, destaging only gets the spare capacity, so it runs even slower.

A lot depends on how big each chunk is. The lvmcache documentation gives 32k as the smallest chunk size. 100G / 32k is about 3 million chunks, and at 100 seeks/sec that alone is on the order of 30,000 seconds, i.e. more than eight hours if every chunk is dirty; even a modest dirty fraction means an hour or more. LVM bookkeeping has to be written to the spinning disks as well, I would think, which roughly doubles the time. Throw in a 50% base load on the disks and it doubles again. Hours is reasonable.
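
If you want to play with the numbers, here is that back-of-the-envelope estimate as a small Python sketch. Every input is an assumption carried over from above (100 write ops/sec, 32k chunks, a factor of two for LVM bookkeeping, a 50% base load), not something measured on the actual array:

#!/usr/bin/env python3
# Rough lvmcache destage-time estimate; all inputs are assumptions
# from the discussion above, not measurements.

cache_size = 100 * 1024**3   # 100 GiB of cache, in bytes
chunk_size = 32 * 1024       # 32 KiB, the smallest lvmcache chunk size
write_iops = 100             # random write ops/sec for a spinning RAID-6
dirty_frac = 1.0             # worst case: every cached chunk is dirty

chunks = int(cache_size / chunk_size * dirty_frac)   # chunks to destage
seconds = chunks / write_iops                        # one write op per chunk

print(f"chunks to destage : {chunks:,}")
print(f"idle array        : {seconds / 3600:.1f} h")
print(f"with bookkeeping  : {seconds * 2 / 3600:.1f} h")  # metadata ~doubles the I/O
print(f"plus 50% baseload : {seconds * 4 / 3600:.1f} h")  # only half the IOPS spare

With dirty_frac = 1.0 this lands well past "hours", but scale it down to whatever fraction of the cache is actually dirty and you end up right in the multi-hour range being reported.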