Re: lvm2 deadlock

On 04. 06. 24 at 13:52, Jaco Kroon wrote:
Hi,

On 2024/06/04 12:48, Roger Heflin wrote:

Use the *_bytes values.  If they are non-zero then they are used, and
that allows setting the thresholds even below 1% of RAM (which is quite
large on anything with a lot of RAM).

I have been using this for quite a while:
vm.dirty_background_bytes = 3000000
vm.dirty_bytes = 5000000
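
[For reference, a minimal sketch of how such values could be applied at runtime and persisted; the sysctl.d file name below is just an example:

    # apply immediately
    sysctl -w vm.dirty_background_bytes=3000000
    sysctl -w vm.dirty_bytes=5000000

    # persist across reboots
    cat > /etc/sysctl.d/90-dirty-bytes.conf <<'EOF'
    vm.dirty_background_bytes = 3000000
    vm.dirty_bytes = 5000000
    EOF

Note that writing the *_bytes variants makes the corresponding *_ratio settings read as 0, and vice versa.]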


What I am noticing immediately is that the "free" value as per "free -m" is definitely much higher, which to me indicates that we're not caching as aggressively as we could.  Will monitor this for the time being:

crowsnest [13:50:09] ~ # free -m
               total        used        free      shared  buff/cache   available
Mem:          257661        6911      105313           7      145436      248246
Swap:              0           0           0

The Total DISK WRITE and Current DISK Write values in iotop seem to correlate more tightly now (no longer seeing a constant Total DISK WRITE with spikes in Current; it looks more even now).

Hi

So now, while we are sorting out various system settings, there are a few more things to think through.

A large 'range' of unwritten data may put it at risk in case of a 'power' failure.
On the other hand, a large amount of 'dirty pages' allows the system to 'optimize' and even skip storing them on disk if they are frequently rewritten - so in this case a 'lower' dirty ratio may cause a significant performance impact - so please check what the typical workload is and what the result looks like...
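
[A quick, rough way to watch this for a given workload - just a sketch, the interval is arbitrary:

    # observe how much dirty data accumulates and how fast it is written back
    watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'

    # show which dirty limits are currently in effect
    sysctl vm.dirty_bytes vm.dirty_background_bytes vm.dirty_ratio vm.dirty_background_ratio
]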

It's worth mentioning that lvm2 supports the writecache target, which can kind of offload dirty pages to fast storage...
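
[For illustration only - a rough sketch of attaching a writecache; the device, VG and LV names are made up, see lvmcache(7) for details:

    # create a cache volume on the fast device
    lvcreate -n fast -L 10G vg /dev/fast_ssd

    # attach it as a writecache in front of the slow LV
    lvconvert --type writecache --cachevol fast vg/slowlv

    # later: detach and flush dirty blocks back to the origin
    lvconvert --splitcache vg/slowlv
]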

Last but not least - disk scheduling policies also have an impact - they can e.g. ensure better fairness, at the price of lower throughput...
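
[For example (sdX is a placeholder; the available schedulers depend on the kernel configuration):

    # show the current scheduler - the one in brackets is active
    cat /sys/block/sdX/queue/scheduler

    # switch to bfq for better fairness, usually at some cost in throughput
    echo bfq > /sys/block/sdX/queue/scheduler
]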

So now let's get back to the 'possible' lvm2 deadlock - which I'm still not fully certain we have deciphered in this thread yet.

So if you happen to 'spot' stuck commands - do you notice anything strange in the systemd journal? Usually when systemd decides to kill a udevd worker task, it is briefly noted in the journal. With this check we would know whether the cause of your problems was a killed worker that was unable to 'finalize' the lvm command, which then keeps waiting for confirmation from udev (currently without any timeout limit).
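
[Something along these lines could be used to look for such records; the timestamps are placeholders for the time a command got stuck:

    # look for killed or timed-out udev workers around the incident
    journalctl -u systemd-udevd --since "2024-06-04 13:00" --until "2024-06-04 14:00" | grep -iE 'kill|timeout'
]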

To unstick such a command, 'udevcomplete_all' is the cure - but as said, the system is already kind of 'damaged' at that point, since udev is failing and holds 'invalid' information about devices...
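
[That is, via dmsetup, run as root:

    # complete all outstanding udev cookies so that waiting lvm commands can continue
    dmsetup udevcomplete_all
]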

So maybe you could check whether your journal around the date & time of the problem has some 'interesting' 'killing action' record?

Regards

Zdenek



