On Tue 21-07-20 15:17:49, Chris Down wrote:
> I understand the pragmatic considerations here, but I'm quite concerned
> about the maintainability and long-term ability to reason about a patch
> like this. For example, how do we know when this patch is safe to remove?
> Also, what other precedent does this set for us covering for poor
> userspace behaviour?
>
> Speaking as a systemd maintainer, if udev could be doing something better
> on these machines, we'd be more than receptive to help fix it. In general
> I am against explicit watchdog tweaking here because a.) there's potential
> to mask other problems, and b.) it seems like the kind of one-off trivia
> nobody is going to remember exists when doing complex debugging in future.
>
> Is there anything preventing this being remedied in udev, instead of the
> kernel?

Yes, I believe there is a configuration option to cap the maximum number
of workers. This is not my area, but my understanding is that the maximum
is tuned based on the available memory and/or CPUs (the knobs are sketched
below). We have been hit by this quite heavily on SLES. Maybe newer
versions of systemd have better tuning.

But it seems that udev is just a messenger here. There is nothing really
fundamentally udev-specific in the underlying problem, unless I am missing
something. It is quite possible that this could be triggered by other
userspace that happens to fire many workers at the same time, all
contending on a shared page.

Not that I like this workaround in the first place, but the existing code
allows very long wait chains, and !PREEMPT kernels potentially have no
scheduling point for a long time (see the sketch further below). I believe
we should focus on that, even if systemd, as the current trigger, can be
tuned better.

I do not insist on this patch, hence the RFC, but I am simply not seeing
a much better yet less convoluted solution.
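
To illustrate the tuning side: assuming a reasonably recent systemd (the
exact spellings below may differ between versions), the worker cap can be
set in several places, e.g.:

	# /etc/udev/udev.conf
	children_max=8

	# or on the kernel command line
	udev.children_max=8

	# or when invoking the daemon directly
	/usr/lib/systemd/systemd-udevd --children-max=8

When the cap is left unset, udevd computes a default from the CPU count
and, in newer versions, the amount of memory, which is how a large machine
ends up with a lot of concurrent workers.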
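
And to illustrate the kernel side, a minimal sketch (a hypothetical helper,
not the actual mm/filemap.c wait path) of what the missing scheduling point
means on !PREEMPT:

	#include <linux/pagemap.h>	/* trylock_page(), unlock_page() */
	#include <linux/sched.h>	/* cond_resched() */

	/*
	 * Illustrative only: many workers contending on the same page.
	 * Without the cond_resched(), a !PREEMPT kernel has no voluntary
	 * scheduling point in this loop, so a long enough chain of
	 * waiters can keep a CPU busy until the watchdog fires. The
	 * explicit scheduling point is effectively what the workaround
	 * buys us.
	 */
	static void grab_contended_page(struct page *page)
	{
		while (!trylock_page(page))
			cond_resched();	/* voluntary scheduling point */
		/* ... do the work ... */
		unlock_page(page);
	}

--
Michal Hocko
SUSE Labs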