On Mon, Mar 7, 2022 at 9:24 AM Suren Baghdasaryan <surenb@xxxxxxxxxx> wrote:
>
> On Mon, Mar 7, 2022 at 9:04 AM 'Michal Hocko' via kernel-team
> <kernel-team@xxxxxxxxxxx> wrote:
> >
> > On Thu 24-02-22 17:28:19, Suren Baghdasaryan wrote:
> > > Sending as an RFC to confirm whether this is the right direction and
> > > to clarify whether other tasks currently executed on mm_percpu_wq
> > > should also be moved to kthreads. The patch seems stable in testing
> > > but I want to collect more performance data before submitting a
> > > non-RFC version.
> > >
> > > Currently drain_all_pages uses mm_percpu_wq to drain pages from pcp
> > > lists during direct reclaim. Work items on a workqueue can be delayed
> > > by other items in workqueues sharing the same per-cpu worker pool.
> > > This results in sizable delays in drain_all_pages when cpus are
> > > highly contended.
> >
> > This is not about cpus being highly contended. It is about too much
> > work on the WQ context.
>
> Ack.
>
> > > Memory management operations designed to relieve memory pressure
> > > should not be blocked by other tasks, especially if the task in
> > > direct reclaim has higher priority than the blocking tasks.
> >
> > Agreed here.
> >
> > > Replace the usage of mm_percpu_wq with per-cpu low priority FIFO
> > > kthreads to execute draining tasks.
> >
> > This looks like a natural thing to do when WQ context is not suitable,
> > but I am not sure the additional resources are really justified. Large
> > machines with a lot of cpus would create a lot of kernel threads. Can
> > we do better than that?
> >
> > Would it be possible to have fewer workers (e.g. one, or one per NUMA
> > node) that perform the work on a dedicated cpu by changing their
> > affinity? Or would that introduce an unacceptable overhead?
>
> Not sure, but I can try implementing per-node kthreads and measure the
> performance of the reclaim path, comparing it with the current and the
> per-cpu approaches.

Just to update on this RFC. In my testing I don't see a meaningful
improvement from using the kthreads yet. This might be due to my test
setup, so I'll keep exploring. I will post the next version only if I
get demonstrable improvements.
Thanks!

> >
> > Or would it be possible to update the existing WQ code to use the
> > rescuer well before the WQ is completely clogged?
> > --
> > Michal Hocko
> > SUSE Labs
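
For readers following the thread, here is a simplified sketch of the
existing scheme under discussion. It is not verbatim kernel code: the
real __drain_all_pages() only targets CPUs that actually hold pages on
their pcp lists, and mm_percpu_wq is declared in mm/internal.h, so this
assumes mm-internal context and a ~v5.17-era API.

#include <linux/cpumask.h>
#include <linux/gfp.h>
#include <linux/percpu.h>
#include <linux/preempt.h>
#include <linux/workqueue.h>

/* One drain work item per CPU; names are illustrative. */
static DEFINE_PER_CPU(struct work_struct, pcpu_drain_sketch);

static void drain_local_pages_fn(struct work_struct *work)
{
	preempt_disable();		/* keep the drain on this CPU */
	drain_local_pages(NULL);	/* NULL: drain pcp lists of all zones */
	preempt_enable();
}

static void drain_all_pages_sketch(void)
{
	int cpu;

	for_each_online_cpu(cpu) {
		struct work_struct *work = per_cpu_ptr(&pcpu_drain_sketch, cpu);

		INIT_WORK(work, drain_local_pages_fn);
		/*
		 * mm_percpu_wq is a bound workqueue, so its items share the
		 * per-cpu worker pools with every other bound workqueue and
		 * can sit behind unrelated work when those pools are busy --
		 * the delay Suren is measuring.
		 */
		queue_work_on(cpu, mm_percpu_wq, work);
	}
	for_each_online_cpu(cpu)
		flush_work(per_cpu_ptr(&pcpu_drain_sketch, cpu));
}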
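
A minimal sketch of the direction the RFC describes, using the
kthread_worker API (this is not the actual patch; names, the use of
kthread_create_worker_on_cpu(), and the init flow are illustrative, and
CPU hotplug handling plus error unwinding are omitted):

#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/percpu.h>
#include <linux/sched.h>
#include <linux/slab.h>

static struct kthread_worker **pcp_drain_workers;
static DEFINE_PER_CPU(struct kthread_work, pcp_drain_work);

static void pcp_drain_fn(struct kthread_work *work)
{
	drain_local_pages(NULL);	/* worker is bound to its CPU */
}

static int __init pcp_drain_threads_init(void)
{
	int cpu;

	pcp_drain_workers = kcalloc(nr_cpu_ids, sizeof(*pcp_drain_workers),
				    GFP_KERNEL);
	if (!pcp_drain_workers)
		return -ENOMEM;

	for_each_online_cpu(cpu) {
		struct kthread_worker *w;

		w = kthread_create_worker_on_cpu(cpu, 0, "pg_drain/%u", cpu);
		if (IS_ERR(w))
			return PTR_ERR(w);
		/* lowest RT priority: above CFS tasks, below other RT work */
		sched_set_fifo_low(w->task);
		kthread_init_work(per_cpu_ptr(&pcp_drain_work, cpu),
				  pcp_drain_fn);
		pcp_drain_workers[cpu] = w;
	}
	return 0;
}

/* drain_all_pages() would then kick the kthread instead of the WQ: */
static void queue_drain_on_cpu(int cpu)
{
	kthread_queue_work(pcp_drain_workers[cpu],
			   per_cpu_ptr(&pcp_drain_work, cpu));
}

This removes the shared worker-pool dependency, but it is also exactly
the cost Michal objects to: one mostly idle kernel thread per CPU on
large machines.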
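
For the per-node alternative Michal suggests, the drain walk might look
like the hypothetical helper below (never implemented in this form as
far as the thread shows; the worker's wakeup and completion plumbing is
elided, and only the affinity-hopping drain itself is sketched):

#include <linux/cpumask.h>
#include <linux/sched.h>
#include <linux/topology.h>

/* Runs in a per-node kthread; nid is the node the worker serves. */
static void drain_node_pcp_lists(int nid)
{
	int cpu;

	for_each_cpu(cpu, cpumask_of_node(nid)) {
		/* migrate onto the target CPU so the drain stays local */
		set_cpus_allowed_ptr(current, cpumask_of(cpu));
		drain_local_pages(NULL);
	}
	/* widen affinity back to the whole node when done */
	set_cpus_allowed_ptr(current, cpumask_of_node(nid));
}

The open question in the thread is precisely whether these per-CPU
migrations cost more than the threads they save, which is what Suren
set out to measure before the update above.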