On Mon, Feb 21, 2022 at 2:41 AM 'Petr Mladek' via kernel-team
<kernel-team@xxxxxxxxxxx> wrote:
>
> On Mon 2022-02-21 09:55:12, Michal Hocko wrote:
> > On Sat 19-02-22 09:49:40, Suren Baghdasaryan wrote:
> > > When page allocation in the direct reclaim path fails, the system
> > > will make one attempt to shrink per-cpu page lists and free pages
> > > from high alloc reserves. Draining per-cpu pages into the buddy
> > > allocator can be a very slow operation because it's done using
> > > workqueues and the task in direct reclaim waits for all of them to
> > > finish before proceeding. Currently this time is not accounted as a
> > > psi memory stall.
> > >
> > > While testing mobile devices under extreme memory pressure, when
> > > allocations were failing during direct reclaim, we noticed that psi
> > > events which would be expected in such conditions were not
> > > triggered. After profiling these cases it was determined that the
> > > reason for the missing psi events was that a big chunk of the time
> > > spent in direct reclaim is not accounted as a memory stall,
> > > therefore psi would not reach the levels at which an event is
> > > generated. Further investigation revealed that the bulk of that
> > > unaccounted time was spent inside the drain_all_pages call.
> >
> > It would be cool to have some numbers here.
> >
> > > Annotate drain_all_pages and unreserve_highatomic_pageblock during
> > > page allocation failure in the direct reclaim path so that delays
> > > caused by these calls are accounted as memory stalls.
> >
> > If the draining is too slow and dependent on the current CPU/WQ
> > contention then we should address that. The original intention was
> > that having a dedicated WQ with WQ_MEM_RECLAIM would help to isolate
> > the operation from the rest of the WQ activity. Maybe we need to
> > fine-tune mm_percpu_wq. If that doesn't help then we should revise
> > the WQ model and use something else. Memory reclaim shouldn't really
> > get stuck behind other unrelated work.
>
> WQ_MEM_RECLAIM causes one special worker (rescuer) to be created for
> the workqueue. It is used _only_ when new workers cannot be created
> for some reason, typically when there is not enough memory. It is
> just a fallback, a last resort. It does _not_ speed up processing.
>
> Otherwise, "mm_percpu_wq" is a normal CPU-bound wq. It uses the shared
> per-CPU worker pools. They serialize all work items on a single
> worker. Another worker is used only when a work item goes to sleep
> and waits for something.
>
> It means that the "drain" work is blocked by other work items that
> are using the same worker pool and were queued earlier.

Thanks for the valuable information!

>
> You might try to allocate "mm_percpu_wq" with the WQ_HIGHPRI flag. It
> will use another set of shared per-CPU worker pools where the workers
> have nice -20. The "drain" work still might be blocked by other work
> items using the same pool. But it should be faster because the
> workers have higher priority.

This seems like a good first step to try. I'll make this change and
rerun the tests to see how useful this would be.

>
> Dedicated kthreads might be needed when the "draining" should not be
> blocked by anything. If you go this way then I suggest using the
> kthread_worker API, see "linux/kthread.h". It is very similar to the
> workqueues API but it always creates new kthreads.
>
> Just note that the kthread_worker API does not maintain per-CPU
> workers on its own. If you need per-CPU workers then you need to call
> kthread_create_worker_on_cpu() for_each_online_cpu(). And you would
> need cpu hotplug callbacks to create/destroy the kthreads. For
> example, see start_power_clamp_worker().

Got it. Let me try the WQ_HIGHPRI approach first. Let's see if we can
fix this with minimal changes to the current mechanisms.
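
Just to make sure I test what you have in mind: the change I'm planning
to try is essentially the one-liner below, assuming mm_percpu_wq is
still created in init_mm_internals() with only WQ_MEM_RECLAIM (please
correct me if I'm looking at the wrong spot):

-       mm_percpu_wq = alloc_workqueue("mm_percpu_wq", WQ_MEM_RECLAIM, 0);
+       mm_percpu_wq = alloc_workqueue("mm_percpu_wq",
+                                      WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);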
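
And in case WQ_HIGHPRI turns out not to be enough, my rough (untested)
understanding of the dedicated per-CPU kthread_worker setup you are
describing is something like the sketch below. All the names here
(pcpu_drain_worker, pcpu_drain_online/offline, "mm/pcpu_drain:online")
are placeholders I just made up, and I used a dynamic cpuhp state
instead of open-coding for_each_online_cpu(), since its startup
callback is also invoked for the CPUs that are already online:

#include <linux/cpuhotplug.h>
#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/percpu-defs.h>
#include <linux/sched.h>

/* One dedicated drain worker per CPU. */
static DEFINE_PER_CPU(struct kthread_worker *, pcpu_drain_worker);

static int pcpu_drain_online(unsigned int cpu)
{
        struct kthread_worker *w;

        w = kthread_create_worker_on_cpu(cpu, 0, "pcpu_drain/%u", cpu);
        if (IS_ERR(w))
                return PTR_ERR(w);
        /* Optionally raise priority so draining is not starved. */
        sched_set_fifo(w->task);
        per_cpu(pcpu_drain_worker, cpu) = w;
        return 0;
}

static int pcpu_drain_offline(unsigned int cpu)
{
        kthread_destroy_worker(per_cpu(pcpu_drain_worker, cpu));
        per_cpu(pcpu_drain_worker, cpu) = NULL;
        return 0;
}

static int __init pcpu_drain_init(void)
{
        int ret;

        /* Creates workers for online CPUs now and on future hotplug. */
        ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "mm/pcpu_drain:online",
                                pcpu_drain_online, pcpu_drain_offline);
        return ret < 0 ? ret : 0;
}

drain_all_pages() would then kthread_queue_work() its per-CPU drain
works on these workers instead of queueing them on mm_percpu_wq, and
wait for them with kthread_flush_work(). But as I said, I'd like to
avoid going that far if the simpler change is sufficient.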

Thanks,
Suren.

>
> HTH,
> Petr
>
> --
> To unsubscribe from this group and stop receiving emails from it, send an email to kernel-team+unsubscribe@xxxxxxxxxxx.
>