On Wed, Feb 23, 2022 at 11:43 AM Suren Baghdasaryan <surenb@xxxxxxxxxx> wrote:
>
> On Wed, Feb 23, 2022 at 11:40 AM Suren Baghdasaryan <surenb@xxxxxxxxxx> wrote:
> >
> > When page allocation in the direct reclaim path fails, the system will
> > make one attempt to shrink per-cpu page lists and free pages from high
> > alloc reserves. Draining per-cpu pages into the buddy allocator can be
> > a very slow operation because it's done using workqueues and the task
> > in direct reclaim waits for all of them to finish before proceeding.
> > Currently this time is not accounted as a psi memory stall.
> >
> > While testing mobile devices under extreme memory pressure, when
> > allocations were failing during direct reclaim, we noticed that psi
> > events which would be expected in such conditions were not triggered.
> > After profiling these cases it was determined that the reason for the
> > missing psi events was that a big chunk of the time spent in direct
> > reclaim is not accounted as a memory stall, therefore psi would not
> > reach the levels at which an event is generated. Further investigation
> > revealed that the bulk of that unaccounted time was spent inside the
> > drain_all_pages call.
> >
> > A typical captured case when the drain_all_pages path gets activated:
> >
> > __alloc_pages_slowpath took 44.644.613ns
> >     __perform_reclaim took 751.668ns (1.7%)
> >     drain_all_pages took 43.887.167ns (98.3%)
> >
> > PSI in this case records the time spent in __perform_reclaim but
> > ignores drain_all_pages, IOW it misses 98.3% of the time spent in
> > __alloc_pages_slowpath.
> >
> > Annotate __alloc_pages_direct_reclaim in its entirety so that delays
> > from handling page allocation failure in the direct reclaim path are
> > accounted as a memory stall.
> >
> > Reported-by: Tim Murray <timmurray@xxxxxxxxxx>
> > Signed-off-by: Suren Baghdasaryan <surenb@xxxxxxxxxx>
> > Acked-by: Johannes Weiner <hannes@xxxxxxxxxxx>
> > ---
> > changes in v2:
> > - Added captured sample case to show the delay numbers, per Michal Hocko
> > - Moved annotation from __perform_reclaim into __alloc_pages_direct_reclaim,
> >   per Minchan Kim
> >
> >  mm/page_alloc.c | 11 ++++++-----
> >  1 file changed, 6 insertions(+), 5 deletions(-)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 3589febc6d31..2e9fbf28938f 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -4595,13 +4595,12 @@ __perform_reclaim(gfp_t gfp_mask, unsigned int order,
> >                                          const struct alloc_context *ac)
> >  {
> >          unsigned int noreclaim_flag;
> > -        unsigned long pflags, progress;
> > +        unsigned long progress;
> >
> >          cond_resched();
> >
> >          /* We now go into synchronous reclaim */
> >          cpuset_memory_pressure_bump();
> > -        psi_memstall_enter(&pflags);
> >          fs_reclaim_acquire(gfp_mask);
> >          noreclaim_flag = memalloc_noreclaim_save();
> >
> > @@ -4610,7 +4609,6 @@ __perform_reclaim(gfp_t gfp_mask, unsigned int order,
> >
> >          memalloc_noreclaim_restore(noreclaim_flag);
> >          fs_reclaim_release(gfp_mask);
> > -        psi_memstall_leave(&pflags);
> >
> >          cond_resched();
> >
> > @@ -4624,11 +4622,13 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
> >                  unsigned long *did_some_progress)
> >  {
> >          struct page *page = NULL;
> > +        unsigned long pflags;
> >          bool drained = false;
> >
> > +        psi_memstall_enter(&pflags);
> >          *did_some_progress = __perform_reclaim(gfp_mask, order, ac);
> >          if (unlikely(!(*did_some_progress)))
> > -                return NULL;
> > +                goto out;
> >
> > retry:
> >          page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
> >
> > @@ -4644,7 +4644,8 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
> >                  drained = true;
> >                  goto retry;
> >          }
> > -
> > +        psi_memstall_leave(&pflags);
>
> Oh, psi_memstall_leave should have been *after* the "out" label. Will
> fix and repost.

Fixed in v3:
https://lore.kernel.org/all/20220223194812.1299646-1-surenb@xxxxxxxxxx/

> > +out:
> >          return page;
> >  }
> >
> > --
> > 2.35.1.473.g83b2b277ed-goog
> >
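For clarity, here is a sketch of how __alloc_pages_direct_reclaim() reads
with the v3 fix applied, i.e. with psi_memstall_leave() moved after the
"out" label so that the early-exit path (reclaim made no progress) also
closes the stall window opened by psi_memstall_enter(). It is
reconstructed from the diff and the fix described above, with comments
condensed, rather than quoted verbatim from mm/page_alloc.c:

static inline struct page *
__alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
                unsigned int alloc_flags, const struct alloc_context *ac,
                unsigned long *did_some_progress)
{
        struct page *page = NULL;
        unsigned long pflags;
        bool drained = false;

        /* Account the whole function, drain_all_pages() included, as a stall */
        psi_memstall_enter(&pflags);
        *did_some_progress = __perform_reclaim(gfp_mask, order, ac);
        if (unlikely(!(*did_some_progress)))
                goto out;

retry:
        page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);

        /*
         * If the allocation failed after direct reclaim, pages may be
         * pinned on the per-cpu lists or in high alloc reserves;
         * shrink them and try once more.
         */
        if (!page && !drained) {
                unreserve_highatomic_pageblock(ac, false);
                drain_all_pages(NULL);
                drained = true;
                goto retry;
        }
out:
        /* Both exit paths converge here, keeping enter/leave balanced */
        psi_memstall_leave(&pflags);

        return page;
}

Routing the no-progress case through the same "out" label is what makes
the pairing safe: in the v2 hunk above, the goto jumped past
psi_memstall_leave(), so a task whose reclaim made no progress would
have been left with an unbalanced memstall state.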