On Wed 01-04-20 16:08:55, Daniel Jordan wrote:
[...]
> From: Daniel Jordan <daniel.m.jordan@xxxxxxxxxx>
> Date: Fri, 27 Mar 2020 17:29:05 -0400
> Subject: [PATCH] mm: call touch_nmi_watchdog() on max order boundaries in
>  deferred init
> 
> deferred_init_memmap() disables interrupts the entire time, so it calls
> touch_nmi_watchdog() periodically to avoid soft lockup splats. Soon it
> will run with interrupts enabled, at which point cond_resched() should
> be used instead.
> 
> deferred_grow_zone() makes the same watchdog calls through code shared
> with deferred init but will continue to run with interrupts disabled, so
> it can't call cond_resched().
> 
> Pull the watchdog calls up to these two places to allow the first to be
> changed later, independently of the second. The frequency reduces from
> twice per pageblock (init and free) to once per max order block.

This makes sense but I am not really sure this is necessary for the
stable backport.

> Signed-off-by: Daniel Jordan <daniel.m.jordan@xxxxxxxxxx>

Acked-by: Michal Hocko <mhocko@xxxxxxxx>

> ---
>  mm/page_alloc.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 212734c4f8b0..4cf18c534233 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1639,7 +1639,6 @@ static void __init deferred_free_pages(unsigned long pfn,
>  		} else if (!(pfn & nr_pgmask)) {
>  			deferred_free_range(pfn - nr_free, nr_free);
>  			nr_free = 1;
> -			touch_nmi_watchdog();
>  		} else {
>  			nr_free++;
>  		}
> @@ -1669,7 +1668,6 @@ static unsigned long __init deferred_init_pages(struct zone *zone,
>  			continue;
>  		} else if (!page || !(pfn & nr_pgmask)) {
>  			page = pfn_to_page(pfn);
> -			touch_nmi_watchdog();
>  		} else {
>  			page++;
>  		}
> @@ -1813,8 +1811,10 @@ static int __init deferred_init_memmap(void *data)
>  	 * that we can avoid introducing any issues with the buddy
>  	 * allocator.
>  	 */
> -	while (spfn < epfn)
> +	while (spfn < epfn) {
>  		nr_pages += deferred_init_maxorder(&i, zone, &spfn, &epfn);
> +		touch_nmi_watchdog();
> +	}
>  zone_empty:
>  	pgdat_resize_unlock(pgdat, &flags);
> 
> @@ -1908,6 +1908,7 @@ deferred_grow_zone_locked(pg_data_t *pgdat, struct zone *zone,
>  		first_deferred_pfn = spfn;
> 
>  		nr_pages += deferred_init_maxorder(&i, zone, &spfn, &epfn);
> +		touch_nmi_watchdog();
> 
>  		/* We should only stop along section boundaries */
>  		if ((first_deferred_pfn ^ spfn) < PAGES_PER_SECTION)
> -- 
> 2.25.0

-- 
Michal Hocko
SUSE Labs