On Wed, Sep 10, 2014 at 07:32:20AM +0300, Leon Romanovsky wrote:
> Hi Johannes,
>
> On Tue, Sep 9, 2014 at 4:15 PM, Johannes Weiner <hannes@xxxxxxxxxxx> wrote:
> > The zone allocation batches can easily underflow due to higher-order
> > allocations or spills to remote nodes. On SMP that's fine, because
> > underflows are expected from concurrency and dealt with by returning
> > 0. But on UP, zone_page_state will just return a wrapped unsigned
> > long, which will get past the <= 0 check and then consider the zone
> > eligible until its watermarks are hit.
> >
> > 3a025760fc15 ("mm: page_alloc: spill to remote nodes before waking
> > kswapd") already made the counter-resetting use atomic_long_read() to
> > accommodate underflows from remote spills, but it didn't go all the
> > way with it. Make it clear that these batches are expected to go
> > negative regardless of concurrency, and use atomic_long_read()
> > everywhere.
> >
> > Fixes: 81c0a2bb515f ("mm: page_alloc: fair zone allocator policy")
> > Reported-by: Vlastimil Babka <vbabka@xxxxxxx>
> > Reported-by: Leon Romanovsky <leon@xxxxxxx>
> > Signed-off-by: Johannes Weiner <hannes@xxxxxxxxxxx>
> > Acked-by: Mel Gorman <mgorman@xxxxxxx>
> > Cc: "3.12+" <stable@xxxxxxxxxx>
> > ---
> >  mm/page_alloc.c | 7 +++----
> >  1 file changed, 3 insertions(+), 4 deletions(-)
> >
> > Sorry I forgot to CC you, Leon. Resend with updated Tags.
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 18cee0d4c8a2..eee961958021 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -1612,7 +1612,7 @@ again:
> >  	}
> >
> >  	__mod_zone_page_state(zone, NR_ALLOC_BATCH, -(1 << order));
> > -	if (zone_page_state(zone, NR_ALLOC_BATCH) == 0 &&
> > +	if (atomic_long_read(&zone->vm_stat[NR_ALLOC_BATCH]) <= 0 &&
> >  	    !zone_is_fair_depleted(zone))
> >  		zone_set_flag(zone, ZONE_FAIR_DEPLETED);
> >
> > @@ -5701,9 +5701,8 @@ static void __setup_per_zone_wmarks(void)
> >  		zone->watermark[WMARK_HIGH] = min_wmark_pages(zone) + (tmp >> 1);
> >
> >  		__mod_zone_page_state(zone, NR_ALLOC_BATCH,
> > -				      high_wmark_pages(zone) -
> > -				      low_wmark_pages(zone) -
> > -				      zone_page_state(zone, NR_ALLOC_BATCH));
> > +			high_wmark_pages(zone) - low_wmark_pages(zone) -
> > +			atomic_long_read(&zone->vm_stat[NR_ALLOC_BATCH]));
> >
> >  		setup_zone_migrate_reserve(zone);
> >  		spin_unlock_irqrestore(&zone->lock, flags);
>
> I think the better way will be to apply Mel's patch
> https://lkml.org/lkml/2014/9/8/214 which fixes the zone_page_state shadow
> casting issue and converts all atomic_long_read(&zone->vm_stat[NR_ALLOC_BATCH])
> calls to zone_page_state(zone, NR_ALLOC_BATCH). This move will unify access
> to vm_stat.

It's not that simple. The counter can go way negative, and we need that
negative number, not 0, to calculate the reset delta. As I said in response
to Mel's patch, we could make the vmstat API signed, but I'm not convinced
that is reasonable, given the 99% majority of use cases.