On Tue 22-09-20 16:37:05, Vlastimil Babka wrote:
> We currently call pageset_set_high_and_batch() for each possible cpu, which
> repeats the same calculations of high and batch values.
> 
> Instead call the function just once per zone, and make it apply the calculated
> values to all per-cpu pagesets of the zone.
> 
> This also allows removing the zone_pageset_init() and __zone_pcp_update()
> wrappers.
> 
> No functional change.
> 
> Signed-off-by: Vlastimil Babka <vbabka@xxxxxxx>
> Reviewed-by: Oscar Salvador <osalvador@xxxxxxx>
> Reviewed-by: David Hildenbrand <david@xxxxxxxxxx>

I like this. One question below.

Acked-by: Michal Hocko <mhocko@xxxxxxxx>

> ---
>  mm/page_alloc.c | 42 ++++++++++++++++++------------------------
>  1 file changed, 18 insertions(+), 24 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index a163c5e561f2..26069c8d1b19 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -6219,13 +6219,14 @@ static void setup_pageset(struct per_cpu_pageset *p)
>  }
>  
>  /*
> - * Calculate and set new high and batch values for given per-cpu pageset of a
> + * Calculate and set new high and batch values for all per-cpu pagesets of a
>   * zone, based on the zone's size and the percpu_pagelist_fraction sysctl.
>   */
> -static void pageset_set_high_and_batch(struct zone *zone,
> -		struct per_cpu_pageset *p)
> +static void zone_set_pageset_high_and_batch(struct zone *zone)
>  {
>  	unsigned long new_high, new_batch;
> +	struct per_cpu_pageset *p;
> +	int cpu;
>  
>  	if (percpu_pagelist_fraction) {
>  		new_high = zone_managed_pages(zone) / percpu_pagelist_fraction;
> @@ -6237,23 +6238,25 @@ static void pageset_set_high_and_batch(struct zone *zone,
>  		new_high = 6 * new_batch;
>  		new_batch = max(1UL, 1 * new_batch);
>  	}
> -	pageset_update(&p->pcp, new_high, new_batch);
> -}
> -
> -static void __meminit zone_pageset_init(struct zone *zone, int cpu)
> -{
> -	struct per_cpu_pageset *pcp = per_cpu_ptr(zone->pageset, cpu);
>  
> -	pageset_init(pcp);
> -	pageset_set_high_and_batch(zone, pcp);
> +	for_each_possible_cpu(cpu) {
> +		p = per_cpu_ptr(zone->pageset, cpu);
> +		pageset_update(&p->pcp, new_high, new_batch);
> +	}
>  }
>  
>  void __meminit setup_zone_pageset(struct zone *zone)
>  {
> +	struct per_cpu_pageset *p;
>  	int cpu;
> +
>  	zone->pageset = alloc_percpu(struct per_cpu_pageset);
> -	for_each_possible_cpu(cpu)
> -		zone_pageset_init(zone, cpu);
> +	for_each_possible_cpu(cpu) {
> +		p = per_cpu_ptr(zone->pageset, cpu);
> +		pageset_init(p);
> +	}
> +
> +	zone_set_pageset_high_and_batch(zone);

I hope I am not misreading the diff but it seems that setup_zone_pageset
is calling pageset_init which is then done again by
zone_set_pageset_high_and_batch as a part of pageset_update.

-- 
Michal Hocko
SUSE Labs
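
As an aside, for readers who want to see the shape of the change outside the
kernel, here is a minimal user-space sketch of the pattern the patch moves to:
compute high/batch once per zone, then apply the result to every per-cpu
pageset. The struct definitions, NR_CPUS, FRACTION and the batch clamp below
are simplified stand-ins for illustration only, not the kernel's actual types
or sizing rules.

/*
 * Standalone sketch, not kernel code: one high/batch calculation per zone,
 * applied to all per-cpu pagesets, mirroring the new
 * zone_set_pageset_high_and_batch() control flow.
 */
#include <stdio.h>

#define NR_CPUS  4	/* stand-in for the set of possible CPUs */
#define FRACTION 8	/* plays the role of percpu_pagelist_fraction */

struct pcp  { unsigned long high, batch; };
struct zone { unsigned long managed_pages; struct pcp pageset[NR_CPUS]; };

/*
 * Mirrors pageset_update(): publish the new limits for one pageset.
 * (The real kernel function also takes care of ordering the stores.)
 */
static void pageset_update(struct pcp *p, unsigned long high, unsigned long batch)
{
	p->high = high;
	p->batch = batch;
}

/* Compute once per zone, then apply to every CPU's pageset. */
static void zone_set_pageset_high_and_batch(struct zone *zone)
{
	unsigned long new_high = zone->managed_pages / FRACTION;
	/* simplified clamp; the kernel's batch sizing differs */
	unsigned long new_batch = new_high / 4 > 0 ? new_high / 4 : 1;
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		pageset_update(&zone->pageset[cpu], new_high, new_batch);
}

int main(void)
{
	struct zone z = { .managed_pages = 1 << 16 };

	zone_set_pageset_high_and_batch(&z);
	printf("cpu0: high=%lu batch=%lu\n", z.pageset[0].high, z.pageset[0].batch);
	return 0;
}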