On Tue, Jul 23, 2019 at 4:12 PM Michal Hocko <mhocko@xxxxxxxx> wrote:
>
> On Tue 23-07-19 04:08:15, Yafang Shao wrote:
> > This is a follow-up to the
> > commit "mm/compaction.c: clear total_{migrate,free}_scanned before scanning a new zone".
> >
> > These counters are used to track activities while compacting a zone,
> > and they are all set to zero before compacting a new zone in every compact
> > path. Move these common settings into compact_zone() for better
> > management. A new helper, compact_zone_counters_init(), is introduced for
> > this purpose.
>
> The helper seems a bit excessive because we have a single call site, but
> other than that this is an improvement to the current fragile and
> duplicated code.
>

Understood.

> I would just get rid of the helper and squash it into your previous patch,
> which Andrew already took into the mm tree.
>

I appreciate it.

Thanks,
Yafang

> > Signed-off-by: Yafang Shao <laoar.shao@xxxxxxxxx>
> > Cc: Michal Hocko <mhocko@xxxxxxxx>
> > Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
> > Cc: Yafang Shao <shaoyafang@xxxxxxxxxxxxxx>
> > ---
> >  mm/compaction.c | 28 ++++++++++++++--------------
> >  1 file changed, 14 insertions(+), 14 deletions(-)
> >
> > diff --git a/mm/compaction.c b/mm/compaction.c
> > index a109b45..356348b 100644
> > --- a/mm/compaction.c
> > +++ b/mm/compaction.c
> > @@ -2065,6 +2065,19 @@ bool compaction_zonelist_suitable(struct alloc_context *ac, int order,
> >  	return false;
> >  }
> >
> > +
> > +/*
> > + * The counters below are used to track activities while compacting a zone.
> > + * They must be reset before compacting a new zone.
> > + */
> > +static void compact_zone_counters_init(struct compact_control *cc)
> > +{
> > +	cc->total_migrate_scanned = 0;
> > +	cc->total_free_scanned = 0;
> > +	cc->nr_migratepages = 0;
> > +	cc->nr_freepages = 0;
> > +}
> > +
> >  static enum compact_result
> >  compact_zone(struct compact_control *cc, struct capture_control *capc)
> >  {
> > @@ -2075,6 +2088,7 @@ bool compaction_zonelist_suitable(struct alloc_context *ac, int order,
> >  	const bool sync = cc->mode != MIGRATE_ASYNC;
> >  	bool update_cached;
> >
> > +	compact_zone_counters_init(cc);
> >  	cc->migratetype = gfpflags_to_migratetype(cc->gfp_mask);
> >  	ret = compaction_suitable(cc->zone, cc->order, cc->alloc_flags,
> >  							cc->classzone_idx);
> > @@ -2278,10 +2292,6 @@ static enum compact_result compact_zone_order(struct zone *zone, int order,
> >  {
> >  	enum compact_result ret;
> >  	struct compact_control cc = {
> > -		.nr_freepages = 0,
> > -		.nr_migratepages = 0,
> > -		.total_migrate_scanned = 0,
> > -		.total_free_scanned = 0,
> >  		.order = order,
> >  		.search_order = order,
> >  		.gfp_mask = gfp_mask,
> > @@ -2418,10 +2428,6 @@ static void compact_node(int nid)
> >  		if (!populated_zone(zone))
> >  			continue;
> >
> > -		cc.nr_freepages = 0;
> > -		cc.nr_migratepages = 0;
> > -		cc.total_migrate_scanned = 0;
> > -		cc.total_free_scanned = 0;
> >  		cc.zone = zone;
> >  		INIT_LIST_HEAD(&cc.freepages);
> >  		INIT_LIST_HEAD(&cc.migratepages);
> > @@ -2526,8 +2532,6 @@ static void kcompactd_do_work(pg_data_t *pgdat)
> >  	struct compact_control cc = {
> >  		.order = pgdat->kcompactd_max_order,
> >  		.search_order = pgdat->kcompactd_max_order,
> > -		.total_migrate_scanned = 0,
> > -		.total_free_scanned = 0,
> >  		.classzone_idx = pgdat->kcompactd_classzone_idx,
> >  		.mode = MIGRATE_SYNC_LIGHT,
> >  		.ignore_skip_hint = false,
> > @@ -2551,10 +2555,6 @@ static void kcompactd_do_work(pg_data_t *pgdat)
> >  				COMPACT_CONTINUE)
> >  			continue;
> >
> > -		cc.nr_freepages = 0;
> > -		cc.nr_migratepages = 0;
> > -		cc.total_migrate_scanned = 0;
> > -		cc.total_free_scanned = 0;
> >  		cc.zone = zone;
> >  		INIT_LIST_HEAD(&cc.freepages);
> >  		INIT_LIST_HEAD(&cc.migratepages);
> > --
> > 1.8.3.1
>
> --
> Michal Hocko
> SUSE Labs