On 2020/1/3 11:14 PM, Michal Koutný wrote:
> Hi.
>
> On Fri, Dec 13, 2019 at 09:47:36AM +0800, 王贇 <yun.wang@xxxxxxxxxxxxxxxxx> wrote:
>> By monitoring the increments, we will be able to locate the per-cgroup
>> workload which NUMA Balancing can't help with (usually caused by wrong
>> CPU and memory node bindings), then we get the chance to fix that in time.
> I just wonder, do the data based on increments match those you
> obtained previously?

They have a different meaning: now it's just the accumulation of the
local/remote page access counters, so we have to increase the sample
period to the maximum NUMA balancing scan period, which is 1 minute
on my system.

We still get useful information from the increments, for example:

  local 100  remote 1000  <-- bad locality in the last period
  local 0    remote 0     <-- no scan or NUMA PF happened in the last period
  local 100  remote 0     <-- good locality, but not much PF happened

So I won't say they match; they tell the story in different ways :-P
(a toy example of how such increments translate into a locality
percentage is appended at the end of this mail)

>> +static inline void
>> +update_task_locality(struct task_struct *p, int pnid, int cnid, int pages)
>> +{
>> +	if (!static_branch_unlikely(&sched_numa_locality))
>> +		return;
>> +
>> +	/*
>> +	 * pnid != cnid --> remote idx 0
>> +	 * pnid == cnid --> local idx 1
>> +	 */
>> +	p->numa_page_access[!!(pnid == cnid)] += pages;
> If the per-task information isn't used anywhere, why not accumulate
> directly into task's cfs_rq->{local,remote}_page_access?

This is to avoid a hierarchy update on each PF; accumulating the
counters per task and updating the hierarchy together should cost
less. Besides, as they won't be reset now, maybe we could expose
them too.

>> @@ -4298,6 +4359,7 @@ entity_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr, int queued)
>>  	 */
>>  	update_load_avg(cfs_rq, curr, UPDATE_TG);
>>  	update_cfs_group(curr);
>> +	update_group_locality(cfs_rq);
> With the per-NUMA node time tracked separately, isn't it unnecessary
> to do group updates inside entity_tick?

The hierarchy update can't be avoided, and entity_tick() is a good
place since we are already holding the rq lock and iterating the
cfs_rq hierarchy for the current task (a sketch of the fold is
appended below as well).

Regards,
Michael Wang

>
> Regards,
> Michal
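
P.S. Two rough sketches to make the points above more concrete.

First, a toy userspace program showing how the per-period increments
could be turned into a locality percentage; the numbers are the ones
from the example above, and the way the deltas would actually be read
from the cgroup interface is left out on purpose:

  #include <stdio.h>

  /* deltas of the accumulated counters between two samples */
  static void report(unsigned long local_diff, unsigned long remote_diff)
  {
  	if (!local_diff && !remote_diff) {
  		/* no scan or NUMA PF happened in this period */
  		printf("no data in this period\n");
  		return;
  	}
  	printf("locality %lu%%\n",
  	       local_diff * 100 / (local_diff + remote_diff));
  }

  int main(void)
  {
  	report(100, 1000);	/* bad locality, ~9%   */
  	report(0, 0);		/* nothing to judge    */
  	report(100, 0);		/* good locality, 100% */
  	return 0;
  }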
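
Second, a sketch of the tick-time fold I have in mind, not the actual
patch code: update_task_locality() and numa_page_access[0/1] are from
the quoted patch, while the [2/3] snapshot slots and the cfs_rq field
names below are only assumptions for illustration:

  /*
   * Fold the counters accumulated in the PF path into the cfs_rq we
   * are ticking on, so the hierarchy is walked once per tick instead
   * of once per fault.
   */
  static inline void update_group_locality(struct cfs_rq *cfs_rq)
  {
  	unsigned long ldiff, rdiff;

  	if (!static_branch_unlikely(&sched_numa_locality))
  		return;

  	/* delta since the last fold into the hierarchy */
  	rdiff = current->numa_page_access[0] - current->numa_page_access[2];
  	ldiff = current->numa_page_access[1] - current->numa_page_access[3];

  	cfs_rq->remote_page_access += rdiff;
  	cfs_rq->local_page_access += ldiff;

  	/*
  	 * entity_tick() walks bottom-up, so take the snapshot only
  	 * once the root cfs_rq is reached; with the rq lock held no
  	 * new NUMA PF of current can race with the walk.
  	 */
  	if (&rq_of(cfs_rq)->cfs == cfs_rq) {
  		current->numa_page_access[2] = current->numa_page_access[0];
  		current->numa_page_access[3] = current->numa_page_access[1];
  	}
  }

The snapshot slots keep the raw per-task counters monotonic (so they
could be exposed as-is, as mentioned above), while each fault still
ends up in every level of the hierarchy exactly once.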