The patch titled
     memcg: remove unneeded preempt_disable
has been removed from the -mm tree.  Its filename was
     memcg-remove-unneeded-preempt_disable.patch

This patch was dropped because an updated version will be merged

The current -mm tree may be found at http://userweb.kernel.org/~akpm/mmotm/

------------------------------------------------------
Subject: memcg: remove unneeded preempt_disable
From: Greg Thelen <gthelen@xxxxxxxxxx>

Both mem_cgroup_charge_statistics() and mem_cgroup_move_account() were
unnecessarily disabling preemption when adjusting per-cpu counters:

	preempt_disable()
	__this_cpu_xxx()
	__this_cpu_yyy()
	preempt_enable()

With this change preemption is no longer disabled, so a CPU switch is
possible within these routines.  This does not cause a problem because
the total of all cpu counters is summed when reporting stats.  Now both
mem_cgroup_charge_statistics() and mem_cgroup_move_account() look like:

	this_cpu_xxx()
	this_cpu_yyy()

akpm: this is an optimisation for x86 and a deoptimisation for non-x86.
The non-x86 situation will be fixed as architectures implement their
atomic this_cpu_foo() operations.
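For illustration, here are the two idioms side by side, together with the
reader-side summation that makes the preemptible variant safe.  This is a
sketch only, not code from the patch: nr_events is a made-up counter,
while DEFINE_PER_CPU(), this_cpu_inc(), __this_cpu_inc() and
for_each_possible_cpu()/per_cpu() are the real kernel interfaces:

	#include <linux/percpu.h>
	#include <linux/preempt.h>
	#include <linux/cpumask.h>

	static DEFINE_PER_CPU(long, nr_events);	/* illustrative counter */

	static void count_event_old_style(void)
	{
		/*
		 * __this_cpu_inc() requires the caller to keep the task
		 * on one CPU, hence the preempt_disable()/preempt_enable()
		 * pair that this patch removes.
		 */
		preempt_disable();
		__this_cpu_inc(nr_events);
		preempt_enable();
	}

	static void count_event_new_style(void)
	{
		/*
		 * this_cpu_inc() is preemption-safe by itself; on x86 it
		 * compiles to a single per-cpu increment instruction.  If
		 * the task migrates, the increment simply lands in the
		 * slot of whichever CPU executed it.
		 */
		this_cpu_inc(nr_events);
	}

	static long total_events(void)
	{
		long total = 0;
		int cpu;

		/*
		 * Readers sum over all CPUs, so it does not matter which
		 * CPU's slot an increment landed in.  This is why a CPU
		 * switch inside the update path is harmless.
		 */
		for_each_possible_cpu(cpu)
			total += per_cpu(nr_events, cpu);
		return total;
	}

(On architectures without atomic this_cpu operations the generic fallback
still has to suppress preemption internally, which is the non-x86
deoptimisation noted above.)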
Signed-off-by: Greg Thelen <gthelen@xxxxxxxxxx>
Reported-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
Cc: Johannes Weiner <jweiner@xxxxxxxxxx>
Cc: Valdis Kletnieks <Valdis.Kletnieks@xxxxxx>
Cc: Balbir Singh <bsingharora@xxxxxxxxx>
Cc: Daisuke Nishimura <nishimura@xxxxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/memcontrol.c |   24 +++++++++---------------
 1 file changed, 9 insertions(+), 15 deletions(-)

diff -puN mm/memcontrol.c~memcg-remove-unneeded-preempt_disable mm/memcontrol.c
--- a/mm/memcontrol.c~memcg-remove-unneeded-preempt_disable
+++ a/mm/memcontrol.c
@@ -664,26 +664,22 @@ static unsigned long mem_cgroup_read_eve
 static void mem_cgroup_charge_statistics(struct mem_cgroup *memcg,
 					 bool file, int nr_pages)
 {
-	preempt_disable();
-
 	if (file)
-		__this_cpu_add(memcg->stat->count[MEM_CGROUP_STAT_CACHE],
+		this_cpu_add(memcg->stat->count[MEM_CGROUP_STAT_CACHE],
 				nr_pages);
 	else
-		__this_cpu_add(memcg->stat->count[MEM_CGROUP_STAT_RSS],
+		this_cpu_add(memcg->stat->count[MEM_CGROUP_STAT_RSS],
 				nr_pages);
 
 	/* pagein of a big page is an event. So, ignore page size */
-	if (nr_pages > 0)
-		__this_cpu_inc(memcg->stat->events[MEM_CGROUP_EVENTS_PGPGIN]);
-	else {
-		__this_cpu_inc(memcg->stat->events[MEM_CGROUP_EVENTS_PGPGOUT]);
+	if (nr_pages > 0) {
+		this_cpu_inc(memcg->stat->events[MEM_CGROUP_EVENTS_PGPGIN]);
+	} else {
+		this_cpu_inc(memcg->stat->events[MEM_CGROUP_EVENTS_PGPGOUT]);
 		nr_pages = -nr_pages; /* for event */
 	}
 
-	__this_cpu_add(memcg->stat->events[MEM_CGROUP_EVENTS_COUNT], nr_pages);
-
-	preempt_enable();
+	this_cpu_add(memcg->stat->events[MEM_CGROUP_EVENTS_COUNT], nr_pages);
 }
 
 unsigned long
@@ -2704,10 +2700,8 @@ static int mem_cgroup_move_account(struc
 
 	if (PageCgroupFileMapped(pc)) {
 		/* Update mapped_file data for mem_cgroup */
-		preempt_disable();
-		__this_cpu_dec(from->stat->count[MEM_CGROUP_STAT_FILE_MAPPED]);
-		__this_cpu_inc(to->stat->count[MEM_CGROUP_STAT_FILE_MAPPED]);
-		preempt_enable();
+		this_cpu_dec(from->stat->count[MEM_CGROUP_STAT_FILE_MAPPED]);
+		this_cpu_inc(to->stat->count[MEM_CGROUP_STAT_FILE_MAPPED]);
 	}
 	mem_cgroup_charge_statistics(from, PageCgroupCache(pc), -nr_pages);
 	if (uncharge)
_

Patches currently in -mm which might be from gthelen@xxxxxxxxxx are

mm-memcg-remove-needless-recursive-preemption-disabling.patch

--
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html