* Yang Shi | 2013-10-04 09:36:41 [-0700]:

>>> --- a/mm/memcontrol.c
>>> +++ b/mm/memcontrol.c
>>> @@ -2453,8 +2453,11 @@ static void drain_all_stock(struct mem_cgroup *root_memcg, bool sync)
>>>  		if (!test_and_set_bit(FLUSHING_CACHED_CHARGE, &stock->flags)) {
>>>  			if (cpu == curcpu)
>>>  				drain_local_stock(&stock->work);
>>> -			else
>>> +			else {
>>> +				preempt_enable();
>>>  				schedule_work_on(cpu, &stock->work);
>>> +				preempt_disable();
>>> +			}
>>>  		}
>>
>> What ensures that you don't switch CPUs between preempt_enable() &
>> preempt_disable(), and that curcpu is still equal to smp_processor_id()?
>
> drain_all_stock() is called by drain_all_stock_async() or
> drain_all_stock_sync(), and in both the call is protected by a mutex:
>
>	if (!mutex_trylock(&percpu_charge_mutex))
>		return;
>	drain_all_stock(root_memcg, false);
>	mutex_unlock(&percpu_charge_mutex);
>
> So, I suppose this should be able to protect from migration?

No. The mutex only serializes callers of drain_all_stock(); it does not
pin the task to a CPU. It is preempt_disable() that ensures the task
executing drain_all_stock() is not moved, say, from CPU1 to CPU5.

Let's say we run on CPU1. On the first iteration we get moved from CPU1
to CPU5 after preempt_enable(). On a later iteration we have
(cpu == curcpu), i.e. (1 == 1), and invoke drain_local_stock(). Its
argument is ignored, so we execute drain_local_stock() with the data of
CPU5. Later we schedule work for CPU5 again, but we never did it for
CPU1.

The code here is robust enough that nothing bad happens if
drain_local_stock() is invoked twice on one CPU, and the system probably
survives it if one CPU is skipped. However, I would prefer not to have
such an example in the queue where it seems that it is okay to just
enable preemption and invoke schedule_work_on(), because it breaks the
assumptions which are made by get_cpu().

Sebastian
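To make the migration window concrete, here is a non-compilable,
abbreviated kernel-style sketch of the drain_all_stock() loop with the
proposed patch applied. The CPU1/CPU5 scenario in the comments is the
illustrative one from the discussion above, not something the actual
mm/memcontrol.c source claims; filtering checks before the
test_and_set_bit() are elided:

```c
/* Sketch only: abbreviated from mm/memcontrol.c with the proposed
 * patch applied; not compilable on its own. */
static void drain_all_stock(struct mem_cgroup *root_memcg, bool sync)
{
	int cpu, curcpu;

	/* get_cpu() disables preemption and snapshots the current CPU;
	 * say curcpu == 1.  The code below assumes this stays valid. */
	curcpu = get_cpu();
	for_each_online_cpu(cpu) {
		struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);

		/* ... filtering checks elided ... */
		if (!test_and_set_bit(FLUSHING_CACHED_CHARGE, &stock->flags)) {
			if (cpu == curcpu)
				/* The argument is ignored; this drains the
				 * per-CPU stock of whatever CPU we are on
				 * NOW.  After a migration to CPU5, the
				 * (1 == 1) match drains CPU5, not CPU1. */
				drain_local_stock(&stock->work);
			else {
				preempt_enable();
				/* <-- migration window: the scheduler may
				 * move us, e.g. from CPU1 to CPU5, making
				 * the curcpu snapshot above stale. */
				schedule_work_on(cpu, &stock->work);
				preempt_disable();
			}
		}
	}
	put_cpu();
}
```

The net effect of a migration in that window is that one CPU (CPU5
here) can be drained twice while another (CPU1) is skipped entirely,
which is exactly the get_cpu() assumption being broken.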