On 10/4/2013 8:46 AM, Sebastian Andrzej Siewior wrote:
* Yang Shi | 2013-09-16 14:09:18 [-0700]:
---
mm/memcontrol.c | 5 ++++-
1 files changed, 4 insertions(+), 1 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 82a187a..9f7cc0f 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2453,8 +2453,11 @@ static void drain_all_stock(struct mem_cgroup *root_memcg, bool sync)
if (!test_and_set_bit(FLUSHING_CACHED_CHARGE, &stock->flags)) {
if (cpu == curcpu)
drain_local_stock(&stock->work);
- else
+ else {
+ preempt_enable();
schedule_work_on(cpu, &stock->work);
+ preempt_disable();
+ }
}
What ensures that you don't switch CPUs between preempt_enable() and
preempt_disable(), so that curcpu is still smp_processor_id() afterwards?
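For reference, a comment-annotated sketch of the patched loop (kernel-style
pseudocode, elided and not meant to compile) showing where the window opens:

```c
curcpu = get_cpu();	/* disables preemption, pins the task to curcpu */
for_each_online_cpu(cpu) {
	struct memcg_stock_pcp *stock = &per_cpu(stock, cpu);
	...
	if (!test_and_set_bit(FLUSHING_CACHED_CHARGE, &stock->flags)) {
		if (cpu == curcpu)
			drain_local_stock(&stock->work);
		else {
			preempt_enable();	/* migration possible from here... */
			schedule_work_on(cpu, &stock->work);
			preempt_disable();	/* ...so curcpu may no longer be
						 * smp_processor_id() */
		}
	}
}
put_cpu();
```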
drain_all_stock() is called by drain_all_stock_async() or
drain_all_stock_sync(), and in both the call is protected by a mutex:
if (!mutex_trylock(&percpu_charge_mutex))
return;
drain_all_stock(root_memcg, false);
mutex_unlock(&percpu_charge_mutex);
So I suppose this should protect against migration?
Thanks,
Yang
What about removing the get_cpu() & put_cpu() calls (and the cpu == curcpu
shortcut)?
	}
	put_cpu();
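Sketched as a hedged kernel-style fragment (names taken from the quoted code,
loop body elided, not meant to compile): if drain_all_stock() never
special-cases the local CPU, every stock is drained through its per-CPU
worker, nothing depends on the caller staying on one CPU, and the
preempt_enable()/preempt_disable() dance around schedule_work_on() goes away
with the get_cpu()/put_cpu() pair.

```c
/* Sketch: drain_all_stock() without get_cpu()/put_cpu() and without the
 * cpu == curcpu shortcut.  Every stock, including the local one, goes
 * through its per-CPU worker. */
static void drain_all_stock(struct mem_cgroup *root_memcg, bool sync)
{
	int cpu;

	/* Callers hold percpu_charge_mutex, which serializes drains but
	 * does not pin this task to a CPU -- with the shortcut gone, it
	 * no longer needs to. */
	for_each_online_cpu(cpu) {
		struct memcg_stock_pcp *stock = &per_cpu(stock, cpu);
		...
		if (!test_and_set_bit(FLUSHING_CACHED_CHARGE, &stock->flags))
			schedule_work_on(cpu, &stock->work);
	}
	...
}
```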
Sebastian