Peter Zijlstra <peterz@xxxxxxxxxxxxx> writes:

> On Fri, 2010-04-23 at 13:17 -0700, Greg Thelen wrote:
>> -	lock_page_cgroup(pc);
>> +	/*
>> +	 * Unless a page's cgroup reassignment is possible, then avoid grabbing
>> +	 * the lock used to protect the cgroup assignment.
>> +	 */
>> +	rcu_read_lock();
>
> Where is the matching barrier?

Good catch. A call to smp_wmb() belongs in
mem_cgroup_begin_page_cgroup_reassignment() like so:

static void mem_cgroup_begin_page_cgroup_reassignment(void)
{
	VM_BUG_ON(mem_cgroup_account_move_ongoing);
	mem_cgroup_account_move_ongoing = true;
	smp_wmb();
	synchronize_rcu();
}

I'll add this to the patch.

>> +	smp_rmb();
>> +	if (unlikely(mem_cgroup_account_move_ongoing)) {
>> +		local_irq_save(flags);
>
> So the added irq-disable is a bug-fix?

The irq-disable is not needed by the current code. It is only needed by
the upcoming per-memcg dirty page accounting, which will refactor
mem_cgroup_update_file_mapped() into a generic memcg stat update
routine. I assume these locking changes should be bundled with the
dependent memcg dirty page accounting changes, which need the ability
to update counters from irq routines. I'm sorry I didn't make that
clearer.

>> +		lock_page_cgroup(pc);
>> +		locked = true;
>> +	}
>> +
>>  	mem = pc->mem_cgroup;
>>  	if (!mem || !PageCgroupUsed(pc))
>>  		goto done;
>> @@ -1449,6 +1468,7 @@ void mem_cgroup_update_file_mapped(struct page *page, int val)
>>  	/*
>>  	 * Preemption is already disabled. We can use __this_cpu_xxx
>>  	 */
>> +	VM_BUG_ON(preemptible());
>
> Insta-bug here, there is nothing guaranteeing we're not preemptible
> here.

I added the VM_BUG_ON() to assert programmatically what the comment
above it already asserts. All callers of mem_cgroup_update_file_mapped()
hold the pte spinlock, which disables preemption, so I don't think this
VM_BUG_ON() will cause a panic. I will add a function-level comment to
mem_cgroup_update_file_mapped() stating that callers must have
preemption disabled, to make this clearer.

>>  	if (val > 0) {
>>  		__this_cpu_inc(mem->stat->count[MEM_CGROUP_STAT_FILE_MAPPED]);
>>  		SetPageCgroupFileMapped(pc);
>> @@ -1458,7 +1478,11 @@ void mem_cgroup_update_file_mapped(struct page *page, int val)
>>  	}
>>
>>  done:
>> -	unlock_page_cgroup(pc);
>> +	if (unlikely(locked)) {
>> +		unlock_page_cgroup(pc);
>> +		local_irq_restore(flags);
>> +	}
>> +	rcu_read_unlock();
>>  }

--
Greg
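
P.S. To make the barrier pairing explicit: the begin-side helper must
publish mem_cgroup_account_move_ongoing before it waits out the RCU
readers, and each lockless updater orders its load of the flag with
smp_rmb(). Here is a sketch of both sides as I expect them to land; the
end-side helper and its name are my assumption and are not in the
posted patch:

/* Set while a page's cgroup may be reassigned; checked by updaters. */
static bool mem_cgroup_account_move_ongoing;

static void mem_cgroup_begin_page_cgroup_reassignment(void)
{
	VM_BUG_ON(mem_cgroup_account_move_ongoing);
	mem_cgroup_account_move_ongoing = true;
	/* Pairs with the smp_rmb() in mem_cgroup_update_file_mapped(). */
	smp_wmb();
	/* Wait for pre-existing lockless updaters to finish. */
	synchronize_rcu();
}

/* Assumed counterpart: re-enable the lockless fast path. */
static void mem_cgroup_end_page_cgroup_reassignment(void)
{
	VM_BUG_ON(!mem_cgroup_account_move_ongoing);
	mem_cgroup_account_move_ongoing = false;
}

Once synchronize_rcu() returns, any updater still running entered its
RCU read-side critical section after the flag was set, so it sees the
flag and takes the locked slow path.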
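
On the irq-disable point, the generic routine I have in mind looks
roughly like the following. This is purely illustrative: the name
mem_cgroup_update_page_stat() and the idx parameter are placeholders
for the upcoming dirty accounting series, not code from this patch:

/* Illustrative sketch of a generic, irq-safe memcg page stat update. */
void mem_cgroup_update_page_stat(struct page *page,
				 enum mem_cgroup_stat_index idx, int val)
{
	struct page_cgroup *pc = lookup_page_cgroup(page);
	struct mem_cgroup *mem;
	unsigned long flags;
	bool locked = false;

	rcu_read_lock();
	smp_rmb();
	if (unlikely(mem_cgroup_account_move_ongoing)) {
		/*
		 * Dirty accounting will call this from interrupt context
		 * (e.g. as writeback completes), so the page_cgroup lock
		 * must be taken with irqs disabled.
		 */
		local_irq_save(flags);
		lock_page_cgroup(pc);
		locked = true;
	}

	mem = pc->mem_cgroup;
	if (mem && PageCgroupUsed(pc))
		__this_cpu_add(mem->stat->count[idx], val);

	if (unlikely(locked)) {
		unlock_page_cgroup(pc);
		local_irq_restore(flags);
	}
	rcu_read_unlock();
}

mem_cgroup_update_file_mapped() would then become a thin wrapper that
additionally maintains PageCgroupFileMapped().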
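
And to show where the preemption guarantee for VM_BUG_ON(preemptible())
comes from: every caller reaches mem_cgroup_update_file_mapped() with
the pte spinlock held, along these lines (an illustrative caller shape,
with mm, pmd, address and page coming from the surrounding code, not a
literal quote of mm/rmap.c):

	pte_t *pte;
	spinlock_t *ptl;

	pte = pte_offset_map_lock(mm, pmd, address, &ptl);
	/*
	 * pte_offset_map_lock() took the pte spinlock, and spin_lock()
	 * disables preemption, so preemptible() is false here and the
	 * VM_BUG_ON() cannot fire from this path.
	 */
	mem_cgroup_update_file_mapped(page, 1);
	pte_unmap_unlock(pte, ptl);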