Hi Clark,

On Fri, May 29, 2015 at 10:48:15AM -0500, Clark Williams wrote:
> @@ -5845,7 +5845,7 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
> 	page_counter_uncharge(&memcg->memory, 1);
>
> 	/* XXX: caller holds IRQ-safe mapping->tree_lock */
> -	VM_BUG_ON(!irqs_disabled());
> +	VM_BUG_ON(!spin_is_locked(&page_mapping(page)->tree_lock));
>
> 	mem_cgroup_charge_statistics(memcg, page, -1);

It's not about the lock, it's about preemption: the charge statistics
use __this_cpu operations, and they are updated from both process and
interrupt context. This function really should do a local_irq_save().

I only added the VM_BUG_ON() to document that we know the caller is
holding an IRQ-safe lock, so we don't need to bother with another
level of IRQ saving.

How does this translate to RT? I don't know. But if switching to
explicit IRQ toggling would help you guys out, we can do that. It is
in the swapout path, after all; the optimization isn't that important.
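Roughly, the explicit toggling would look like the sketch below. This
is an untested illustration, not a patch: it shows only the tail of
mem_cgroup_swapout() quoted above, and assumes everything around it
stays as-is.

	unsigned long flags;

	page_counter_uncharge(&memcg->memory, 1);

	/*
	 * mem_cgroup_charge_statistics() uses __this_cpu operations
	 * and runs in both process and interrupt context, so disable
	 * IRQs around it here instead of asserting that the caller's
	 * mapping->tree_lock already did.
	 */
	local_irq_save(flags);
	mem_cgroup_charge_statistics(memcg, page, -1);
	local_irq_restore(flags);

The cost is one extra IRQ save/restore in the swapout path, which, as
said above, seems acceptable.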