On Fri, 13 Feb 2009 08:56:40 +0900 KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> On Thu, 12 Feb 2009 22:28:33 +0530
> Balbir Singh <balbir@xxxxxxxxxxxxxxxxxx> wrote:
>
> > * Ingo Molnar <mingo@xxxxxxx> [2009-02-12 12:28:54]:
> >
> > > * KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> > >
> > > > On Thu, 12 Feb 2009 11:21:13 +0100
> > > > Ingo Molnar <mingo@xxxxxxx> wrote:
> > >
> > > The question is, are these local IRQ flags manipulations really needed
> > > in this code, and if yes, why?
> >
> > We needed the local IRQ flags, since these counters are updated from
> > page fault context and from reclaim context with lru_lock held and
> > IRQs disabled. I've been thinking about replacing the spinlock with a
> > seqlock, but have not gotten to it yet.
> >
> Hmm? I don't understand. Why do we have to disable IRQs here again?
> And:
> - try_to_unmap() is called from shrink_page_list(), where zone->lru_lock is not held.
> - The page fault path doesn't hold zone->lru_lock.
>
> My only concern is shmem, but I think it doesn't actually call charge() within a lock.

Clarification :)

res_counter_charge() is called from
 - page fault
   => under down_read(mmap_sem); lock_page() may be held. IRQ=ENABLED
 - add_to_page_cache
   => under lock_page(); mapping->tree_lock is *not* held. IRQ=DISABLED
 - shmem
   => info->lock is held; we use __GFP_NOWAIT here. IRQ=ENABLED
 - shmem
   => info->lock is *not* held; GFP_KERNEL is used here. IRQ=ENABLED
 - migration
   => under lock_page() and mmap_sem. IRQ=ENABLED

res_counter_uncharge() is called from
 - page_remove_rmap() (only for anonymous pages)
   => anon_vma->lock, pte_lock(), and lock_page() can be held. IRQ=ENABLED?
 - remove_from_page_cache()
   => lock_page() and mapping->tree_lock are held. IRQ=DISABLED

Summary:
 "Charge" is considered a heavy operation, and its call paths are placed where
 the thread can sleep, AMAP (as much as possible).
 "Uncharge" is considered a light operation, and its call paths can be under a
 number of spinlocks.

Bye,
-Kame
--
To unsubscribe from this list: send the line "unsubscribe linux-rt-users" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
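
To illustrate the constraint discussed above (uncharge can run under
mapping->tree_lock with IRQs already disabled, while most charge paths run
with IRQs enabled), here is a minimal, hypothetical sketch of a charge/uncharge
counter protected by spin_lock_irqsave(). It is not the actual kernel
res_counter code; the names fake_counter, fake_counter_charge and
fake_counter_uncharge are made up for the example.

#include <linux/spinlock.h>
#include <linux/errno.h>

/* Hypothetical counter; NOT the real res_counter structure. */
struct fake_counter {
	spinlock_t lock;
	unsigned long usage;
	unsigned long limit;
};

/*
 * Charge is called from contexts where IRQs may be enabled (page fault,
 * shmem, migration) or disabled, so take the lock with the irqsave
 * variant rather than spin_lock_irq(), which would unconditionally
 * re-enable IRQs on unlock.
 */
static int fake_counter_charge(struct fake_counter *c, unsigned long val)
{
	unsigned long flags;
	int ret = 0;

	spin_lock_irqsave(&c->lock, flags);
	if (c->usage + val > c->limit)
		ret = -ENOMEM;
	else
		c->usage += val;
	spin_unlock_irqrestore(&c->lock, flags);
	return ret;
}

/*
 * Uncharge can be called from remove_from_page_cache() under
 * mapping->tree_lock with IRQs already disabled; irqsave/irqrestore
 * preserves the caller's IRQ state in that case.
 */
static void fake_counter_uncharge(struct fake_counter *c, unsigned long val)
{
	unsigned long flags;

	spin_lock_irqsave(&c->lock, flags);
	c->usage -= val;
	spin_unlock_irqrestore(&c->lock, flags);
}

The point of the irqsave/irqrestore pair here is only to preserve whatever
IRQ state the caller already has, which matches the mixed ENABLED/DISABLED
states in the call-site list above.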