On Fri, 24 Aug 2012 22:37:55 +0800 Wanpeng Li <liwanp@xxxxxxxxxxxxxxxxxx> wrote:

> From: Gavin Shan <shangw@xxxxxxxxxxxxxxxxxx>
>
> While registering an MMU notifier, a new mmu_notifier_mm instance is
> always allocated and later freed if the current mm_struct's
> mmu_notifier_mm has already been initialized. That causes some
> overhead. This patch tries to eliminate it.
>
> Signed-off-by: Gavin Shan <shangw@xxxxxxxxxxxxxxxxxx>
> Signed-off-by: Wanpeng Li <liwanp@xxxxxxxxxxxxxxxxxx>
> ---
>  mm/mmu_notifier.c |   22 +++++++++++-----------
>  1 files changed, 11 insertions(+), 11 deletions(-)
>
> diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
> index 862b608..fb4067f 100644
> --- a/mm/mmu_notifier.c
> +++ b/mm/mmu_notifier.c
> @@ -192,22 +192,23 @@ static int do_mmu_notifier_register(struct mmu_notifier *mn,
>
>  	BUG_ON(atomic_read(&mm->mm_users) <= 0);
>
> -	ret = -ENOMEM;
> -	mmu_notifier_mm = kmalloc(sizeof(struct mmu_notifier_mm), GFP_KERNEL);
> -	if (unlikely(!mmu_notifier_mm))
> -		goto out;
> -
>  	if (take_mmap_sem)
>  		down_write(&mm->mmap_sem);
>  	ret = mm_take_all_locks(mm);
>  	if (unlikely(ret))
> -		goto out_cleanup;
> +		goto out;
>
>  	if (!mm_has_notifiers(mm)) {
> +		mmu_notifier_mm = kmalloc(sizeof(struct mmu_notifier_mm),
> +					  GFP_ATOMIC);

Why was the code switched to the far weaker GFP_ATOMIC?  We can still
perform sleeping allocations inside mmap_sem.

> +		if (unlikely(!mmu_notifier_mm)) {
> +			ret = -ENOMEM;
> +			goto out_of_mem;
> +		}
>  		INIT_HLIST_HEAD(&mmu_notifier_mm->list);
>  		spin_lock_init(&mmu_notifier_mm->lock);
> +
>  		mm->mmu_notifier_mm = mmu_notifier_mm;
> -		mmu_notifier_mm = NULL;
>  	}
>  	atomic_inc(&mm->mm_count);
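
One way to keep the allocation sleepable would be something like the
below.  It is a rough, completely untested sketch: the error labels
out_unlock and out_free are invented here, the other identifiers are
taken from the quoted hunk.

	if (take_mmap_sem)
		down_write(&mm->mmap_sem);

	/* Only mmap_sem is held at this point, so GFP_KERNEL is still OK. */
	ret = -ENOMEM;
	mmu_notifier_mm = kmalloc(sizeof(struct mmu_notifier_mm), GFP_KERNEL);
	if (unlikely(!mmu_notifier_mm))
		goto out_unlock;

	ret = mm_take_all_locks(mm);
	if (unlikely(ret))
		goto out_free;

	if (!mm_has_notifiers(mm)) {
		INIT_HLIST_HEAD(&mmu_notifier_mm->list);
		spin_lock_init(&mmu_notifier_mm->lock);
		mm->mmu_notifier_mm = mmu_notifier_mm;
		/* The mm now owns the allocation; don't free it on exit. */
		mmu_notifier_mm = NULL;
	}
	atomic_inc(&mm->mm_count);

That does bring back the allocate-then-free on mms which already have
notifiers registered, so if that overhead is what this patch is chasing,
some numbers showing it actually matters would be helpful.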