On Wed, 18 Jul 2012 at 10:43 GMT, Mel Gorman <mgorman@xxxxxxx> wrote:
> +	if (!down_read_trylock(&svma->vm_mm->mmap_sem)) {
> +		mutex_unlock(&mapping->i_mmap_mutex);
> +		goto retry;
> +	}
> +
> +	smmap_sem = &svma->vm_mm->mmap_sem;
> +	spage_table_lock = &svma->vm_mm->page_table_lock;
> +	spin_lock_nested(spage_table_lock, SINGLE_DEPTH_NESTING);
>
> 	saddr = page_table_shareable(svma, vma, addr, idx);
> 	if (saddr) {
> @@ -85,6 +108,10 @@ static void huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
> 			break;
> 		}
> 	}
> +	up_read(smmap_sem);
> +	spin_unlock(spage_table_lock);

Looks like we should do spin_unlock() before up_read(), i.e. release the locks in the reverse order of how they were acquired.