Hi Andrew,

On Wed, Dec 18, 2013 at 04:28:58PM -0800, Andrew Morton wrote:
>On Thu, 19 Dec 2013 08:16:35 +0800 Wanpeng Li <liwanp@xxxxxxxxxxxxxxxxxx> wrote:
>
>> page_get_anon_vma() called in page_referenced_anon() will lock and
>> increase the refcount of the anon_vma; the page itself is not locked
>> for anonymous pages. This patch fixes it by skipping the lock check
>> for anonymous pages.
>>
>> [  588.698828] kernel BUG at mm/rmap.c:1663!
>
>Why is all this suddenly happening.  Did we change something, or did a
>new test get added to trinity?
>

They are introduced by Joonsoo's rmap_walk series.

>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -1660,7 +1660,8 @@ done:
>>
>>  int rmap_walk(struct page *page, struct rmap_walk_control *rwc)
>>  {
>> -	VM_BUG_ON(!PageLocked(page));
>> +	if (!PageAnon(page) || PageKsm(page))
>> +		VM_BUG_ON(!PageLocked(page));
>>
>>  	if (unlikely(PageKsm(page)))
>>  		return rmap_walk_ksm(page, rwc);
>
>Is there any reason why rmap_walk_ksm() and rmap_walk_file() *need*
>PageLocked() whereas rmap_walk_anon() does not?  If so, let's implement
>it like this:

All callsites of rmap_walk():

try_to_unmap()           pages should be locked (checked in rmap_walk())
try_to_munlock()         pages should be locked (checked in try_to_munlock())
page_referenced()        pages should be locked, except anonymous pages
                         (checked in rmap_walk())
page_mkclean()           pages should be locked (checked in page_mkclean())
remove_migration_ptes()  pages should be locked (checked in rmap_walk())

We could instead move the PageLocked(page) check from rmap_walk() to the
callsites, since anonymous pages are not locked in page_referenced().
Regards,
Wanpeng Li

>
>
>--- a/mm/rmap.c~a
>+++ a/mm/rmap.c
>@@ -1716,6 +1716,10 @@ static int rmap_walk_file(struct page *p
> 	struct vm_area_struct *vma;
> 	int ret = SWAP_AGAIN;
>
>+	/*
>+	 * page must be locked because <reason goes here>
>+	 */
>+	VM_BUG_ON(!PageLocked(page));
> 	if (!mapping)
> 		return ret;
> 	mutex_lock(&mapping->i_mmap_mutex);
>@@ -1737,8 +1741,6 @@ static int rmap_walk_file(struct page *p
> int rmap_walk(struct page *page, int (*rmap_one)(struct page *,
> 		struct vm_area_struct *, unsigned long, void *), void *arg)
> {
>-	VM_BUG_ON(!PageLocked(page));
>-
> 	if (unlikely(PageKsm(page)))
> 		return rmap_walk_ksm(page, rmap_one, arg);
> 	else if (PageAnon(page))
>--- a/mm/ksm.c~a
>+++ a/mm/ksm.c
>@@ -2006,6 +2006,9 @@ int rmap_walk_ksm(struct page *page, int
> 	int search_new_forks = 0;
>
> 	VM_BUG_ON(!PageKsm(page));
>+	/*
>+	 * page must be locked because <reason goes here>
>+	 */
> 	VM_BUG_ON(!PageLocked(page));
>
> 	stable_node = page_stable_node(page);
>
>
>Or if there is no reason why the page must be locked for
>rmap_walk_ksm() and rmap_walk_file(), let's just remove rmap_walk()'s
>VM_BUG_ON()?  And rmap_walk_ksm()'s as well - it's duplicative anyway.
>
>--
>To unsubscribe, send a message with 'unsubscribe linux-mm' in
>the body to majordomo@xxxxxxxxx.  For more info on Linux MM,
>see: http://www.linux-mm.org/ .