On Tue, May 12, 2015 at 01:18:39PM +0300, Vladimir Davydov wrote:
> As noted by Paul, the compiler is free to store a temporary result in a
> variable on the stack, heap, or a global unless it is explicitly marked
> volatile; see:
>
> http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/n4455.html#sample-optimizations
>
> This can result in a race between do_wp_page() and shrink_active_list()
> as follows.
>
> In do_wp_page() we can call page_move_anon_rmap(), which sets
> page->mapping as follows:
>
>   anon_vma = (void *) anon_vma + PAGE_MAPPING_ANON;
>   page->mapping = (struct address_space *) anon_vma;
>
> The page in question may be on an LRU list, because nowhere in
> do_wp_page() do we remove it from the list, nor do we take any LRU
> related locks. Although the page is locked, shrink_active_list() can
> still call page_referenced() on it concurrently, because the latter does
> not require an anonymous page to be locked:
>
> CPU0                            CPU1
> ----                            ----
> do_wp_page                      shrink_active_list
>  lock_page                       page_referenced
>                                   PageAnon->yes, so skip trylock_page
>  page_move_anon_rmap
>   page->mapping = anon_vma
>                                  rmap_walk
>                                   PageAnon->no
>                                   rmap_walk_file
>                                    BUG
>   page->mapping += PAGE_MAPPING_ANON
>
> This patch fixes this race by explicitly forbidding the compiler to
> split the page->mapping store in page_move_anon_rmap(), with the aid of
> WRITE_ONCE.
>
> Signed-off-by: Vladimir Davydov <vdavydov@xxxxxxxxxxxxx>
> Cc: "Paul E. McKenney" <paulmck@xxxxxxxxxxxxxxxxxx>
> Cc: "Kirill A. Shutemov" <kirill@xxxxxxxxxxxxx>
> Cc: Rik van Riel <riel@xxxxxxxxxx>
> Cc: Hugh Dickins <hughd@xxxxxxxxxx>
> ---

The paper says "This requires escape analysis: blah blah for this
optimization to be valid", so I'm not sure that case applies here. That
said, we can't guarantee anything about every compiler optimization
technique, so I'm in favor of the patch as future-proofing against
surprising compiler techniques to come.
Another review point I had is whether a lockless page in
shrink_active_list could be turned into a KSM page in the middle of
page_referenced. IOW:

page_referenced
  PageAnon && !PageKsm -> true, so avoid trylock_page
  <... stall starts ...>
    Another CPU makes the page a KSM page
  <... stall ends ...>
  rmap_walk
    PageKsm -> true
    rmap_walk_ksm -> bang, because KSM expects the passed page to be locked

However, we increased page->count in isolate_lru_page before passing the
page to page_referenced, so KSM cannot make the page a KSM page; hence
it's safe.

Acked-by: Minchan Kim <minchan@xxxxxxxxxx>

--
Kind regards,
Minchan Kim