From: Kirill Tkhai <ktkhai@xxxxxxxxxxxxx>
Subject: ksm: fix unlocked iteration over vmas in cmp_and_merge_page()

At this point mm is unlocked, so the vmas or the vma list may change under
us.  Take mmap_sem for read to protect them from modifications.

Link: http://lkml.kernel.org/r/150512788393.10691.8868381099691121308.stgit@localhost.localdomain
Fixes: e86c59b1b12d ("mm/ksm: improve deduplication of zero pages with colouring")
Signed-off-by: Kirill Tkhai <ktkhai@xxxxxxxxxxxxx>
Acked-by: Michal Hocko <mhocko@xxxxxxxx>
Reviewed-by: Andrea Arcangeli <aarcange@xxxxxxxxxx>
Cc: Minchan Kim <minchan@xxxxxxxxxx>
Cc: zhong jiang <zhongjiang@xxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Claudio Imbrenda <imbrenda@xxxxxxxxxxxxxxxxxx>
Cc: "Kirill A. Shutemov" <kirill.shutemov@xxxxxxxxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/ksm.c |    5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff -puN mm/ksm.c~ksm-fix-unlocked-iteration-over-vmas-in-cmp_and_merge_page mm/ksm.c
--- a/mm/ksm.c~ksm-fix-unlocked-iteration-over-vmas-in-cmp_and_merge_page
+++ a/mm/ksm.c
@@ -1990,6 +1990,7 @@ static void stable_tree_append(struct rm
  */
 static void cmp_and_merge_page(struct page *page, struct rmap_item *rmap_item)
 {
+	struct mm_struct *mm = rmap_item->mm;
 	struct rmap_item *tree_rmap_item;
 	struct page *tree_page = NULL;
 	struct stable_node *stable_node;
@@ -2062,9 +2063,11 @@ static void cmp_and_merge_pa
 	if (ksm_use_zero_pages && (checksum == zero_checksum)) {
 		struct vm_area_struct *vma;

-		vma = find_mergeable_vma(rmap_item->mm, rmap_item->address);
+		down_read(&mm->mmap_sem);
+		vma = find_mergeable_vma(mm, rmap_item->address);
 		err = try_to_merge_one_page(vma, page,
 					ZERO_PAGE(rmap_item->address));
+		up_read(&mm->mmap_sem);
 		/*
 		 * In case of failure, the page was not really empty, so we
 		 * need to continue. Otherwise we're done.
_
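
For reference, the locked path that results from this change looks roughly
like the sketch below.  The helper name merge_with_zero_page() is
hypothetical and exists only to show the locking pattern in isolation; in
mm/ksm.c this code sits inline in cmp_and_merge_page() and relies on the
kernel's definitions of down_read()/up_read(), find_mergeable_vma(),
try_to_merge_one_page() and ZERO_PAGE().

	/*
	 * Sketch only, assuming kernel context (mm/ksm.c internals): any
	 * walk over a process' vmas must hold mmap_sem at least for read,
	 * otherwise a concurrent mmap()/munmap() can change the vma list
	 * under us.
	 */
	static int merge_with_zero_page(struct mm_struct *mm,
					struct rmap_item *rmap_item,
					struct page *page)
	{
		struct vm_area_struct *vma;
		int err;

		down_read(&mm->mmap_sem);	/* pin the vma list for the lookup */
		vma = find_mergeable_vma(mm, rmap_item->address);
		err = try_to_merge_one_page(vma, page,
					    ZERO_PAGE(rmap_item->address));
		up_read(&mm->mmap_sem);		/* vmas may change again from here on */

		return err;
	}

The key point is that both the vma lookup and the merge that dereferences
the returned vma happen inside the same read-side critical section.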