In copy_present_page(), after we mark the pte non-writable, we should
check for previous dirty-bit updates and make sure we don't lose the
dirty bit on reset.  Also, avoid marking the pte write-protected again
if copy_present_page() already marked it write-protected.

Cc: Peter Xu <peterx@xxxxxxxxxx>
Cc: Jason Gunthorpe <jgg@xxxxxxxx>
Cc: John Hubbard <jhubbard@xxxxxxxxxx>
Cc: linux-mm@xxxxxxxxx
Cc: linux-kernel@xxxxxxxxxxxxxxx
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Jan Kara <jack@xxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Kirill Shutemov <kirill@xxxxxxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@xxxxxxxxxxxxx>
---
 mm/memory.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/mm/memory.c b/mm/memory.c
index bfe202ef6244..f57b1f04d50a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -848,6 +848,9 @@ copy_present_page(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	if (likely(!page_maybe_dma_pinned(page)))
 		return 1;
 
+	if (pte_dirty(*src_pte))
+		pte = pte_mkdirty(pte);
+
 	/*
 	 * Uhhuh. It looks like the page might be a pinned page,
 	 * and we actually need to copy it. Now we can set the
@@ -904,6 +907,11 @@ copy_present_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	if (retval <= 0)
 		return retval;
 
+	/*
+	 * Fetch the src pte value again; copy_present_page()
+	 * could have modified it.
+	 */
+	pte = *src_pte;
 	get_page(page);
 	page_dup_rmap(page, false);
 	rss[mm_counter(page)]++;
-- 
2.26.2