On Fri, Sep 22, 2017 at 03:01:41PM +0900, Minchan Kim wrote:
> On Thu, Sep 21, 2017 at 01:27:11PM -0700, Shaohua Li wrote:
> > From: Shaohua Li <shli@xxxxxx>
> >
> > MADV_FREE clears the pte dirty bit and then marks the page lazyfree
> > (clears SwapBacked). There is no lock to prevent the page from being
> > added to the swap cache between these two steps by page reclaim. If
> > page reclaim finds such a page, it will simply add it to the swap
> > cache without paging it out to swap, because the page is marked
> > clean. The next page fault will then read data from the swap slot,
> > which doesn't hold the original data, so we have data corruption. To
> > fix the issue, we mark the page dirty and page it out.
> >
> > However, we shouldn't dirty every page that is clean and in the swap
> > cache; a swapin page is in the swap cache and clean too. So we only
> > dirty pages that are added to the swap cache during page reclaim,
> > which can't be swapin pages. Normal anonymous pages should be dirty
> > already.
> >
> > Reported-and-tested-by: Artem Savkov <asavkov@xxxxxxxxxx>
> > Fixes: 802a3a92ad7a ("mm: reclaim MADV_FREE pages")
> > Signed-off-by: Shaohua Li <shli@xxxxxx>
> > Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
> > Cc: Michal Hocko <mhocko@xxxxxxxx>
> > Cc: Hillf Danton <hillf.zj@xxxxxxxxxxxxxxx>
> > Cc: Minchan Kim <minchan@xxxxxxxxxx>
> > Cc: Hugh Dickins <hughd@xxxxxxxxxx>
> > Cc: Rik van Riel <riel@xxxxxxxxxx>
> > Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
> > Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> > ---
> >  mm/vmscan.c | 12 ++++++++++++
> >  1 file changed, 12 insertions(+)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index d811c81..820ee8d 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -980,6 +980,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
> >  		int may_enter_fs;
> >  		enum page_references references = PAGEREF_RECLAIM_CLEAN;
> >  		bool dirty, writeback;
> > +		bool new_swap_page = false;
> >  
> >  		cond_resched();
> >  
> > @@ -1165,6 +1166,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
> >  
> >  				/* Adding to swap updated mapping */
> >  				mapping = page_mapping(page);
> > +				new_swap_page = true;
> >  			}
> >  		} else if (unlikely(PageTransHuge(page))) {
> >  			/* Split file THP */
> > @@ -1185,6 +1187,16 @@ static unsigned long shrink_page_list(struct list_head *page_list,
> >  				nr_unmap_fail++;
> >  				goto activate_locked;
> >  			}
> > +
> > +			/*
> > +			 * MADV_FREE clears the pte dirty bit but does not
> > +			 * clear SwapBacked for the page. We can't directly
> > +			 * free the page because we already set a swap entry
> > +			 * in the pte. The check guarantees this is such a
> > +			 * page and not a clean swapin page.
> > +			 */
> > +			if (!PageDirty(page) && new_swap_page)
> > +				set_page_dirty(page);
> >  		}
> >  
> >  		if (PageDirty(page)) {
> > -- 
> > 2.9.5
> >
> 
> Couldn't we simply roll back to the logic before MADV_FREE's birth?
> 
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index 71ce2d1ccbf7..548c19b5f78e 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -231,7 +231,7 @@ int add_to_swap(struct page *page)
>  	 * deadlock in the swap out path.
>  	 */
>  	/*
> -	 * Add it to the swap cache.
> +	 * Add it to the swap cache and mark it dirty
>  	 */
>  	err = add_to_swap_cache(page, entry,
>  			__GFP_HIGH|__GFP_NOMEMALLOC|__GFP_NOWARN);
> @@ -243,6 +243,7 @@ int add_to_swap(struct page *page)
>  	 */
>  		goto fail;
> 
> +	SetPageDirty(page);
>  	return 1;
> 
> fail:
> 
> To me, it would be more simple/readable rather than introducing
> a new branch in complicated shrink_page_list.

This is neat, thanks for the suggestion! I'll use set_page_dirty, because
the swapcache set_page_dirty does more than just set the dirty flag.

> And I don't see why we cannot merge [1/2] and [2/2].

I feel two separate patches are clearer, but I'll let Andrew decide.

Thanks,
Shaohua

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxx.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
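For reference, the userspace-visible contract being protected here can be
exercised with a small demo program. This is only an illustrative sketch
(the helper name `exercise_madv_free` is made up, and it assumes a Linux
kernel >= 4.5 where MADV_FREE exists): a store to a lazy-freed page must
re-dirty it, so data written after MADV_FREE is guaranteed to survive —
which is exactly what the race above could break.

```c
#define _GNU_SOURCE
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#ifndef MADV_FREE
#define MADV_FREE 8	/* Linux >= 4.5; defined here for older headers */
#endif

/*
 * Hypothetical demo helper: dirty some anonymous pages, mark them
 * lazy-free, then re-dirty the first byte.  Returns that byte, or -1
 * on failure.  MADV_FREE only *permits* the kernel to drop the pages
 * under memory pressure; a later store re-dirties the page and cancels
 * the hint, so the 0x55 written below must be preserved.
 */
static int exercise_madv_free(void)
{
	size_t len = 4 * (size_t)sysconf(_SC_PAGESIZE);
	unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
				MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		return -1;

	memset(p, 0xaa, len);			/* dirty the pages */

	if (madvise(p, len, MADV_FREE) != 0) {	/* e.g. pre-4.5 kernel */
		munmap(p, len);
		return -1;
	}

	p[0] = 0x55;				/* re-dirty: must stick */
	int byte = p[0];
	munmap(p, len);
	return byte;
}
```

Without the fix, a page racing with reclaim between the pte-clean and
SwapBacked-clear steps could be added to swap cache clean and later fault
back stale data, violating the guarantee the helper relies on.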