Re: [PATCH V2 1/2] mm: avoid marking swap cached page as lazyfree

On Fri, Sep 22, 2017 at 11:46:30AM -0700, Shaohua Li wrote:
> From: Shaohua Li <shli@xxxxxx>
> 
> MADV_FREE clears the pte dirty bit and then marks the page lazyfree
> (clears SwapBacked). There is no lock to prevent the page from being
> added to the swap cache between these two steps by page reclaim. If the
> page is added to the swap cache, marking it lazyfree will confuse the
> page fault path if the page is reclaimed and refaulted.
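
The window the commit message describes is roughly this (a simplified
sketch; the flag transitions follow the functions touched by this patch):

    madvise(MADV_FREE)                         page reclaim
    ------------------                         ------------
    clear pte dirty bit
    mark_page_lazyfree(page)
      (page queued in lru_lazyfree_pvecs)
                                               add_to_swap(page)
                                                 SetPageSwapCache(page)
    pagevec drain -> lru_lazyfree_fn(page)
      ClearPageSwapBacked(page)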

If the page is added to the swap cache while it sits in
lru_lazyfree_pvecs, it ends up with !PG_swapbacked, PG_swapcache and
!PG_dirty. The most important one is PG_dirty: without it, the VM will
reclaim the page without *writeback*, so we lose the data.

Although we prevent the page from being added to the swap cache here, we
still lose the data unless we also apply [2/2], so this patch alone
doesn't fix the problem. That's why I said we don't need to separate the
patches.
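
For context, the reclaim-side assumption that makes the missing PG_dirty
fatal looks roughly like this (a simplified sketch, not the literal
mm/vmscan.c code; the helper name is made up):

    /*
     * An anonymous page that is no longer PG_swapbacked is treated as
     * MADV_FREEd; if it is also clean, reclaim drops it without
     * writeback.
     */
    static bool lazyfree_discardable(struct page *page)
    {
    	if (PageAnon(page) && !PageSwapBacked(page)) {
    		/*
    		 * A dirty lazyfree page was written to after
    		 * MADV_FREE and must not be thrown away.
    		 */
    		if (!PageDirty(page))
    			return true;	/* discarded, no writeback */
    	}
    	return false;
    }

A page that lost PG_dirty in the race above therefore looks disposable
to reclaim even though its contents were never written back.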

> 
> Reported-and-tested-by: Artem Savkov <asavkov@xxxxxxxxxx>
> Fixes: 802a3a92ad7a ("mm: reclaim MADV_FREE pages")
> Signed-off-by: Shaohua Li <shli@xxxxxx>
> Cc: stable@xxxxxxxxxxxxxxx
> Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
> Cc: Michal Hocko <mhocko@xxxxxxxx>
> Cc: Hillf Danton <hillf.zj@xxxxxxxxxxxxxxx>
> Cc: Minchan Kim <minchan@xxxxxxxxxx>
> Cc: Hugh Dickins <hughd@xxxxxxxxxx>
> Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Reviewed-by: Rik van Riel <riel@xxxxxxxxxx>
> ---
>  mm/swap.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/swap.c b/mm/swap.c
> index 9295ae9..a77d68f 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -575,7 +575,7 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec,
>  			    void *arg)
>  {
>  	if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
> -	    !PageUnevictable(page)) {
> +	    !PageSwapCache(page) && !PageUnevictable(page)) {
>  		bool active = PageActive(page);
>  
>  		del_page_from_lru_list(page, lruvec,
> @@ -665,7 +665,7 @@ void deactivate_file_page(struct page *page)
>  void mark_page_lazyfree(struct page *page)
>  {
>  	if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
> -	    !PageUnevictable(page)) {
> +	    !PageSwapCache(page) && !PageUnevictable(page)) {
>  		struct pagevec *pvec = &get_cpu_var(lru_lazyfree_pvecs);
>  
>  		get_page(page);
> -- 
> 2.9.5
> 
