Re: [PATCH] mm: fix kswap excessive pressure after wrong condition transfer

On Fri, 1 Jun 2018, Ivan Kalvachev wrote:
> On 5/31/18, Greg Thelen <gthelen@xxxxxxxxxx> wrote:
> > On Thu, May 31, 2018 at 12:34 PM Ivan Kalvachev <ikalvachev@xxxxxxxxx>
> > wrote:
> >>
> >> Fixes commit 69d763fc6d3aee787a3e8c8c35092b4f4960fa5d
> >> (mm: pin address_space before dereferencing it while isolating an LRU page)
> >>
> >> working code:
> >>
> >>     mapping = page_mapping(page);
> >>     if (mapping && !mapping->a_ops->migratepage)
> >>         return ret;
> >>
> >> buggy code:
> >>
> >>     if (!trylock_page(page))
> >>         return ret;
> >>
> >>     mapping = page_mapping(page);
> >>     migrate_dirty = mapping && mapping->a_ops->migratepage;
> >>     unlock_page(page);
> >>     if (!migrate_dirty)
> >>         return ret;
> >>
> >> The problem is that !(a && b) = (!a || !b), while the old condition
> >> was (a && !b).
> >> The commit message of the buggy commit explains the need for
> >> locking/unlocking around the check, but gives no reason for changing
> >> the condition. It seems to be an unintended change.
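> >>
> >> To make the difference concrete, here is a minimal userspace sketch
> >> (not kernel code; it just evaluates both predicates for every
> >> combination of truth values), showing that they disagree exactly
> >> when mapping is NULL, i.e. for anonymous pages:
> >>
> >>     #include <stdio.h>
> >>     #include <stdbool.h>
> >>
> >>     int main(void)
> >>     {
> >>         for (int mapping = 0; mapping <= 1; mapping++) {
> >>             for (int migratepage = 0; migratepage <= 1; migratepage++) {
> >>                 /* working code skipped the page when (a && !b) */
> >>                 bool old_skip = mapping && !migratepage;
> >>                 /* buggy code skips when !(a && b) == (!a || !b) */
> >>                 bool new_skip = !(mapping && migratepage);
> >>                 printf("mapping=%d migratepage=%d old=%d new=%d%s\n",
> >>                        mapping, migratepage, old_skip, new_skip,
> >>                        old_skip != new_skip ? "  <-- differs" : "");
> >>             }
> >>         }
> >>         return 0;
> >>     }
> >>
> >> With the buggy condition, pages with a NULL mapping (anonymous pages)
> >> are refused isolation, which they were not before.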
> >>
> >> The result of that change is noticeable under swap pressure.
> >> Big memory consumers like browsers would have a lot of pages swapped out,
> >> even pages that are being used actively, causing the process to repeatedly
> >> block for a second or longer. At the same time there would be gigabytes of
> >> unused free memory (sometimes half of the total RAM).
> >> The buffers/cache would also be at minimum size.
> >>
> >> Fixes: 69d763fc6d3a ("mm: pin address_space before dereferencing it while isolating an LRU page")
> >> Signed-off-by: Ivan Kalvachev <ikalvachev@xxxxxxxxx>
> >> ---
> >>  mm/vmscan.c | 4 ++--
> >>  1 file changed, 2 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/mm/vmscan.c b/mm/vmscan.c
> >> index 9b697323a88c..83df26078d13 100644
> >> --- a/mm/vmscan.c
> >> +++ b/mm/vmscan.c
> >> @@ -1418,9 +1418,9 @@ int __isolate_lru_page(struct page *page, isolate_mode_t mode)
> >>                                 return ret;
> >>
> >>                         mapping = page_mapping(page);
> >> -                       migrate_dirty = mapping && mapping->a_ops->migratepage;
> >> +                       migrate_dirty = mapping && !mapping->a_ops->migratepage;
> >>                         unlock_page(page);
> >> -                       if (!migrate_dirty)
> >> +                       if (migrate_dirty)
> >>                                 return ret;
> >>                 }
> >>         }
> >> --
> >> 2.17.1
> >
> > This looks like yesterday's https://lkml.org/lkml/2018/5/30/1158
> >
> 
> Yes, it seems to be the same problem.
> It also has a better technical description.

Well, your paragraph above on "Big memory consumers" gives a much
better user viewpoint, and a more urgent case for the patch to go in,
to stable if it does not make 4.17.0.

But I am surprised: the change is in a block of code only used in
one of the modes of compaction (not in reclaim itself), and I thought
it was a mode which gives up quite easily, rather than visibly blocking.

So I wonder if there's another issue to be improved here,
and the mistreatment of the ex-swap pages just exposed it somehow.
Cc'ing Vlastimil and David in case it triggers any insight from them.

> 
> Such a letdown.
> It took me so much time to bisect the issue...

Thank you for all your work on it; odd how we found it at the same
time: I was just porting Mel's patch into another tree, had to make
a change near there, and suddenly noticed that the test was wrong.
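
For reference, one way to restore the old behaviour without inverting
the sense of migrate_dirty is to treat a NULL mapping as migratable;
a sketch of that shape (the hunk in the other thread may differ in
detail) would be:

	mapping = page_mapping(page);
	/* anonymous pages (mapping == NULL) can always be isolated */
	migrate_dirty = !mapping || mapping->a_ops->migratepage;
	unlock_page(page);
	if (!migrate_dirty)
		return ret;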

Hugh

> 
> Well, I hope that the fix will get into the 4.17 release in time.



