Re: [PATCH 11/11] mm/vmscan: Allow arbitrary sized pages to be paged out

On Wed, Sep 16, 2020 at 09:40:10AM +0800, Huang, Ying wrote:
> Matthew Wilcox <willy@xxxxxxxxxxxxx> writes:
> > On Tue, Sep 15, 2020 at 09:40:45AM +0200, SeongJae Park wrote:
> >> On Tue,  8 Sep 2020 20:55:38 +0100 "Matthew Wilcox (Oracle)" <willy@xxxxxxxxxxxxx> wrote:
> >> > Remove the assumption that a compound page has HPAGE_PMD_NR pins from
> >> > the page cache.
> >> > 
> >> > Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
> >> > Cc: Huang Ying <ying.huang@xxxxxxxxx>
> >
> >> > -	int page_cache_pins = PageTransHuge(page) && PageSwapCache(page) ?
> >> > -		HPAGE_PMD_NR : 1;
> >> > +	int page_cache_pins = thp_nr_pages(page);
> >> 
> >> Is it ok to remove the PageSwapCache() check?
> >
> > I think so?  My understanding is that it was added in commit bd4c82c22c36
> > to catch shmem pages, but there was really no reason to only do this for
> > shmem pages.
> 
> The original implementation was to write out anonymous THPs (not
> shmem).  The code should still work after the change, because any THP
> other than a normal anonymous THP in the swap cache is already split
> during reclaim.

Actually, that's a problem I just hit.  Simple to reproduce:

git clone git://git.infradead.org/users/willy/pagecache.git
build it, boot it:
mkdir /mnt/scratch
mkfs.xfs /dev/sdb
mount /dev/sdb /mnt/scratch/
dd if=/dev/zero of=/mnt/scratch/bigfile count=2000 bs=1M
umount /mnt/scratch/
mount /dev/sdb /mnt/scratch/
cat /mnt/scratch/bigfile >/dev/null

(the virtual machine I'm using only has 2GB of memory, so this forces
vmscan to run).  Anyway, we quickly run into OOM and get this kind
of report:

 active_anon:307 inactive_anon:4137 isolated_anon:0
  active_file:0 inactive_file:436964 isolated_file:192
  unevictable:0 dirty:0 writeback:0
  slab_reclaimable:3774 slab_unreclaimable:4132
  mapped:40 shmem:320 pagetables:167 bounce:0
  free:24315 free_pcp:0 free_cma:0

A little debugging shows split_huge_page_to_list() is failing because
the page still has page_private set, so the refcount is one higher
than expected.  This patch makes the problem go away:

@@ -1271,10 +1271,6 @@ static unsigned int shrink_page_list(struct list_head *page_list,
                                /* Adding to swap updated mapping */
                                mapping = page_mapping(page);
                        }
-               } else if (unlikely(PageTransHuge(page))) {
-                       /* Split file THP */
-                       if (split_huge_page_to_list(page, page_list))
-                               goto keep_locked;
                }
 
                /*

but I'm not sure what that's going to do to tmpfs/swap.  Could you guide
me here?
