Re: [patch 108/192] mm: zram: amend SLAB_RECLAIM_ACCOUNT on zspage_cachep

On Fri, Jul 02, 2021 at 10:45:09AM +0800, Zhaoyang Huang wrote:
> On Thu, Jul 1, 2021 at 10:56 PM Minchan Kim <minchan@xxxxxxxxxx> wrote:
> >
> > On Wed, Jun 30, 2021 at 06:52:58PM -0700, Andrew Morton wrote:
> > > From: Zhaoyang Huang <zhaoyang.huang@xxxxxxxxxx>
> > > Subject: mm: zram: amend SLAB_RECLAIM_ACCOUNT on zspage_cachep
> > >
> > > Zspage_cachep was found to be merged with other kmem caches during testing,
> > > which is not good for debugging (zs_pool->zspage_cachep shows up as a
> > > different kmem cache in a memory dumpfile).  It is also necessary to set
> > > this flag because a shrinker has been registered for zspage.
> > >
> > > Adding this flag helps the kernel calculate SLAB_RECLAIMABLE correctly.
> > >
> > > Link: https://lkml.kernel.org/r/1623137297-29685-1-git-send-email-huangzhaoyang@xxxxxxxxx
> > > Signed-off-by: Zhaoyang Huang <zhaoyang.huang@xxxxxxxxxx>
> > > Cc: Minchan Kim <minchan@xxxxxxxxxx>
> > > Cc: Sergey Senozhatsky <senozhatsky@xxxxxxxxxxxx>
> > > Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
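
[ For reference: the diff itself is not quoted above. Presumably the change
  just adds SLAB_RECLAIM_ACCOUNT to the zspage cache creation in
  mm/zsmalloc.c's create_cache(), roughly along these lines (a sketch of the
  presumed change, not the exact patch):

      pool->zspage_cachep = kmem_cache_create("zspage",
                      sizeof(struct zspage), 0,
                      SLAB_RECLAIM_ACCOUNT,   /* the flag the patch adds */
                      NULL);
]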
> >
> > Sorry for the late reply. I don't think this is correct.
> >
> > It's true that a "struct zspage" can be freed by zsmalloc's compaction,
> > which is registered as a slab shrinker, so it is tempting to mark the
> > cache SLAB_RECLAIM_ACCOUNT. However, that only works when the objects in
> > the zspage are heavily fragmented. Once compaction is done, zspages are
> > not discardable until their objects become fragmented again. That means
> > it could hurt reclaim of other reclaimable slab pages, since the zspage
> > slab object pins its page.
> IMHO, reclaiming of a kmem cache is NOT affected by SLAB_RECLAIM_ACCOUNT.
> This flag only affects kmem cache merging[1], the slab page's migrate
> type[2] and the page statistics. Actually, zspage's cache DOES get merged
> with other caches even without SLAB_RECLAIM_ACCOUNT today, which may cause
> zspage's objects to NEVER be discarded. (SLAB_MERGE_SAME introduces
> confusion, as people believe the cache will be merged with others when the
> flag is set, and vice versa.)
> 
> [1]
>  struct kmem_cache *find_mergeable(size_t size, size_t align,
>                 unsigned long flags, const char *name, void (*ctor)(void *))
> ...
>     if ((flags & SLAB_MERGE_SAME) != (s->flags & SLAB_MERGE_SAME))
>         continue;
> 
> [2]
> if (s->flags & SLAB_RECLAIM_ACCOUNT)
>     s->allocflags |= __GFP_RECLAIMABLE;

That's the point here. With SLAB_RECLAIM_ACCOUNT, the page allocator
tries to allocate the cache's slab pages from MIGRATE_RECLAIMABLE
pageblocks, in the belief that the objects on them are easily
reclaimable. Say a page holds objects A, B, C, D and E, where A-D are
easily reclaimable but E is not. In the end the VM cannot reclaim the
page because of E, even though it has already reclaimed A-D, and such
fragmentation can spread across entire MIGRATE_RECLAIMABLE pageblocks
over time.
That's why I'd rather put zspage into MIGRATE_UNMOVABLE from the
beginning: I don't think it is easily reclaimable once compaction
is done.
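
To spell out the chain: SLAB_RECLAIM_ACCOUNT sets __GFP_RECLAIMABLE in the
cache's allocflags (see [2] above), and the page allocator maps the gfp
mobility bits to the pageblock migrate type, roughly like this (paraphrased
sketch of gfp_migratetype() in include/linux/gfp.h, not an exact quote):

    /* __GFP_RECLAIMABLE -> MIGRATE_RECLAIMABLE
     * __GFP_MOVABLE     -> MIGRATE_MOVABLE
     * neither           -> MIGRATE_UNMOVABLE
     */
    static inline int gfp_migratetype(const gfp_t gfp_flags)
    {
            if (unlikely(page_group_by_mobility_disabled))
                    return MIGRATE_UNMOVABLE;

            /* group the allocation by mobility */
            return (gfp_flags & GFP_MOVABLE_MASK) >> GFP_MOVABLE_SHIFT;
    }

So a cache created with SLAB_RECLAIM_ACCOUNT lands its slab pages in
MIGRATE_RECLAIMABLE pageblocks, which is exactly what hurts when the
objects turn out not to be easily reclaimable.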



