Re: [PATCH v5 00/21] mm/zsmalloc: add zpdesc memory descriptor for zswap.zpool

On Thu, Aug 29, 2024 at 05:42:06PM +0800, Alex Shi wrote:
> 
> 
> On 8/28/24 7:19 AM, Vishal Moola wrote:
> > On Wed, Aug 14, 2024 at 03:03:54PM +0900, Sergey Senozhatsky wrote:
> >> On (24/08/08 04:37), Matthew Wilcox wrote:
> >> [..]
> >>>> So I guess if we have something
> >>>>
> >>>> struct zspage {
> >>>> 	..
> >>>> 	struct zpdesc *first_desc;
> >>>> 	..
> >>>> }
> >>>>
> >>>> and we "chain" zpdesc-s to form a zspage, and make each of them point to
> >>>> a corresponding struct page (memdesc -> *page), then it'll resemble current
> >>>> zsmalloc and should work for everyone? I also assume for zpdesc-s zsmalloc
> >>>> will need to maintain a dedicated kmem_cache?
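(For concreteness, one illustrative reading of that chaining idea, with
hypothetical field names; a sketch, not the patchset's actual layout:

	/* one descriptor per physical page, chained to form a zspage */
	struct zpdesc {
		struct zpdesc *next;	/* next subpage in this zspage */
		struct page *page;	/* back-pointer to the struct page */
	};

	struct zspage {
		..
		struct zpdesc *first_desc;	/* head of the zpdesc chain */
		..
	};
)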
> >>>
> >>> Right, we could do that.  Each memdesc has to be a multiple of 16
> >>> bytes, so we'd be doing something like allocating 32 bytes for each
> >>> page.  Is there really 32 bytes of information that we want to
> >>> store for each page?  Or could we store all of the information in
> >>> (a somewhat larger) zspage?  Assuming we allocate 3 pages per
> >>> zspage, that's 96 bytes of per-page descriptors; if we instead
> >>> allocate an extra 64 bytes in the zspage, we've saved 32 bytes per
> >>> zspage.
> >>
> >> I certainly like (and appreciate) the approach that saves us
> >> some bytes here and there.  A zsmalloc zspage can consist of
> >> anywhere from 1 to CONFIG_ZSMALLOC_CHAIN_SIZE (max 16) physical
> >> pages.  I'm trying to understand (in pseudo-C code) what a
> >> "somewhat larger zspage" means.  A fixed-size array (given that we
> >> know the max number of physical pages) per zspage?
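(Purely as illustration, a fixed-size array per zspage could look like
the sketch below, sized for the known maximum chain length:

	struct zspage {
		..
		unsigned int nr_pages;
		struct page *pages[CONFIG_ZSMALLOC_CHAIN_SIZE];
		..
	};

No per-page descriptor allocation at all in that case; the cost is the
always-present array, at most 16 pointers = 128 bytes on 64-bit.)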
> > 
> > I haven't had the opportunity to respond until now as I was on vacation.
> > 
> > With the current approach in a memdesc world, we would do the
> > following (see the sketch after this list):
> > 
> > 1) kmem_cache_alloc() every single Zpdesc
> > 2) Allocate a memdesc/page that points to its own Zpdesc
> > 3) Access/Track Zpdescs directly
> > 4) Use those Zpdescs to build a Zspage
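Roughly, in pseudo-C (error handling omitted; the memdesc hookup and the
helper names are invented for illustration):

	/* steps 1)-4): one kmem_cache allocation per subpage */
	for (i = 0; i < nr_pages; i++) {
		struct zpdesc *zpdesc = kmem_cache_alloc(zpdesc_cachep, gfp);
		struct page *page = alloc_page(gfp);

		page->memdesc = zpdesc;		/* 2) page points to its own zpdesc */
		zpdesc->page = page;		/* 3) access/track zpdescs directly */
		zpdesc_chain(zspage, zpdesc);	/* 4) build the zspage */
	}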
> > 
> > An alternative approach would move more metadata storage from a Zpdesc
> > into a Zspage instead. That extreme (sketched below) would leave us
> > with:
> > 
> > 1) kmem_cache_alloc() once for a Zspage
> > 2) Allocate a memdesc/page that points to the Zspage
> > 3) Use the Zspage to access/track its own subpages (through some magic
> > we would have to figure out)
> > 4) Zpdescs are just Zspages (since all the information would be in a Zspage)
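In the same illustrative pseudo-C, reusing the fixed array from the
earlier sketch (again, invented names, error handling omitted):

	zspage = kmem_cache_alloc(zspage_cachep, gfp);	/* 1) one allocation */
	for (i = 0; i < nr_pages; i++) {
		struct page *page = alloc_page(gfp);

		page->memdesc = zspage;		/* 2) subpages point at the zspage */
		zspage->pages[i] = page;	/* 3) zspage tracks its own subpages */
	}
	/* 4) no separate zpdesc: the zspage is the descriptor */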
> > 
> > IMO, we should introduce zpdescs first, then start to shift
> > metadata from "struct zpdesc" into "struct zspage" until we no longer
> > need "struct zpdesc". My big concern is whether or not this patchset works
> > towards those goals. Will it make consolidating the metadata easier? And are
> > these goals feasible (while maintaining the wins of zsmalloc)? Or should we
> > aim to leave zsmalloc as it is currently implemented?
> 
> Uh, correct me if I am wrong.
> 
> IMHO, regarding what this patchset does, it abstracts the memory descriptor usage
> for zswap/zram. 

Sorry, I misunderstood the patchset. I thought it was creating a
descriptor specifically for zsmalloc, when it seems like this is supposed to
be a generic descriptor for all zpool allocators. The code comments and commit
subjects are misleading and should be changed to reflect that.

I'm on board with using zpdesc for zbud and z3fold as well (otherwise
we'd have to come up with some other plan for them). Once we have a plan
all the maintainers agree on, we can all be on our merry way :)

The questions for all the zpool allocator maintainers are:
1) Does your allocator need the space it's using in struct page (i.e.
would it need a descriptor in a memdesc world)?

2) Is it feasible to store the information elsewhere (outside of struct
page)? And how much effort would that code conversion be?

Thoughts? Seth/Dan, Vitaly/Miaohe, and Sergey?

> The descriptor still overlays the struct page; nothing has changed
> in that regard. What this patchset accomplishes is the use of folios in the guts
> to save some code size, and the introduction of a new concept, zpdesc. 
> This patchset is just an initial step; it does not prejudge either the
> kmem_cache_alloc() approach or the larger-zspage modifications. In
> fact, both approaches require the same fundamental abstraction: zpdesc.
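(For readers new to the thread: "overlays the struct page" is the same
layout trick that struct folio and struct slab use. A minimal sketch,
with an illustrative subset of fields rather than the patchset's exact
layout:

	struct zpdesc {
		unsigned long flags;
		struct list_head lru;
		struct zspage *zspage;
		unsigned int first_obj_offset;
	};
	#define ZPDESC_MATCH(pg, zp) \
		static_assert(offsetof(struct page, pg) == offsetof(struct zpdesc, zp))
	ZPDESC_MATCH(flags, flags);
	ZPDESC_MATCH(lru, lru);

The asserts keep the overlay honest: a zpdesc is never allocated
separately, it is just a typed view of the underlying struct page.)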



