On Fri, Jun 28, 2024 at 12:06 PM <alexs@xxxxxxxxxx> wrote:
>
> From: Alex Shi <alexs@xxxxxxxxxx>
>
> The 1st patch introduces the new memory descriptor zpdesc and renames
> zspage.first_page to zspage.first_zpdesc; no functional change.
>
> Originally-by: Hyeonggon Yoo <42.hyeyoo@xxxxxxxxx>
> Signed-off-by: Alex Shi <alexs@xxxxxxxxxx>
> ---
>  mm/zpdesc.h   | 56 +++++++++++++++++++++++++++++++++++++++++++++++++++
>  mm/zsmalloc.c | 19 ++++++++---------
>  2 files changed, 66 insertions(+), 9 deletions(-)
>  create mode 100644 mm/zpdesc.h

Hi Alex, thanks for your effort in pushing this forward!

> diff --git a/mm/zpdesc.h b/mm/zpdesc.h
> new file mode 100644
> index 000000000000..a1ab5ebaa936
> --- /dev/null
> +++ b/mm/zpdesc.h
> @@ -0,0 +1,56 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/* zpdesc.h: zswap.zpool memory descriptor
> + *
> + * Written by Alex Shi <alexs@xxxxxxxxxx>
> + *            Hyeonggon Yoo <42.hyeyoo@xxxxxxxxx>
> + */
> +#ifndef __MM_ZPDESC_H__
> +#define __MM_ZPDESC_H__
> +
> +/*
> + * struct zpdesc - Memory descriptor for zpool memory, now is for zsmalloc
> + * @flags: Page flags, PG_private: identifies the first component page
> + * @lru: Indirected used by page migration

Maybe Indirected -> Indirectly?

> + * @next: Next zpdesc in a zspage in zsmalloc zpool
> + * @handle: For huge zspage in zsmalloc zpool
> + * @zspage: Pointer to zspage in zsmalloc
> + *
> + * This struct overlays struct page for now. Do not modify without a good
> + * understanding of the issues.
> + */
> +struct zpdesc {
> +	unsigned long flags;
> +	struct list_head lru;
> +	unsigned long _zp_pad_1;

For clarity, I think it'd be better to replace _zp_pad_1 with movable ops,
because mops reuses this 'mapping' field.
> +	union {
> +		/* Next zpdescs in a zspage in zsmalloc zpool */
> +		struct zpdesc *next;
> +		/* For huge zspage in zsmalloc zpool */
> +		unsigned long handle;
> +	};
> +	struct zspage *zspage;

There was a discussion with Yosry on including memcg_data in zpdesc even if
it's not used at the moment. Maybe you can look at:
https://lore.kernel.org/linux-mm/CAB=+i9Quz9iP2-Lq=oQfKVVnzPDtOaKMm=hUPbnRg5hRxH+qaA@xxxxxxxxxxxxxx/

> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index fec1a39e5bbe..67bb80b7413a 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -13,17 +13,17 @@
>
>  /*
>   * Following is how we use various fields and flags of underlying
> - * struct page(s) to form a zspage.
> + * struct zpdesc(page) to form a zspage.
>   *
> - * Usage of struct page fields:
> - *	page->private: points to zspage
> - *	page->index: links together all component pages of a zspage
> + * Usage of struct zpdesc fields:
> + *	zpdesc->zspage: points to zspage
> + *	zpdesc->next: links together all component pages of a zspage
>   *	For the huge page, this is always 0, so we use this field
>   *	to store handle.
>   * page->page_type: PG_zsmalloc, lower 16 bit locate the first object
>   * offset in a subpage of a zspage
>   *
> - * Usage of struct page flags:
> + * Usage of struct zpdesc(page) flags:
>   * PG_private: identifies the first component page
>   * PG_owner_priv_1: identifies the huge component page

The comment for PG_owner_priv_1 can safely be removed, as it hasn't been used
since commit a41ec880aa7b ("zsmalloc: move huge compressed obj from page to
zspage").

> @@ -948,7 +949,7 @@ static void create_page_chain(struct size_class *class, struct zspage *zspage,
>  		set_page_private(page, (unsigned long)zspage);
>  		page->index = 0;
>  		if (i == 0) {
> -			zspage->first_page = page;
> +			zspage->first_zpdesc = page_zpdesc(page);
>  			SetPagePrivate(page);
>  			if (unlikely(class->objs_per_zspage == 1 &&
>  					class->pages_per_zspage == 1))

> @@ -1325,7 +1326,7 @@ static unsigned long obj_malloc(struct zs_pool *pool,
>  		link->handle = handle;
>  	else
>  		/* record handle to page->index */
> -		zspage->first_page->index = handle;
> +		zspage->first_zpdesc->handle = handle;

FYI, this line seems to conflict with commit bcc6116e39f512 ("mm/zsmalloc:
move record_obj() into obj_malloc()") on mm-unstable.

Best,
Hyeonggon