Re: [PATCH 7/9] mm: Free up PG_slab

On 2024/3/21 22:24, Matthew Wilcox (Oracle) wrote:
> Reclaim the Slab page flag by using a spare bit in PageType.  We are
> perennially short of page flags for various purposes, and now that
> the original SLAB allocator has been retired, SLUB does not use the
> mapcount/page_type field.  This lets us remove a number of special cases
> for ignoring mapcount on Slab pages.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
> ---
>  include/linux/page-flags.h     | 21 +++++++++++++++++----
>  include/trace/events/mmflags.h |  2 +-
>  mm/memory-failure.c            |  9 ---------
>  mm/slab.h                      |  2 +-
>  4 files changed, 19 insertions(+), 15 deletions(-)
> 
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index 94eb8a11a321..73e0b17c7728 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -109,7 +109,6 @@ enum pageflags {
>  	PG_active,
>  	PG_workingset,
>  	PG_error,
> -	PG_slab,
>  	PG_owner_priv_1,	/* Owner use. If pagecache, fs may use*/
>  	PG_arch_1,
>  	PG_reserved,
> @@ -524,7 +523,6 @@ PAGEFLAG(Active, active, PF_HEAD) __CLEARPAGEFLAG(Active, active, PF_HEAD)
>  	TESTCLEARFLAG(Active, active, PF_HEAD)
>  PAGEFLAG(Workingset, workingset, PF_HEAD)
>  	TESTCLEARFLAG(Workingset, workingset, PF_HEAD)
> -__PAGEFLAG(Slab, slab, PF_NO_TAIL)
>  PAGEFLAG(Checked, checked, PF_NO_COMPOUND)	   /* Used by some filesystems */
>  
>  /* Xen */
> @@ -931,7 +929,7 @@ PAGEFLAG_FALSE(HasHWPoisoned, has_hwpoisoned)
>  #endif
>  
>  /*
> - * For pages that are never mapped to userspace (and aren't PageSlab),
> + * For pages that are never mapped to userspace,
>   * page_type may be used.  Because it is initialised to -1, we invert the
>   * sense of the bit, so __SetPageFoo *clears* the bit used for PageFoo, and
>   * __ClearPageFoo *sets* the bit used for PageFoo.  We reserve a few high and
> @@ -947,6 +945,7 @@ PAGEFLAG_FALSE(HasHWPoisoned, has_hwpoisoned)
>  #define PG_table	0x00000200
>  #define PG_guard	0x00000400
>  #define PG_hugetlb	0x00000800
> +#define PG_slab		0x00001000
>  
>  #define PageType(page, flag)						\
>  	((page->page_type & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE)
> @@ -1041,6 +1040,20 @@ PAGE_TYPE_OPS(Table, table, pgtable)
>   */
>  PAGE_TYPE_OPS(Guard, guard, guard)
>  
> +FOLIO_TYPE_OPS(slab, slab)
> +
> +/**
> + * PageSlab - Determine if the page belongs to the slab allocator
> + * @page: The page to test.
> + *
> + * Context: Any context.
> + * Return: True for slab pages, false for any other kind of page.
> + */
> +static inline bool PageSlab(const struct page *page)
> +{
> +	return folio_test_slab(page_folio(page));
> +}
> +
>  #ifdef CONFIG_HUGETLB_PAGE
>  FOLIO_TYPE_OPS(hugetlb, hugetlb)
>  #else
> @@ -1121,7 +1134,7 @@ static __always_inline void __ClearPageAnonExclusive(struct page *page)
>  	(1UL << PG_lru		| 1UL << PG_locked	|	\
>  	 1UL << PG_private	| 1UL << PG_private_2	|	\
>  	 1UL << PG_writeback	| 1UL << PG_reserved	|	\
> -	 1UL << PG_slab		| 1UL << PG_active 	|	\
> +	 1UL << PG_active 	|				\
>  	 1UL << PG_unevictable	| __PG_MLOCKED | LRU_GEN_MASK)
>  
>  /*
> diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
> index d55e53ac91bd..e46d6e82765e 100644
> --- a/include/trace/events/mmflags.h
> +++ b/include/trace/events/mmflags.h
> @@ -107,7 +107,6 @@
>  	DEF_PAGEFLAG_NAME(lru),						\
>  	DEF_PAGEFLAG_NAME(active),					\
>  	DEF_PAGEFLAG_NAME(workingset),					\
> -	DEF_PAGEFLAG_NAME(slab),					\
>  	DEF_PAGEFLAG_NAME(owner_priv_1),				\
>  	DEF_PAGEFLAG_NAME(arch_1),					\
>  	DEF_PAGEFLAG_NAME(reserved),					\
> @@ -135,6 +134,7 @@ IF_HAVE_PG_ARCH_X(arch_3)
>  #define DEF_PAGETYPE_NAME(_name) { PG_##_name, __stringify(_name) }
>  
>  #define __def_pagetype_names						\
> +	DEF_PAGETYPE_NAME(slab),					\
>  	DEF_PAGETYPE_NAME(hugetlb),					\
>  	DEF_PAGETYPE_NAME(offline),					\
>  	DEF_PAGETYPE_NAME(guard),					\
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index 9349948f1abf..1cb41ba7870c 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -1239,7 +1239,6 @@ static int me_huge_page(struct page_state *ps, struct page *p)
>  #define mlock		(1UL << PG_mlocked)
>  #define lru		(1UL << PG_lru)
>  #define head		(1UL << PG_head)
> -#define slab		(1UL << PG_slab)
>  #define reserved	(1UL << PG_reserved)
>  
>  static struct page_state error_states[] = {
> @@ -1249,13 +1248,6 @@ static struct page_state error_states[] = {
>  	 * PG_buddy pages only make a small fraction of all free pages.
>  	 */
>  
> -	/*
> -	 * Could in theory check if slab page is free or if we can drop
> -	 * currently unused objects without touching them. But just
> -	 * treat it as standard kernel for now.
> -	 */
> -	{ slab,		slab,		MF_MSG_SLAB,	me_kernel },

Would it be better to leave the above slab case in place, to catch possible unhandled obscure races with slab pages? Though it does look like a slab page shouldn't reach this point.

Thanks.




