Re: [PATCH 5/6] slab: Allocate frozen pages

On 31.05.22 17:06, Matthew Wilcox (Oracle) wrote:
> Since slab does not use the page refcount, it can allocate and
> free frozen pages, saving one atomic operation per free.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
> ---
>  mm/slab.c | 23 +++++++++++------------
>  1 file changed, 11 insertions(+), 12 deletions(-)
> 
> diff --git a/mm/slab.c b/mm/slab.c
> index f8cd00f4ba13..c5c53ed304d1 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -1355,23 +1355,23 @@ slab_out_of_memory(struct kmem_cache *cachep, gfp_t gfpflags, int nodeid)
>  static struct slab *kmem_getpages(struct kmem_cache *cachep, gfp_t flags,
>  								int nodeid)
>  {
> -	struct folio *folio;
> +	struct page *page;
>  	struct slab *slab;
>  
>  	flags |= cachep->allocflags;
>  
> -	folio = (struct folio *) __alloc_pages_node(nodeid, flags, cachep->gfporder);
> -	if (!folio) {
> +	page = __alloc_frozen_pages(flags, cachep->gfporder, nodeid, NULL);
> +	if (!page) {
>  		slab_out_of_memory(cachep, flags, nodeid);
>  		return NULL;
>  	}
>  
> -	slab = folio_slab(folio);
> +	__SetPageSlab(page);
> +	slab = (struct slab *)page;
>  
>  	account_slab(slab, cachep->gfporder, cachep, flags);
> -	__folio_set_slab(folio);
>  	/* Record if ALLOC_NO_WATERMARKS was set when allocating the slab */
> -	if (sk_memalloc_socks() && page_is_pfmemalloc(folio_page(folio, 0)))
> +	if (sk_memalloc_socks() && page_is_pfmemalloc(page))
>  		slab_set_pfmemalloc(slab);
>  
>  	return slab;
> @@ -1383,18 +1383,17 @@ static struct slab *kmem_getpages(struct kmem_cache *cachep, gfp_t flags,
>  static void kmem_freepages(struct kmem_cache *cachep, struct slab *slab)
>  {
>  	int order = cachep->gfporder;
> -	struct folio *folio = slab_folio(slab);
> +	struct page *page = (struct page *)slab;
>  
> -	BUG_ON(!folio_test_slab(folio));
>  	__slab_clear_pfmemalloc(slab);
> -	__folio_clear_slab(folio);
> -	page_mapcount_reset(folio_page(folio, 0));
> -	folio->mapping = NULL;
> +	__ClearPageSlab(page);
> +	page_mapcount_reset(page);
> +	page->mapping = NULL;
>  
>  	if (current->reclaim_state)
>  		current->reclaim_state->reclaimed_slab += 1 << order;
>  	unaccount_slab(slab, order, cachep);
> -	__free_pages(folio_page(folio, 0), order);
> +	free_frozen_pages(page, order);
>  }
>  
>  static void kmem_rcu_free(struct rcu_head *head)

I assume this implies that pages actually allocated *from* the buddy now
have a refcount == 0 while they are in use as slab pages.
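
If I read the series correctly, that is also where the saved atomic op
per free comes from; roughly (paraphrasing __free_pages() from memory,
not verbatim from this series):

	/*
	 * Ordinary free path: __free_pages() drops the last reference
	 * with an atomic dec-and-test before returning the page to the
	 * buddy.
	 */
	if (put_page_testzero(page))
		free_the_page(page, order);

	/*
	 * Frozen path: the refcount was never raised at allocation time,
	 * so free_frozen_pages() can hand the page straight back without
	 * touching page->_refcount.
	 */
	free_frozen_pages(page, order);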

IIRC, page isolation code (e.g., the !page_ref_count() check in
has_unmovable_pages()) assumes that any page with a refcount of 0 is
essentially either already free (buddy) or on its way to being freed
(!buddy).
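
The check I have in mind looks roughly like this (paraphrased from
memory, not the exact has_unmovable_pages() code):

	/*
	 * PFN-walker assumption: a page with refcount 0 is either free
	 * (in the buddy) or about to be freed, so it can be skipped.
	 * Frozen slab pages would now take this path while still in use.
	 */
	if (!page_ref_count(page)) {
		if (PageBuddy(page))
			iter += (1 << buddy_order(page)) - 1;
		continue;
	}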

There might be other PFN walker code (like compaction) that makes
similar assumptions, which only hold for now.

-- 
Thanks,

David / dhildenb




