Re: [RFC PATCH 1/4] perf: Convert perf_mmap_(alloc,free)_page to folios

On Mon, Aug 21, 2023 at 09:20:13PM +0100, Matthew Wilcox (Oracle) wrote:
> Use the folio API to alloc & free memory.  Also pass in the node ID
> instead of the CPU ID since we already did that calculation in the
> caller.  And call numa_mem_id() instead of leaving the node ID as
> -1 and having the MM call numa_mem_id() each time.

It's a bit of a nit, or perhaps an aside, but with this change, passing -1
to the newly invoked __folio_alloc_node() would also trigger a VM_BUG_ON() :)
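
For reference, the check I mean is this (paraphrasing the helper from
include/linux/gfp.h, so the exact form may vary across trees):

	static inline struct folio *__folio_alloc_node(gfp_t gfp,
			unsigned int order, int nid)
	{
		/* A negative nid (e.g. -1) trips this straight away. */
		VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
		VM_WARN_ON((gfp & __GFP_THISNODE) && !node_online(nid));

		return __folio_alloc(gfp, order, nid, NULL);
	}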

>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
> ---
>  kernel/events/ring_buffer.c | 26 ++++++++++++--------------
>  1 file changed, 12 insertions(+), 14 deletions(-)
>
> diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
> index fb1e180b5f0a..cc90d5299005 100644
> --- a/kernel/events/ring_buffer.c
> +++ b/kernel/events/ring_buffer.c
> @@ -770,7 +770,7 @@ void rb_free_aux(struct perf_buffer *rb)
>  #ifndef CONFIG_PERF_USE_VMALLOC
>
>  /*
> - * Back perf_mmap() with regular GFP_KERNEL-0 pages.
> + * Back perf_mmap() with regular GFP_KERNEL pages.
>   */
>
>  static struct page *
> @@ -785,25 +785,23 @@ __perf_mmap_to_page(struct perf_buffer *rb, unsigned long pgoff)
>  	return virt_to_page(rb->data_pages[pgoff - 1]);
>  }
>
> -static void *perf_mmap_alloc_page(int cpu)
> +static void *perf_mmap_alloc_page(int node)

Nitty point, but since we're dealing with folios here, maybe rename this to
perf_mmap_alloc_folio()?
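
I.e., something like this (untested, just to illustrate the naming):

	static void *perf_mmap_alloc_folio(int node)
	{
		struct folio *folio;

		/* Same body as in the patch; only the name changes. */
		folio = __folio_alloc_node(GFP_KERNEL | __GFP_ZERO, 0, node);
		if (!folio)
			return NULL;

		return folio_address(folio);
	}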

>  {
> -	struct page *page;
> -	int node;
> +	struct folio *folio;
>
> -	node = (cpu == -1) ? cpu : cpu_to_node(cpu);
> -	page = alloc_pages_node(node, GFP_KERNEL | __GFP_ZERO, 0);
> -	if (!page)
> +	folio = __folio_alloc_node(GFP_KERNEL | __GFP_ZERO, 0, node);
> +	if (!folio)
>  		return NULL;
>
> -	return page_address(page);
> +	return folio_address(folio);
>  }
>
>  static void perf_mmap_free_page(void *addr)

Same comment re: rename -> perf_mmap_free_folio()

>  {
> -	struct page *page = virt_to_page(addr);
> +	struct folio *folio = virt_to_folio(addr);
>
> -	page->mapping = NULL;
> -	__free_page(page);
> +	folio->mapping = NULL;
> +	folio_put(folio);
>  }
>
>  struct perf_buffer *rb_alloc(int nr_pages, long watermark, int cpu, int flags)
> @@ -818,17 +816,17 @@ struct perf_buffer *rb_alloc(int nr_pages, long watermark, int cpu, int flags)
>  	if (order_base_2(size) > PAGE_SHIFT+MAX_ORDER)
>  		goto fail;
>
> -	node = (cpu == -1) ? cpu : cpu_to_node(cpu);
> +	node = (cpu == -1) ? numa_mem_id() : cpu_to_node(cpu);

This is a good change in general; it's pretty yucky that this code just
assumed -1 == NUMA_NO_NODE and that all the invoked functions would handle
it correctly.
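
For anyone following along: NUMA_NO_NODE is indeed (-1) in
include/linux/numa.h, and the old code only worked because
alloc_pages_node() quietly papers over it (again paraphrasing, exact form
may vary across trees):

	/* include/linux/numa.h */
	#define NUMA_NO_NODE	(-1)

	/* include/linux/gfp.h */
	static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,
							unsigned int order)
	{
		/* Resolve NUMA_NO_NODE to a real node on every call. */
		if (nid == NUMA_NO_NODE)
			nid = numa_mem_id();

		return __alloc_pages_node(nid, gfp_mask, order);
	}

Resolving the node once in rb_alloc() avoids repeating that check on every
allocation, which is exactly what the commit message describes.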

>  	rb = kzalloc_node(size, GFP_KERNEL, node);
>  	if (!rb)
>  		goto fail;
>
> -	rb->user_page = perf_mmap_alloc_page(cpu);
> +	rb->user_page = perf_mmap_alloc_page(node);
>  	if (!rb->user_page)
>  		goto fail_user_page;
>
>  	for (i = 0; i < nr_pages; i++) {
> -		rb->data_pages[i] = perf_mmap_alloc_page(cpu);
> +		rb->data_pages[i] = perf_mmap_alloc_page(node);
>  		if (!rb->data_pages[i])
>  			goto fail_data_pages;
>  	}
> --
> 2.40.1
>

Reviewed-by: Lorenzo Stoakes <lstoakes@xxxxxxxxx>



