Re: [PATCH net-next 1/4] net: xdp: introduce bulking for xdp tx return path

On Tue, 27 Oct 2020 20:04:07 +0100
Lorenzo Bianconi <lorenzo@xxxxxxxxxx> wrote:

> Introduce bulking capability in xdp tx return path (XDP_REDIRECT).
> xdp_return_frame is usually run inside the driver NAPI tx completion
> loop, so it is possible to batch it.
> Current implementation considers only page_pool memory model.
> Convert mvneta driver to xdp_return_frame_bulk APIs.
> 
> Suggested-by: Jesper Dangaard Brouer <brouer@xxxxxxxxxx>

I think you/we need to explain better in this commit message what the
idea/concept behind this bulk return is, or even explain it in a
comment above "xdp_return_frame_bulk".

Maybe add/append the text below to the commit message:

The bulk API introduced is a defer and flush API, that will defer
the return if the xdp_mem_allocator object is the same, identified
via the mem.id field (xdp_mem_info).  Thus, the flush operation will
operate on the same xdp_mem_allocator object.

The bulk queue size of 16 is no coincidence.  It is connected to how
XDP redirect will bulk xmit (up to 16) frames.  Thus, the idea is for
the API to find these boundaries (via mem.id match), which is optimal
for both the I-cache and D-cache of the memory allocator code and
objects.

The data structure (xdp_frame_bulk) used for deferred elements is
stored/allocated on the function call-stack, which allows lock-free
access.
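
To make the usage pattern concrete, something like the sketch below is
what a driver NAPI TX completion loop would do (only the two API calls
and the struct are from this patch; the per-frame helper, the txq
variable and the manual zero-init are my own placeholders):

	/* On-stack bulk queue, flushed once per NAPI TX completion run */
	struct xdp_frame_bulk bq;
	struct xdp_frame *xdpf;

	bq.xa = NULL;	/* no init helper in this version, so clear by hand */
	bq.count = 0;

	/* txq_get_completed_frame() is a made-up helper for illustration */
	while ((xdpf = txq_get_completed_frame(txq)))
		xdp_return_frame_bulk(xdpf, &bq);

	xdp_flush_frame_bulk(&bq);	/* return any frames still deferred */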


> Signed-off-by: Lorenzo Bianconi <lorenzo@xxxxxxxxxx>
> ---
[...]
> diff --git a/include/net/xdp.h b/include/net/xdp.h
> index 3814fb631d52..9567110845ef 100644
> --- a/include/net/xdp.h
> +++ b/include/net/xdp.h
> @@ -104,6 +104,12 @@ struct xdp_frame {
>  	struct net_device *dev_rx; /* used by cpumap */
>  };
>  
> +#define XDP_BULK_QUEUE_SIZE	16

Maybe "#define DEV_MAP_BULK_SIZE 16" should be def to
XDP_BULK_QUEUE_SIZE, to express the described connection.
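
I.e. something like this (just a sketch; IIRC DEV_MAP_BULK_SIZE lives
in include/linux/bpf.h):

	/* include/net/xdp.h */
	#define XDP_BULK_QUEUE_SIZE	16

	/* include/linux/bpf.h */
	#define DEV_MAP_BULK_SIZE	XDP_BULK_QUEUE_SIZE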

> +struct xdp_frame_bulk {
> +	void *q[XDP_BULK_QUEUE_SIZE];
> +	int count;
> +	void *xa;

Just a hunch (not benchmarked), but I think it will be more optimal to
place 'count' and '*xa' above the '*q' array.  (It might not matter at
all, as we cannot control the start alignment when this is on the
stack.)
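
I.e. (untested, just illustrating the suggested member order):

	struct xdp_frame_bulk {
		int count;
		void *xa;
		void *q[XDP_BULK_QUEUE_SIZE];
	};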

> +};
[...]

> diff --git a/net/core/xdp.c b/net/core/xdp.c
> index 48aba933a5a8..93eabd789246 100644
> --- a/net/core/xdp.c
> +++ b/net/core/xdp.c
> @@ -380,6 +380,57 @@ void xdp_return_frame_rx_napi(struct xdp_frame *xdpf)
>  }
>  EXPORT_SYMBOL_GPL(xdp_return_frame_rx_napi);
>  
> +void xdp_flush_frame_bulk(struct xdp_frame_bulk *bq)
> +{
> +	struct xdp_mem_allocator *xa = bq->xa;
> +	int i;
> +
> +	if (unlikely(!xa))
> +		return;
> +
> +	for (i = 0; i < bq->count; i++) {
> +		struct page *page = virt_to_head_page(bq->q[i]);
> +
> +		page_pool_put_full_page(xa->page_pool, page, false);
> +	}
> +	bq->count = 0;
> +}
> +EXPORT_SYMBOL_GPL(xdp_flush_frame_bulk);
> +

Wondering if we should have a comment that explains the intent and idea
behind this function?

/* Defers return when frame belongs to same mem.id as previous frame */

> +void xdp_return_frame_bulk(struct xdp_frame *xdpf,
> +			   struct xdp_frame_bulk *bq)
> +{
> +	struct xdp_mem_info *mem = &xdpf->mem;
> +	struct xdp_mem_allocator *xa, *nxa;
> +
> +	if (mem->type != MEM_TYPE_PAGE_POOL) {
> +		__xdp_return(xdpf->data, &xdpf->mem, false);
> +		return;
> +	}
> +
> +	rcu_read_lock();
> +
> +	xa = bq->xa;
> +	if (unlikely(!xa || mem->id != xa->mem.id)) {
> +		nxa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params);
> +		if (unlikely(!xa)) {
> +			bq->count = 0;
> +			bq->xa = nxa;
> +			xa = nxa;
> +		}
> +	}
> +
> +	if (mem->id != xa->mem.id || bq->count == XDP_BULK_QUEUE_SIZE)
> +		xdp_flush_frame_bulk(bq);
> +
> +	bq->q[bq->count++] = xdpf->data;
> +	if (mem->id != xa->mem.id)
> +		bq->xa = nxa;
> +
> +	rcu_read_unlock();
> +}
> +EXPORT_SYMBOL_GPL(xdp_return_frame_bulk);


-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer



