Re: [PATCH net-next v3] xsk: support use vaddr as ring

On Tue, 14 Feb 2023 15:45:12 +0100, Alexander Lobakin <alexandr.lobakin@xxxxxxxxx> wrote:
> From: Xuan Zhuo <xuanzhuo@xxxxxxxxxxxxxxxxx>
> Date: Tue, 14 Feb 2023 09:51:12 +0800
>
> > When we try to start AF_XDP on machines with a long uptime, memory
> > fragmentation can mean there is no sufficiently large contiguous
> > physical memory region, which causes the start to fail.
>
> [...]
>
> > @@ -1319,13 +1317,10 @@ static int xsk_mmap(struct file *file, struct socket *sock,
> >
> >  	/* Matches the smp_wmb() in xsk_init_queue */
> >  	smp_rmb();
> > -	qpg = virt_to_head_page(q->ring);
> > -	if (size > page_size(qpg))
> > +	if (size > PAGE_ALIGN(q->ring_size))
>
> You can set q->ring_size as PAGE_ALIGN(size) already at the allocation
> to simplify this. I don't see any other places where you use it.

That would work, but I do not think it is appropriate to change the
semantics of ring_size just to simplify this code. It may confuse
readers.
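
To make the tradeoff concrete, a minimal sketch of the two variants
(the first is what this patch does, the second is your suggestion;
only the placement of PAGE_ALIGN() differs):

	/* v3: ring_size keeps the exact ring size, and xsk_mmap()
	 * page-aligns it when validating the requested mmap length.
	 */
	q->ring_size = size;			/* in xskq_create() */

	if (size > PAGE_ALIGN(q->ring_size))	/* in xsk_mmap() */
		return -EINVAL;

	/* suggestion: store the page-aligned value at allocation time,
	 * so xsk_mmap() can compare against it directly.
	 */
	q->ring_size = PAGE_ALIGN(size);	/* in xskq_create() */

	if (size > q->ring_size)		/* in xsk_mmap() */
		return -EINVAL;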

I agree with your other comments.
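
Roughly, xskq_create() would then look like this in v4 -- just a
sketch with your other points folded in (dropping the cast and
assigning ring_size only after the allocation succeeds; the include
ordering fix is in the other hunk):

	struct xsk_queue *xskq_create(u32 nentries, bool umem_queue)
	{
		struct xsk_queue *q;
		size_t size;

		q = kzalloc(sizeof(*q), GFP_KERNEL);
		if (!q)
			return NULL;

		q->nentries = nentries;
		q->ring_mask = nentries - 1;

		size = xskq_get_ring_size(q, umem_queue);

		/* no cast needed, vmalloc_user() returns void * */
		q->ring = vmalloc_user(size);
		if (!q->ring) {
			kfree(q);
			return NULL;
		}

		/* assign only after the allocation has succeeded */
		q->ring_size = size;

		return q;
	}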

Thanks.


>
> >  		return -EINVAL;
> >
> > -	pfn = virt_to_phys(q->ring) >> PAGE_SHIFT;
> > -	return remap_pfn_range(vma, vma->vm_start, pfn,
> > -			       size, vma->vm_page_prot);
> > +	return remap_vmalloc_range(vma, q->ring, 0);
> >  }
> >
> >  static int xsk_notifier(struct notifier_block *this,
> > diff --git a/net/xdp/xsk_queue.c b/net/xdp/xsk_queue.c
> > index 6cf9586e5027..247316bdfcbe 100644
> > --- a/net/xdp/xsk_queue.c
> > +++ b/net/xdp/xsk_queue.c
> > @@ -7,6 +7,7 @@
> >  #include <linux/slab.h>
> >  #include <linux/overflow.h>
> >  #include <net/xdp_sock_drv.h>
> > +#include <linux/vmalloc.h>
>
> Alphabetic order maybe?
>
> >
> >  #include "xsk_queue.h"
> >
> > @@ -23,7 +24,6 @@ static size_t xskq_get_ring_size(struct xsk_queue *q, bool umem_queue)
> >  struct xsk_queue *xskq_create(u32 nentries, bool umem_queue)
> >  {
> >  	struct xsk_queue *q;
> > -	gfp_t gfp_flags;
> >  	size_t size;
> >
> >  	q = kzalloc(sizeof(*q), GFP_KERNEL);
> > @@ -33,12 +33,10 @@ struct xsk_queue *xskq_create(u32 nentries, bool umem_queue)
> >  	q->nentries = nentries;
> >  	q->ring_mask = nentries - 1;
> >
> > -	gfp_flags = GFP_KERNEL | __GFP_ZERO | __GFP_NOWARN |
> > -		    __GFP_COMP  | __GFP_NORETRY;
> >  	size = xskq_get_ring_size(q, umem_queue);
> >
> > -	q->ring = (struct xdp_ring *)__get_free_pages(gfp_flags,
> > -						      get_order(size));
> > +	q->ring_size = size;
>
> Maybe assign size only after successful allocation?
>
> > +	q->ring = (struct xdp_ring *)vmalloc_user(size);
>
> The cast from `void *` is redundant. It was only needed for
> __get_free_pages(), since that returns the address as an unsigned long.
>
> >  	if (!q->ring) {
> >  		kfree(q);
> >  		return NULL;
> > @@ -52,6 +50,6 @@ void xskq_destroy(struct xsk_queue *q)
> >  	if (!q)
> >  		return;
> >
> > -	page_frag_free(q->ring);
> > +	vfree(q->ring);
> >  	kfree(q);
> >  }
> > diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
> > index c6fb6b763658..35922b8b92a8 100644
> > --- a/net/xdp/xsk_queue.h
> > +++ b/net/xdp/xsk_queue.h
> > @@ -45,6 +45,7 @@ struct xsk_queue {
> >  	struct xdp_ring *ring;
> >  	u64 invalid_descs;
> >  	u64 queue_empty_descs;
> > +	size_t ring_size;
> >  };
> >
> >  /* The structure of the shared state of the rings are a simple
> Thanks,
> Olek


