On Fri, 10 Feb 2023 at 03:14, Xuan Zhuo <xuanzhuo@xxxxxxxxxxxxxxxxx> wrote:
>
> When we try to start AF_XDP on some machines that have been up for a
> long time, memory fragmentation may leave no sufficiently large
> contiguous physical memory region, which causes the start to fail.
>
> After AF_XDP fails to allocate contiguous physical memory, this patch
> falls back to vmalloc() to allocate the memory, solving this problem.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@xxxxxxxxxxxxxxxxx>
> Reported-by: kernel test robot <lkp@xxxxxxxxx>
> Link: https://lore.kernel.org/oe-kbuild-all/202302091850.0HBmsDAq-lkp@xxxxxxxxx
> ---
>  net/xdp/xsk.c       |  8 +++++---
>  net/xdp/xsk_queue.c | 21 +++++++++++++++------
>  net/xdp/xsk_queue.h |  1 +
>  3 files changed, 21 insertions(+), 9 deletions(-)
>
> diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> index 9f0561b67c12..33db57548ee3 100644
> --- a/net/xdp/xsk.c
> +++ b/net/xdp/xsk.c
> @@ -1296,7 +1296,6 @@ static int xsk_mmap(struct file *file, struct socket *sock,
>  	struct xdp_sock *xs = xdp_sk(sock->sk);
>  	struct xsk_queue *q = NULL;
>  	unsigned long pfn;
> -	struct page *qpg;
>
>  	if (READ_ONCE(xs->state) != XSK_READY)
>  		return -EBUSY;
> @@ -1319,10 +1318,13 @@ static int xsk_mmap(struct file *file, struct socket *sock,
>
>  	/* Matches the smp_wmb() in xsk_init_queue */
>  	smp_rmb();
> -	qpg = virt_to_head_page(q->ring);
> -	if (size > page_size(qpg))
> +
> +	if (PAGE_ALIGN(q->ring_size) < size)
>  		return -EINVAL;
>
> +	if (is_vmalloc_addr(q->ring))
> +		return remap_vmalloc_range(vma, q->ring, 0);
> +
>  	pfn = virt_to_phys(q->ring) >> PAGE_SHIFT;
>  	return remap_pfn_range(vma, vma->vm_start, pfn,
>  			       size, vma->vm_page_prot);
> diff --git a/net/xdp/xsk_queue.c b/net/xdp/xsk_queue.c
> index 6cf9586e5027..7b03102d1672 100644
> --- a/net/xdp/xsk_queue.c
> +++ b/net/xdp/xsk_queue.c
> @@ -7,6 +7,7 @@
>  #include <linux/slab.h>
>  #include <linux/overflow.h>
>  #include <net/xdp_sock_drv.h>
> +#include <linux/vmalloc.h>
>
>  #include "xsk_queue.h"
>
> @@ -37,14 +38,18 @@ struct xsk_queue *xskq_create(u32 nentries, bool umem_queue)
>  		    __GFP_COMP | __GFP_NORETRY;
>  	size = xskq_get_ring_size(q, umem_queue);
>
> +	q->ring_size = size;
>  	q->ring = (struct xdp_ring *)__get_free_pages(gfp_flags,
>  						      get_order(size));
> -	if (!q->ring) {
> -		kfree(q);
> -		return NULL;
> -	}
> +	if (q->ring)
> +		return q;
> +
> +	q->ring = (struct xdp_ring *)vmalloc_user(size);
> +	if (q->ring)
> +		return q;

Thanks for bringing this to our attention. Interesting to see how hard
it can get to find contiguous memory on a machine that has been up for
a while, given that the ring is not a large area. I am wondering if it
would be better to remove the __get_free_pages() call and just go for
vmalloc_user(). There is no particular reason for the ring to be backed
by physically contiguous pages. Does anyone see a problem with removing
it? If not, please just remove __get_free_pages(), test it, and post a
v2.
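Something like this is what I have in mind for xskq_create() and
xskq_destroy() after that removal. Completely untested sketch, written
from memory against your patch (it keeps the ring_size field you add),
so treat the details as assumptions:

struct xsk_queue *xskq_create(u32 nentries, bool umem_queue)
{
	struct xsk_queue *q;
	size_t size;

	q = kzalloc(sizeof(*q), GFP_KERNEL);
	if (!q)
		return NULL;

	q->nentries = nentries;
	q->ring_mask = nentries - 1;

	size = xskq_get_ring_size(q, umem_queue);
	q->ring_size = size;

	/* vmalloc_user() hands back zeroed, page-aligned memory with
	 * VM_USERMAP set, which is what remap_vmalloc_range() needs,
	 * so the whole gfp_flags dance goes away.
	 */
	q->ring = (struct xdp_ring *)vmalloc_user(size);
	if (!q->ring) {
		kfree(q);
		return NULL;
	}

	return q;
}

void xskq_destroy(struct xsk_queue *q)
{
	if (!q)
		return;

	vfree(q->ring);
	kfree(q);
}

The only cost I can think of is some extra TLB pressure from the
vmalloc mapping, and that should not matter for an area this small.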
>
> -	return q;
> +	kfree(q);
> +	return NULL;
>  }
>
>  void xskq_destroy(struct xsk_queue *q)
> @@ -52,6 +57,10 @@ void xskq_destroy(struct xsk_queue *q)
>  	if (!q)
>  		return;
>
> -	page_frag_free(q->ring);
> +	if (is_vmalloc_addr(q->ring))
> +		vfree(q->ring);
> +	else
> +		page_frag_free(q->ring);
> +
>  	kfree(q);
>  }
> diff --git a/net/xdp/xsk_queue.h b/net/xdp/xsk_queue.h
> index c6fb6b763658..35922b8b92a8 100644
> --- a/net/xdp/xsk_queue.h
> +++ b/net/xdp/xsk_queue.h
> @@ -45,6 +45,7 @@ struct xsk_queue {
>  	struct xdp_ring *ring;
>  	u64 invalid_descs;
>  	u64 queue_empty_descs;
> +	size_t ring_size;
>  };
>
>  /* The structure of the shared state of the rings are a simple
> --
> 2.32.0.3.g01195cf9f
>
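One more consequence of going vmalloc_user()-only: the
remap_pfn_range() path in xsk_mmap() above becomes dead code. The tail
of that function should then collapse to something like this (again
untested; it keeps the size check from your patch):

	/* Matches the smp_wmb() in xsk_init_queue */
	smp_rmb();

	if (size > PAGE_ALIGN(q->ring_size))
		return -EINVAL;

	return remap_vmalloc_range(vma, q->ring, 0);

The pfn variable and the virt_to_phys() call can then be dropped as
well.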