Re: [PATCH bpf-next v2 08/10] xsk: Support UMEM chunk_size > PAGE_SIZE

> Is not the max 64K as you test against XDP_UMEM_MAX_CHUNK_SIZE in
> xdp_umem_reg()?

The absolute max is 64K. If HPAGE_SIZE < 64K, then the effective max
is HPAGE_SIZE instead.
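
In other words, the intended bound is min(XDP_UMEM_MAX_CHUNK_SIZE,
HPAGE_SIZE). A rough sketch of that intent (not the literal patch
code; the helper name and the non-hugetlb fallback are illustrative):

/* Hypothetical helper: upper bound on chunk_size at registration.
 * XDP_UMEM_MAX_CHUNK_SIZE is 64K; HPAGE_SIZE may be smaller than
 * that on some architectures.
 */
static u32 xdp_umem_max_chunk_size(void)
{
#ifdef CONFIG_HUGETLB_PAGE
	return min_t(u32, XDP_UMEM_MAX_CHUNK_SIZE, HPAGE_SIZE);
#else
	return PAGE_SIZE;	/* assumption: no huge pages, no big chunks */
#endif
}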

> > diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h
> > index e96a1151ec75..ed88880d4b68 100644
> > --- a/include/net/xdp_sock.h
> > +++ b/include/net/xdp_sock.h
> > @@ -28,6 +28,9 @@ struct xdp_umem {
> >         struct user_struct *user;
> >         refcount_t users;
> >         u8 flags;
> > +#ifdef CONFIG_HUGETLB_PAGE
>
> Sanity check: have you tried compiling your code without this config set?

Yes. The CI also does this on one of the platforms (hence some of the
bot errors in v1).

> >  static int xdp_umem_pin_pages(struct xdp_umem *umem, unsigned long address)
> >  {
> > +#ifdef CONFIG_HUGETLB_PAGE
>
> Let us try to get rid of most of these #ifdefs sprinkled around the
> code. How about hiding this inside xdp_umem_is_hugetlb() and getting
> rid of these #ifdefs below? Since I believe it is quite uncommon not
> to have this config enabled, we could simplify things by always using
> the page_size in the pool, for example. And ditto for the one in
> struct xdp_umem. What do you think?

I used #ifdef for `page_size` in the pool to get maximum performance
when huge pages are disabled. Since the performance impact is very
small, we could also just not optimize this uncommon case. That said,
I don't find the #ifdefs excessive either.
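
For reference, a minimal sketch of the helper you are suggesting. How
hugetlb is detected here (comparing an always-present page_size field
against PAGE_SIZE) is my assumption for illustration:

/* One helper hides the config check so callers need no #ifdefs. */
static inline bool xdp_umem_is_hugetlb(struct xdp_umem *umem)
{
#ifdef CONFIG_HUGETLB_PAGE
	return umem->page_size > PAGE_SIZE;
#else
	return false;	/* constant-folds away in callers */
#endif
}

With this, xdp_umem_pin_pages() and friends can simply branch on
xdp_umem_is_hugetlb() and let the compiler drop the dead code when the
config is off.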

> > +static void xp_check_dma_contiguity(struct xsk_dma_map *dma_map, u32 page_size)
> >  {
> > -       u32 i;
> > +       u32 stride = page_size >> PAGE_SHIFT; /* in order-0 pages */
> > +       u32 i, j;
> >
> > -       for (i = 0; i < dma_map->dma_pages_cnt - 1; i++) {
> > -               if (dma_map->dma_pages[i] + PAGE_SIZE == dma_map->dma_pages[i + 1])
> > -                       dma_map->dma_pages[i] |= XSK_NEXT_PG_CONTIG_MASK;
> > -               else
> > -                       dma_map->dma_pages[i] &= ~XSK_NEXT_PG_CONTIG_MASK;
> > +       for (i = 0; i + stride < dma_map->dma_pages_cnt;) {
> > +               if (dma_map->dma_pages[i] + page_size == dma_map->dma_pages[i + stride]) {
> > +                       for (j = 0; j < stride; i++, j++)
> > +                               dma_map->dma_pages[i] |= XSK_NEXT_PG_CONTIG_MASK;
> > +               } else {
> > +                       for (j = 0; j < stride; i++, j++)
> > +                               dma_map->dma_pages[i] &= ~XSK_NEXT_PG_CONTIG_MASK;
> > +               }
>
> Still somewhat too conservative :-). If your page size is large, you
> will waste a lot of the umem. For the last page, mark all the 4K
> "pages" that cannot cross the end of the umem (due to the max size of
> a packet) with the XSK_NEXT_PG_CONTIG_MASK bit. So you only need to
> add one more for-loop here to mark this, and then adjust the last
> for-loop below so it only marks the last bunch of 4K pages at the end
> of the umem as not contiguous.

I don't understand the issue. The XSK_NEXT_PG_CONTIG_MASK bit is only
looked at if the descriptor actually crosses a page boundary. I don't
think the current implementation wastes any UMEM.
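
For context, the consumer-side check is roughly the following
(paraphrased from xsk_buff_pool.h from memory, so not verbatim):

/* The contiguity bit is only consulted when the descriptor actually
 * spills over a 4K page boundary; pages that are never crossed cost
 * nothing even if the bit is unset.
 */
static inline bool xp_desc_crosses_non_contig_pg(struct xsk_buff_pool *pool,
						 u64 addr, u32 len)
{
	bool cross_pg = (addr & (PAGE_SIZE - 1)) + len > PAGE_SIZE;

	if (likely(!cross_pg))
		return false;

	return pool->dma_pages_cnt &&
	       !(pool->dma_pages[addr >> PAGE_SHIFT] & XSK_NEXT_PG_CONTIG_MASK);
}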


