Re: [PATCH v1 11/15] io_uring/zcrx: implement zerocopy receive pp memory provider

On Thu, Oct 10, 2024 at 1:53 PM Mina Almasry <almasrymina@xxxxxxxxxx> wrote:
>
> > >>>>
> > >>>> Once the user is done with the buffer processing, it must return it back
> > >>>> via the refill queue, from where our ->alloc_netmems implementation can
> > >>>> grab it, check references, put IO_ZC_RX_UREF, and recycle the buffer if
> > >>>> there are no more users left. As we place such buffers right back into
> > >>>> the page pool's fast cache and they didn't go through the normal pp
> > >>>> release path, they are still considered "allocated" and no pp hold_cnt
> > >>>> is required.
> > >>>
> > >>> Why is this needed? In general the provider is to allocate free memory
> > >>
> > >> I don't get it, what is "this"? If it's the refill queue, that's
> > >> because I don't like actively returning buffers back via syscall /
> > >> setsockopt and trying to transfer them into the napi context (i.e.
> > >> napi_pp_put_page), hoping it works / is cached well.
> > >>
> > >> If "this" is IO_ZC_RX_UREF, it's because we need to track when a
> > >> buffer is given to userspace, and I don't think some kind of
> > >> map / xarray in the hot path is the best solution for performance.
> > >>
> > >
> > > Sorry I wasn't clear. By 'this' I'm referring to:
> > >
> > > "from where our ->alloc_netmems implementation can grab it, check
> > > references, put IO_ZC_RX_UREF, and recycle the buffer if there are no
> > > more users left"
> > >
> > > This is the part that I'm not able to stomach at the moment. Maybe if
> > > I look deeper it would make more sense, but my first feeling is that
> > > it's really not acceptable.
> > >
> > > alloc_netmems (and more generically page_pool_alloc_netmem) just
> > > allocates a netmem and gives it to the page_pool code to decide
> >
> > That's how it works because that's how devmem needs it and you
> > tailored it, not the other way around. It could've pretty well
> > been a callback that fills the cache as an intermediate, from
> > where the page pool can grab netmems and return them to the user,
> > and it would've been a pretty clean interface as well.
> >
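To make the contrast concrete, here is a rough sketch of the two hook
shapes being discussed. The names and types are stand-ins for
illustration, not the actual signatures from the series:

/* Stand-in declarations so the sketch is self-contained; not kernel types. */
struct page_pool;
typedef unsigned long netmem_ref;
typedef unsigned int gfp_t;

/*
 * Shape discussed above: the pp calls into the provider for one netmem
 * at a time (for zcrx, the provider would drain its refill queue here),
 * and recycling is driven from inside the pp.
 */
typedef netmem_ref (*alloc_netmems_hook_t)(struct page_pool *pool, gfp_t gfp);

/*
 * Alternative shape sketched above: the provider bulk-fills the pp's
 * fast cache as an intermediate, and the pp hands entries from that
 * cache out to the driver; returns how many entries were filled.
 */
typedef unsigned int (*fill_cache_hook_t)(struct page_pool *pool,
                                          netmem_ref *cache,
                                          unsigned int count);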
>
> It could have been, but that would be a much worse design IMO. The
> whole point of memory providers is that they provide memory to the
> page_pool, and the page_pool does its thing (among which is recycling)
> with that memory. In this patch you seem to have implemented a
> provider where, if the page is returned by io_uring, it's not
> returned to the page_pool but directly to the provider. In
> other code paths the memory will be returned to the page_pool.
>
> I.e. allocation is always:
> provider -> pp -> driver
>
> freeing from io_uring is:
> io_uring -> provider -> pp
>
> freeing from tcp stack or driver I'm guessing will be:
> tcp stack/driver -> pp -> provider
>
> I'm recommending that the model for memory providers be in line
> with what we do for pages, devmem TCP, and Jakub's out-of-tree huge
> page provider (i.e. everything else using the page_pool). The
> streamlined model is:
>
> allocation:
> provider -> pp -> driver
>
> freeing (always):
> tcp stack/io_uring/driver/whatever else -> pp -> driver
>

Should be:

> freeing (always):
> tcp stack/io_uring/driver/whatever else -> pp -> provider

I.e. the pp frees to the provider, not the driver.
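
To put that in code form, here is a tiny userspace toy model of the
recommended flow (made-up names, nothing from the series): allocation
goes provider -> pp -> driver, and every free, no matter who initiates
it, goes through the pp, which alone decides whether to recycle the
buffer in its cache or release it back to the provider.

/* pp_model.c: toy model of "alloc: provider -> pp -> driver,
 * free: anyone -> pp -> provider".  Not kernel code. */
#include <stdio.h>

#define PP_CACHE_SIZE 4

static int provider_outstanding;   /* buffers the provider has handed out */
static int next_id = 1;

static int provider_alloc(void)             /* provider -> pp */
{
        provider_outstanding++;
        return next_id++;                   /* a toy "netmem" */
}

static void provider_release(int netmem)    /* pp -> provider (only caller) */
{
        (void)netmem;
        provider_outstanding--;
}

static int pp_cache[PP_CACHE_SIZE], pp_cached;

static int pp_alloc(void)                   /* pp -> driver */
{
        return pp_cached ? pp_cache[--pp_cached] : provider_alloc();
}

static void pp_free(int netmem)             /* tcp stack / io_uring / driver -> pp */
{
        if (pp_cached < PP_CACHE_SIZE)
                pp_cache[pp_cached++] = netmem;   /* recycled inside the pp */
        else
                provider_release(netmem);         /* pp -> provider */
}

int main(void)
{
        int a = pp_alloc(), b = pp_alloc();

        pp_free(a);                         /* e.g. the tcp stack */
        pp_free(b);                         /* e.g. io_uring */
        printf("in pp cache: %d, still charged to provider: %d\n",
               pp_cached, provider_outstanding);
        return 0;
}

In this toy, a buffer recycled via the pp cache stays charged to the
provider, which is fine; the key property is that the provider only
ever gets its memory back through the pp.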

-- 
Thanks,
Mina




