> On Feb 5, 2021, at 4:13 PM, J. Bruce Fields <bfields@xxxxxxxxxxxx> wrote:
> 
> On Fri, Feb 05, 2021 at 08:20:28PM +0000, Chuck Lever wrote:
>> Baby steps.
>> 
>> Because I'm perverse I started with bulk page freeing. In the course
>> of trying to invent a new API, I discovered there already is a batch
>> free_page() function called release_pages().
>> 
>> It seems to work as advertised for pages that are truly no longer
>> in use (ie RPC/RDMA pages) but not for pages that are still in flight
>> but released (ie TCP pages).
>> 
>> release_pages() chains the pages in the passed-in array onto a list
>> by their page->lru fields. This seems to be a problem if a page
>> is still in use.
> 
> I thought I remembered reading an lwn article about bulk page
> allocation. Looking around now all I can see is
> 
> https://lwn.net/Articles/684616/
> https://lwn.net/Articles/711075/
> 
> and I can't tell if any of that work was ever finished.

Jesper is the engineer at Red Hat I alluded to earlier, and this is
similar to what I discussed with him. I didn't see anything like
alloc_pages_bulk() when sniffing around in v5.11-rc, but my search
was not exhaustive.

I think freeing pages, for NFSD, is complicated by the need to release
pages that are still in use (either by the page cache or by the network
layer). The page-freeing logic in RPC/RDMA frees pages that are not in
use by anyone, and I have a couple of clear approaches that eliminate
the need for that logic completely.

Therefore, IMO concentrating on making svc_alloc_arg() more efficient
should provide the biggest bang for both socket and RDMA transports.

--
Chuck Lever
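
P.S. Two untested sketches to make the above concrete. First the
batch-free side: the helper name below is made up, and the page->lru
caveat in the comment is just the observation from the quoted text
restated.

#include <linux/mm.h>
#include <linux/pagemap.h>	/* release_pages() */
#include <linux/string.h>

/*
 * Free a batch of pages that the transport owns exclusively.
 * release_pages() threads pages whose refcount has dropped onto a
 * list through their page->lru fields, so this is safe only when
 * no one else (page cache, socket layer) still holds a reference:
 * the RPC/RDMA case, not the in-flight TCP case.
 */
static void rpcrdma_free_page_batch(struct page **pages, int nr)
{
	release_pages(pages, nr);
	memset(pages, 0, nr * sizeof(*pages));
}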
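
Second, the allocation side. alloc_pages_bulk() here is hypothetical
(as noted above, nothing by that name exists in v5.11-rc); assume it
fills the NULL slots of the array and returns how many slots are
populated afterward.

#include <linux/gfp.h>
#include <linux/sunrpc/svc.h>

/*
 * Hypothetical API, not in v5.11-rc: fill the NULL entries of
 * @pages with up to @nr freshly allocated pages and return the
 * number of entries populated afterward.
 */
unsigned long alloc_pages_bulk(gfp_t gfp, unsigned long nr,
			       struct page **pages);

/*
 * Sketch of how svc_alloc_arg() might refill rqstp->rq_pages in a
 * single call instead of looping over alloc_page().
 */
static int svc_refill_rq_pages(struct svc_rqst *rqstp, unsigned long pages)
{
	if (alloc_pages_bulk(GFP_KERNEL, pages, rqstp->rq_pages) < pages)
		return -ENOMEM;	/* caller can sleep and retry, as today */
	return 0;
}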