On Mon, May 29, 2023 at 9:33 AM Yunsheng Lin <linyunsheng@xxxxxxxxxx> wrote:
>
> On 2023/5/26 13:46, Liang Chen wrote:
>
> ...
>
> > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > index 99c0ca0c1781..ac40b8c66c59 100644
> > --- a/drivers/net/virtio_net.c
> > +++ b/drivers/net/virtio_net.c
> > @@ -32,7 +32,9 @@ module_param(gso, bool, 0444);
> >  module_param(napi_tx, bool, 0644);
> >
> >  static bool page_pool_enabled;
> > +static bool page_pool_frag;
> >  module_param(page_pool_enabled, bool, 0400);
> > +module_param(page_pool_frag, bool, 0400);
>
> The patchset below unifies the frag and non-frag page handling for the
> page_pool_alloc_frag() API; perhaps it would simplify the driver's
> page pool support.
>
> https://patchwork.kernel.org/project/netdevbpf/cover/20230526092616.40355-1-linyunsheng@xxxxxxxxxx/
>

Thanks for the information and for the work on making driver support
easier. I will rebase accordingly after it lands.

>
> ...
>
> > @@ -1769,13 +1788,29 @@ static int add_recvbuf_mergeable(struct virtnet_info *vi,
> >  	 */
> >  	len = get_mergeable_buf_len(rq, &rq->mrg_avg_pkt_len, room);
> >  	if (rq->page_pool) {
> > -		struct page *page;
> > +		if (rq->page_pool->p.flags & PP_FLAG_PAGE_FRAG) {
> > +			if (unlikely(!page_pool_dev_alloc_frag(rq->page_pool,
> > +							       &pp_frag_offset, len + room)))
> > +				return -ENOMEM;
> > +			buf = (char *)page_address(rq->page_pool->frag_page) +
> > +				pp_frag_offset;
> > +			buf += headroom; /* advance address leaving hole at front of pkt */
> > +			hole = (PAGE_SIZE << rq->page_pool->p.order)
> > +				- rq->page_pool->frag_offset;
> > +			if (hole < len + room) {
> > +				if (!headroom)
> > +					len += hole;
> > +				rq->page_pool->frag_offset += hole;
>
> Is there any reason why the driver needs to be aware of page_pool->frag_offset?
> Won't page_pool_dev_alloc_frag() drain the last page for you when it is
> called with size 'len + room' later?
> One case I can think of that needs this is reporting an accurate truesize
> for the skb, but I am not sure it matters that much, as the
> 'struct page_frag_cache' and 'page_frag' implementations both have a
> similar problem.
>

Yeah, as you pointed out, page_pool_dev_alloc_frag() will drain the page
itself, and so does skb_page_frag_refill(). This is trying to keep the
logic consistent with the non-page-pool case, where the hole is skipped
and included in the buffer len.
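For reference, the non-page-pool path I am trying to stay consistent with
looks roughly like this (a paraphrased sketch of the existing
skb_page_frag_refill() branch in add_recvbuf_mergeable(), from memory
rather than a verbatim copy of the driver; the comment is mine):

	if (unlikely(!skb_page_frag_refill(len + room, alloc_frag, gfp)))
		return -ENOMEM;

	buf = (char *)page_address(alloc_frag->page) + alloc_frag->offset;
	buf += headroom; /* advance address leaving hole at front of pkt */
	get_page(alloc_frag->page);
	alloc_frag->offset += len + room;
	hole = alloc_frag->size - alloc_frag->offset;
	if (hole < len + room) {
		/* The leftover tail of the page is too small for another
		 * buffer, so fold it into this buffer rather than leaving
		 * it unused (except when headroom is in use, e.g. XDP).
		 */
		if (!headroom)
			len += hole;
		alloc_frag->offset += hole;
	}

The PP_FLAG_PAGE_FRAG branch in the patch mirrors this hole handling so
that the len accounting stays the same regardless of which allocator
backs the queue.
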
> > +			}
> > +		} else {
> > +			struct page *page;
> >
> > -		page = page_pool_dev_alloc_pages(rq->page_pool);
> > -		if (unlikely(!page))
> > -			return -ENOMEM;
> > -		buf = (char *)page_address(page);
> > -		buf += headroom; /* advance address leaving hole at front of pkt */
> > +			page = page_pool_dev_alloc_pages(rq->page_pool);
> > +			if (unlikely(!page))
> > +				return -ENOMEM;
> > +			buf = (char *)page_address(page);
> > +			buf += headroom; /* advance address leaving hole at front of pkt */
> > +		}
> >  	} else {
> >  		if (unlikely(!skb_page_frag_refill(len + room, alloc_frag, gfp)))
> >  			return -ENOMEM;
> > @@ -3800,13 +3835,16 @@ static void virtnet_alloc_page_pool(struct receive_queue *rq)
> >  	struct virtio_device *vdev = rq->vq->vdev;
> >
> >  	struct page_pool_params pp_params = {
> > -		.order = 0,
> > +		.order = page_pool_frag ? SKB_FRAG_PAGE_ORDER : 0,
> >  		.pool_size = rq->vq->num_max,
>
> If it is using order SKB_FRAG_PAGE_ORDER pages, perhaps pool_size does
> not have to be rq->vq->num_max? Even for order-0 pages, perhaps
> pool_size does not need to be as big as rq->vq->num_max?
>

Thanks for pointing this out! pool_size will be lowered to a more
appropriate value in v2.

> >  		.nid = dev_to_node(vdev->dev.parent),
> >  		.dev = vdev->dev.parent,
> >  		.offset = 0,
> >  	};
> >
> > +	if (page_pool_frag)
> > +		pp_params.flags |= PP_FLAG_PAGE_FRAG;
> > +
> >  	rq->page_pool = page_pool_create(&pp_params);
> >  	if (IS_ERR(rq->page_pool)) {
> >  		dev_warn(&vdev->dev, "page pool creation failed: %ld\n",

_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/virtualization