On Tue, 18 Jun 2024 08:57:52 +0800, Jason Wang <jasowang@xxxxxxxxxx> wrote:
> On Mon, Jun 17, 2024 at 3:39 PM Xuan Zhuo <xuanzhuo@xxxxxxxxxxxxxxxxx> wrote:
> >
> > On Mon, 17 Jun 2024 13:00:13 +0800, Jason Wang <jasowang@xxxxxxxxxx> wrote:
> > > On Fri, Jun 14, 2024 at 2:39 PM Xuan Zhuo <xuanzhuo@xxxxxxxxxxxxxxxxx> wrote:
> > > >
> > > > If xsk is enabled, xsk tx will share the send queue.
> > > > But xsk requires that the send queue use premapped mode.
> > > > So the send queue must support premapped mode when it is bound to
> > > > af-xdp.
> > > >
> > > > * virtnet_sq_set_premapped(sq, true) is used to enable premapped mode.
> > > >
> > > > In this mode, the driver records the dma info when an skb or xdp
> > > > frame is sent.
> > > >
> > > > Currently, the SQ premapped mode is operational only with af-xdp. In
> > > > this mode, af-xdp, the kernel stack, and xdp tx/redirect share
> > > > the same SQ. Af-xdp independently manages its DMA. The kernel stack
> > > > and xdp tx/redirect use this DMA metadata to manage the DMA
> > > > info.
> > > >
> > > > If the indirect descriptor feature is supported, the volume of DMA
> > > > details we need to maintain becomes quite substantial. So here we put
> > > > a cap on the amount of DMA info we manage.
> > > >
> > > > If the kernel stack and xdp tx/redirect attempt to use more
> > > > descriptors, virtnet_add_outbuf() will return an -ENOMEM error. But
> > > > af-xdp can continue to work.
> > >
> > > Rethinking this whole logic, it looks like all the complication came
> > > from deciding to go with a per-queue premapping flag. I wonder if
> > > things could be simplified if we did that per buffer?
> >
> > YES. That would be simpler.
> >
> > Then this patch will not be needed. The virtio core must record the premapped
> > info in the virtio ring state or extra.
> >
> > http://lore.kernel.org/all/20230517022249.20790-6-xuanzhuo@xxxxxxxxxxxxxxxxx
>
> Yes, something like this. I think it's worthwhile to re-consider that
> approach. If my memory is correct, we hadn't spotted complicated
> issues there like the ones this patch needs to deal with.
>
> > > Then we don't need complex logic like dmainfo and the cap.
> >
> > So the premapped mode and the internal dma mode can coexist.
> > Then we do not need to make the sq support the premapped mode.
>
> Probably.
>
> > > >
> > > > * virtnet_sq_set_premapped(sq, false) is used to disable premapped mode.
> > > >
> > > > Signed-off-by: Xuan Zhuo <xuanzhuo@xxxxxxxxxxxxxxxxx>
> > > > ---
> > > >  drivers/net/virtio_net.c | 228 ++++++++++++++++++++++++++++++++++++++-
> > > >  1 file changed, 224 insertions(+), 4 deletions(-)
> > > >
> > > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > > index e84a4624549b..88ab9ea1646f 100644
> > > > --- a/drivers/net/virtio_net.c
> > > > +++ b/drivers/net/virtio_net.c
> > > > @@ -25,6 +25,7 @@
> > > >  #include <net/net_failover.h>
> > > >  #include <net/netdev_rx_queue.h>
> > > >  #include <net/netdev_queues.h>
> > > > +#include <uapi/linux/virtio_ring.h>
> > >
> > > Why do we need this?
> >
> > For using VIRTIO_RING_F_INDIRECT_DESC.
>
> Ok. It's probably a hint that something like a layer violation is happening.
> A specific driver should not know details about the ring layout ...

But the blk device did the same thing.
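For reference, what virtio-blk does is only a feature-bit check through
virtio_has_feature(). A minimal sketch of the same pattern applied to the
dmainfo sizing here: virtnet_sq_dma_budget() and VIRTNET_SQ_DMA_MAX are
hypothetical names for illustration, while virtio_has_feature() and
VIRTIO_RING_F_INDIRECT_DESC are the existing interfaces:

#include <linux/skbuff.h>
#include <linux/virtio.h>
#include <linux/virtio_config.h>
#include <uapi/linux/virtio_ring.h>

/* Hypothetical worst case per buffer: virtio header + MAX_SKB_FRAGS frags. */
#define VIRTNET_SQ_DMA_MAX      (2 + MAX_SKB_FRAGS)

static u32 virtnet_sq_dma_budget(struct virtio_device *vdev, u32 ring_size)
{
        /* With indirect descriptors, one ring entry can expand into a
         * whole indirect table, so far more DMA info must be tracked.
         */
        if (virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC))
                return ring_size * VIRTNET_SQ_DMA_MAX;

        /* Otherwise at most one DMA info per ring descriptor is needed. */
        return ring_size;
}

This is also why the uapi header ends up included: the feature bit is defined
in uapi/linux/virtio_ring.h rather than exposed through a core helper.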
> > > >
> > > >  static int napi_weight = NAPI_POLL_WEIGHT;
> > > >  module_param(napi_weight, int, 0444);
> > > > @@ -276,6 +277,26 @@ struct virtnet_rq_dma {
> > > >          u16 need_sync;
> > > >  };
> > > >
> > > > +struct virtnet_sq_dma {
> > > > +        union {
> > > > +                struct llist_node node;
> > > > +                struct llist_head head;
> > >
> > > If we want to cap the #dmas, could we simply use an array instead of
> > > the list here?
> > >
> > > > +                void *data;
> > > > +        };
> > > > +        dma_addr_t addr;
> > > > +        u32 len;
> > > > +        u8 num;
> > > > +};
> > > > +
> > > > +struct virtnet_sq_dma_info {
> > > > +        /* record for kfree */
> > > > +        void *p;
> > > > +
> > > > +        u32 free_num;
> > > > +
> > > > +        struct llist_head free;
> > > > +};
> > > > +
> > > >  /* Internal representation of a send virtqueue */
> > > >  struct send_queue {
> > > >          /* Virtqueue associated with this send _queue */
> > > > @@ -295,6 +316,11 @@ struct send_queue {
> > > >
> > > >          /* Record whether sq is in reset state. */
> > > >          bool reset;
> > > > +
> > > > +        /* SQ is premapped mode or not. */
> > > > +        bool premapped;
> > > > +
> > > > +        struct virtnet_sq_dma_info dmainfo;
> > > >  };
> > > >
> > > >  /* Internal representation of a receive virtqueue */
> > > > @@ -492,9 +518,11 @@ static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf);
> > > >  enum virtnet_xmit_type {
> > > >          VIRTNET_XMIT_TYPE_SKB,
> > > >          VIRTNET_XMIT_TYPE_XDP,
> > > > +        VIRTNET_XMIT_TYPE_DMA,
> > >
> > > I think the name is confusing, how about TYPE_PREMAPPED?
> > >
> > > >  };
> > > >
> > > > -#define VIRTNET_XMIT_TYPE_MASK (VIRTNET_XMIT_TYPE_SKB | VIRTNET_XMIT_TYPE_XDP)
> > > > +#define VIRTNET_XMIT_TYPE_MASK (VIRTNET_XMIT_TYPE_SKB | VIRTNET_XMIT_TYPE_XDP \
> > > > +                                | VIRTNET_XMIT_TYPE_DMA)
> > > >
> > > >  static enum virtnet_xmit_type virtnet_xmit_ptr_strip(void **ptr)
> > > >  {
> > > > @@ -510,12 +538,180 @@ static void *virtnet_xmit_ptr_mix(void *ptr, enum virtnet_xmit_type type)
> > > >          return (void *)((unsigned long)ptr | type);
> > > >  }
> > > >
> > > > +static void virtnet_sq_unmap(struct send_queue *sq, void **data)
> > > > +{
> > > > +        struct virtnet_sq_dma *head, *tail, *p;
> > > > +        int i;
> > > > +
> > > > +        head = *data;
> > > > +
> > > > +        p = head;
> > > > +
> > > > +        for (i = 0; i < head->num; ++i) {
> > > > +                virtqueue_dma_unmap_page_attrs(sq->vq, p->addr, p->len,
> > > > +                                               DMA_TO_DEVICE, 0);
> > > > +                tail = p;
> > > > +                p = llist_entry(llist_next(&p->node), struct virtnet_sq_dma, node);
> > > > +        }
> > > > +
> > > > +        *data = tail->data;
> > > > +
> > > > +        __llist_add_batch(&head->node, &tail->node, &sq->dmainfo.free);
> > > > +
> > > > +        sq->dmainfo.free_num += head->num;
> > > > +}
> > > > +
> > > > +static void *virtnet_dma_chain_update(struct send_queue *sq,
> > > > +                                      struct virtnet_sq_dma *head,
> > > > +                                      struct virtnet_sq_dma *tail,
> > > > +                                      u8 num, void *data)
> > > > +{
> > > > +        sq->dmainfo.free_num -= num;
> > > > +        head->num = num;
> > > > +
> > > > +        tail->data = data;
> > > > +
> > > > +        return virtnet_xmit_ptr_mix(head, VIRTNET_XMIT_TYPE_DMA);
> > > > +}
> > > > +
> > > > +static struct virtnet_sq_dma *virtnet_sq_map_sg(struct send_queue *sq, int num, void *data)
> > > > +{
> > > > +        struct virtnet_sq_dma *head = NULL, *p = NULL;
> > > > +        struct scatterlist *sg;
> > > > +        dma_addr_t addr;
> > > > +        int i, err;
> > > > +
> > > > +        if (num > sq->dmainfo.free_num)
> > > > +                return NULL;
> > > > +
> > > > +        for (i = 0; i < num; ++i) {
> > > > +                sg = &sq->sg[i];
> > > > +
> > > > +                addr = virtqueue_dma_map_page_attrs(sq->vq, sg_page(sg),
> > > > +                                                     sg->offset,
> > > > +                                                     sg->length, DMA_TO_DEVICE,
> > > > +                                                     0);
> > > > +                err = virtqueue_dma_mapping_error(sq->vq, addr);
> > > > +                if (err)
> > > > +                        goto err;
> > > > +
> > > > +                sg->dma_address = addr;
> > > > +
> > > > +                p = llist_entry(llist_del_first(&sq->dmainfo.free),
> > > > +                                struct virtnet_sq_dma, node);
> > > > +
> > > > +                p->addr = sg->dma_address;
> > > > +                p->len = sg->length;
> > >
> > > I may be missing something, but I don't see how we cap the total number of dmainfos.
> >
> > static void *virtnet_dma_chain_update(struct send_queue *sq,
> >                                       struct virtnet_sq_dma *head,
> >                                       struct virtnet_sq_dma *tail,
> >                                       u8 num, void *data)
> > {
> >         sq->dmainfo.free_num -= num;
> > ->      head->num = num;
> >
> >         tail->data = data;
> >
> >         return virtnet_xmit_ptr_mix(head, VIRTNET_XMIT_TYPE_DMA);
> > }
>
> Ok, spoke too fast. I guess it should be more like:
>
>         if (num > sq->dmainfo.free_num)
>                 return NULL;

static struct virtnet_sq_dma *virtnet_sq_map_sg(struct send_queue *sq, int num, void *data)
{
        struct virtnet_sq_dma *head = NULL, *p = NULL;
        struct scatterlist *sg;
        dma_addr_t addr;
        int i, err;

        if (num > sq->dmainfo.free_num)
                return NULL;

Do you mean this?

Thanks.

>
> Thanks
>
> >
> > Thanks.
> >
> > > Thanks
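For completeness, a condensed sketch of the capping flow being discussed,
assuming the struct definitions from the patch above; virtnet_sq_dma_get()
and virtnet_sq_dma_put() are illustrative names, not functions from the
patch:

#include <linux/llist.h>

/* Pop num entries from the capped free list, as virtnet_sq_map_sg() does.
 * Returning NULL lets the caller fail with -ENOMEM while af-xdp, which
 * tracks its own DMA, keeps working.
 */
static struct virtnet_sq_dma *virtnet_sq_dma_get(struct send_queue *sq, int num)
{
        struct virtnet_sq_dma *head = NULL, *prev = NULL, *p;
        int i;

        /* The cap: refuse any chain longer than the remaining budget. */
        if (num > sq->dmainfo.free_num)
                return NULL;

        for (i = 0; i < num; ++i) {
                p = llist_entry(llist_del_first(&sq->dmainfo.free),
                                struct virtnet_sq_dma, node);
                if (prev)
                        prev->node.next = &p->node;
                else
                        head = p;
                prev = p;
        }

        sq->dmainfo.free_num -= num;
        return head;
}

/* Return a whole chain with one batched push, mirroring virtnet_sq_unmap(),
 * and restore the budget so later submissions can reuse the entries.
 */
static void virtnet_sq_dma_put(struct send_queue *sq,
                               struct virtnet_sq_dma *head,
                               struct virtnet_sq_dma *tail, u8 num)
{
        __llist_add_batch(&head->node, &tail->node, &sq->dmainfo.free);
        sq->dmainfo.free_num += num;
}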