Re: [PATCH 2/6] virtio: split: alloc indirect desc with extra

On Tue, 11 Jan 2022 10:44:52 +0800, Jason Wang <jasowang@xxxxxxxxxx> wrote:
> On Mon, Jan 10, 2022 at 5:29 PM Xuan Zhuo <xuanzhuo@xxxxxxxxxxxxxxxxx> wrote:
> >
> > On Mon, 10 Jan 2022 16:54:58 +0800, Jason Wang <jasowang@xxxxxxxxxx> wrote:
> > > On Mon, Jan 10, 2022 at 3:59 PM Xuan Zhuo <xuanzhuo@xxxxxxxxxxxxxxxxx> wrote:
> > > >
> > > > On Mon, 10 Jan 2022 15:41:27 +0800, Jason Wang <jasowang@xxxxxxxxxx> wrote:
> > > > > On Mon, Jan 10, 2022 at 3:24 PM Xuan Zhuo <xuanzhuo@xxxxxxxxxxxxxxxxx> wrote:
> > > > > >
> > > > > > On Mon, 10 Jan 2022 14:43:39 +0800, Jason Wang <jasowang@xxxxxxxxxx> wrote:
> > > > > > >
> > > > > > > On 2022/1/7 2:33 PM, Xuan Zhuo wrote:
> > > > > > > > When indirect is not used, each desc has a corresponding extra that
> > > > > > > > records information such as the dma address, flags, and next.
> > > > > > > >
> > > > > > > > When indirect is used, the allocated descs have no corresponding extra
> > > > > > > > to record the dma information, so the dma information has to be read
> > > > > > > > back from the desc itself when unmapping.
> > > > > > > >
> > > > > > > > This patch allocates a corresponding extra array whenever an indirect
> > > > > > > > desc table is allocated. This has the following advantages:
> > > > > > > > 1. The dma information of each desc is recorded, so there is no need
> > > > > > > >    to read the desc back when unmapping.
> > > > > > > > 2. Processing becomes more convenient and unified.
> > > > > > > > 3. Additional information can be recorded in extra, which will be
> > > > > > > >    used in subsequent patches.
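
For reference, here is a rough sketch of the idea described above. It is not
the actual patch, just an illustration: the struct mirrors the existing
vring_desc_extra layout in virtio_ring.c, and alloc_indirect_with_extra() is
a made-up name.

#include <linux/slab.h>
#include <linux/virtio_ring.h>

struct vring_desc_extra {
        dma_addr_t addr;        /* DMA address recorded at map time */
        u32 len;                /* mapped length */
        u16 flags;              /* desc flags, with room for extra state later */
        u16 next;               /* chain / free-list next */
};

static struct vring_desc *alloc_indirect_with_extra(unsigned int n, gfp_t gfp,
                                                    struct vring_desc_extra **extra)
{
        struct vring_desc *desc;

        desc = kmalloc_array(n, sizeof(*desc), gfp);
        if (!desc)
                return NULL;

        /* Allocate the extra array alongside the indirect table so that
         * unmap can read the DMA info from here instead of from the desc.
         */
        *extra = kmalloc_array(n, sizeof(**extra), gfp);
        if (!*extra) {
                kfree(desc);
                return NULL;
        }

        return desc;
}
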
> > > > > > >
> > > > > > >
> > > > > > > Two questions:
> > > > > > >
> > > > > > > 1) Are there any performance numbers for this change? I guess it puts
> > > > > > > more stress on the cache.
> > > > > >
> > > > > > I will add performance test data in the next version.
> > > > > >
> > > > > > > 2) Is there a requirement to mix pre-mapped sgs with unmapped sgs? If
> > > > > > > not, a per-virtqueue flag looks sufficient
> > > > > >
> > > > > > There is such a requirement. For example, in the case of AF_XDP, a packet
> > > > > > consists of two parts: one is the virtio_net_hdr, and the other is the
> > > > > > actual data from AF_XDP. The former is an unmapped sg, the latter a
> > > > > > pre-mapped sg.
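
To make that concrete, a rough illustration (the struct and function names are
made up; only xsk_buff_raw_get_dma() is an existing AF_XDP helper): the header
is a plain CPU address that the virtio core would still have to DMA-map, while
the data address is already a DMA address taken from the pool.

#include <linux/virtio_net.h>
#include <net/xdp_sock_drv.h>

struct xsk_tx_parts {
        struct virtio_net_hdr *hdr;     /* unmapped: still needs a DMA map at add time */
        dma_addr_t data;                /* pre-mapped: must not be mapped again */
        u32 len;
};

static void xsk_fill_tx_parts(struct xsk_buff_pool *pool,
                              const struct xdp_desc *desc,
                              struct virtio_net_hdr *hdr,
                              struct xsk_tx_parts *parts)
{
        parts->hdr = hdr;
        parts->data = xsk_buff_raw_get_dma(pool, desc->addr);
        parts->len = desc->len;
}
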
> > > > >
> > > > > Any chance to map the virtio_net_hdr manually in the AF_XDP routine in this case?
> > > >
> > > > Well, it is indeed possible to do so. In the indirect scenario, we could
> > > > record it in vring->split.desc_extra[head].flags.
> > > >
> > > > But then we have to agree that there can be no mixed situation.
> > >
> > > I think it would be easier, and there would be less risk of performance
> > > regression, if we don't make huge changes in the core unless it's a must.
> > >
> >
> > OK, I plan to add two new interfaces, virtqueue_add_outbuf_flag() and
> > virtqueue_add_inbuf_flag(), which pass a flag parameter down to
> > virtqueue_add() to mark that the sg addresses are pre-mapped (predma).
> >
> > I don't want to use sg->dma_address for this, because then we would have to
> > check each sg for whether dma_address is set, and if the sgs are not uniform
> > we would also have to handle that as an exception.
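
A sketch of what those entry points might look like. The names come from the
plan above, but the exact signatures are only a guess, mirroring
virtqueue_add_outbuf()/virtqueue_add_inbuf() with an extra "premapped" flag:

#include <linux/virtio.h>

/* Like virtqueue_add_outbuf(), but "premapped" would tell the core that the
 * sg addresses are already DMA addresses and must not be mapped again.
 */
int virtqueue_add_outbuf_flag(struct virtqueue *vq,
                              struct scatterlist *sg, unsigned int num,
                              void *data, bool premapped, gfp_t gfp);

/* The in-direction counterpart. */
int virtqueue_add_inbuf_flag(struct virtqueue *vq,
                             struct scatterlist *sg, unsigned int num,
                             void *data, bool premapped, gfp_t gfp);
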
> >
> > > Btw, I forgot the conclusion of the last AF_XDP series. Why is it
> > > better to change virtio_ring instead of AF_XDP (which seems easier)?
> >
> > Regarding this question, I guess you mean making AF_XDP not use DMA
> > addresses, and instead pass virtual addresses to virtio?
> >
> > It would certainly be simpler, but I think there is a performance gain in doing
> > the DMA mapping ahead of time.
>
> Any reason for this? Is it just because AF_XDP did batch mapping?
>

AF_XDP completes all of its DMA mappings during initialization, while virtio
currently performs a DMA map every time virtqueue_add() is called.

If we support DMA mapping in advance, that per-virtqueue_add() mapping
overhead goes away.
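
For reference, a rough contrast of the two models. map_per_add() and
map_once_at_setup() are made-up wrappers; dma_map_page() and
xsk_pool_dma_map() are the real APIs involved.

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>
#include <net/xdp_sock_drv.h>

/* What the virtio core effectively does today for every sg on every
 * virtqueue_add():
 */
static dma_addr_t map_per_add(struct device *dev, struct scatterlist *sg,
                              enum dma_data_direction dir)
{
        return dma_map_page(dev, sg_page(sg), sg->offset, sg->length, dir);
}

/* What AF_XDP does instead: map the whole umem once when the pool is set
 * up, so later TX/RX only needs an address lookup, not a new mapping.
 */
static int map_once_at_setup(struct device *dev, struct xsk_buff_pool *pool)
{
        return xsk_pool_dma_map(pool, dev, 0);
}
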

> Actually there's another subtle difference: AF_XDP tends to use
> DMA_BIDIRECTIONAL, but the virtio_ring mapping direction depends on in/out.

Yes.

Thanks.

>
> Thanks
>
> >
> > Thanks.
> >
> >
> > >
> > > Thanks
> > >
> > > >
> > > > Thanks.
> > > >
> > > > >
> > > > > Thanks
> > > > >
> > > > > >
> > > > > > Thanks.
> > > > > >
> > > > > > >
> > > > > > > Thanks
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/virtualization



