Re: [EXTERNAL] Re: [PATCH] virtio: vdpa: vDPA driver for Marvell OCTEON DPU devices

On Tue, Apr 16, 2024 at 11:17:48AM +0800, Jason Wang wrote:
> On Mon, Apr 15, 2024 at 8:42 PM Srujana Challa <schalla@xxxxxxxxxxx> wrote:
> >
> > > Subject: Re: [EXTERNAL] Re: [PATCH] virtio: vdpa: vDPA driver for Marvell
> > > OCTEON DPU devices
> > >
> > > On Fri, Apr 12, 2024 at 5:49 PM Srujana Challa <schalla@xxxxxxxxxxx> wrote:
> > > >
> > > > > Subject: Re: [EXTERNAL] Re: [PATCH] virtio: vdpa: vDPA driver for
> > > > > Marvell OCTEON DPU devices
> > > > >
> > > > > > > On Fri, Apr 12, 2024 at 1:13 PM Srujana Challa <schalla@xxxxxxxxxxx> wrote:
> > > > > >
> > > > > >
> > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: Jason Wang <jasowang@xxxxxxxxxx>
> > > > > > > Sent: Thursday, April 11, 2024 11:32 AM
> > > > > > > To: Srujana Challa <schalla@xxxxxxxxxxx>
> > > > > > > Cc: Michael S. Tsirkin <mst@xxxxxxxxxx>;
> > > > > > > virtualization@xxxxxxxxxxxxxxx; xuanzhuo@xxxxxxxxxxxxxxxxx;
> > > > > > > Vamsi Krishna Attunuru <vattunuru@xxxxxxxxxxx>; Shijith Thotton
> > > > > > > <sthotton@xxxxxxxxxxx>; Nithin Kumar Dabilpuram
> > > > > > > <ndabilpuram@xxxxxxxxxxx>; Jerin Jacob <jerinj@xxxxxxxxxxx>;
> > > > > > > eperezma <eperezma@xxxxxxxxxx>
> > > > > > > Subject: Re: [EXTERNAL] Re: [PATCH] virtio: vdpa: vDPA driver
> > > > > > > for Marvell OCTEON DPU devices
> > > > > > >
> > > > > > > On Wed, Apr 10, 2024 at 8:35 PM Srujana Challa <schalla@xxxxxxxxxxx> wrote:
> > > > > > > >
> > > > > > > > > Subject: Re: [EXTERNAL] Re: [PATCH] virtio: vdpa: vDPA
> > > > > > > > > driver for Marvell OCTEON DPU devices
> > > > > > > > >
> > > > > > > > > On Wed, Apr 10, 2024 at 10:15:37AM +0000, Srujana Challa wrote:
> > > > > > > > > > > > > > +
> > > > > > > > > > > > > > +       domain = iommu_get_domain_for_dev(dev);
> > > > > > > > > > > > > > +       if (!domain || domain->type == IOMMU_DOMAIN_IDENTITY) {
> > > > > > > > > > > > > > +               dev_info(dev, "NO-IOMMU\n");
> > > > > > > > > > > > > > +               octep_vdpa_ops.set_map = octep_vdpa_set_map;
> > > > > > > > > > > > >
> > > > > > > > > > > > > Is this a shortcut to get better performance?
> > > > > > > > > > > > > The DMA API should handle those gracefully, I think.
> > > > > > > > > > > > When the IOMMU is disabled on the host and
> > > > > > > > > > > > set_map/dma_map is not set, vhost-vdpa reports the
> > > > > > > > > > > > error "Failed to allocate domain, device is not IOMMU
> > > > > > > > > > > > cache coherent capable\n". Hence we are doing it this
> > > > > > > > > > > > way to get better performance.
> > > > > > > > > > >
> > > > > > > > > > > The problem is, assuming the device does not have any
> > > > > > > > > > > internal IOMMU:
> > > > > > > > > > >
> > > > > > > > > > > 1) If we allow it to run without an IOMMU, it opens a
> > > > > > > > > > > window for the guest to attack the host.
> > > > > > > > > > > 2) If you see a performance issue with
> > > > > > > > > > > IOMMU_DOMAIN_IDENTITY, let's report it to the DMA/IOMMU
> > > > > > > > > > > maintainer to fix it.
> > > > > > > > > > It will be helpful for the host networking case when the
> > > > > > > > > > IOMMU is disabled. Can we take the vfio-pci driver
> > > > > > > > > > approach as a reference, where the user explicitly sets
> > > > > > > > > > "enable_unsafe_noiommu_mode" using a module param?
> > > > > > > > >
> > > > > > > > > vfio is a userspace driver, so it's userspace's responsibility.
> > > > > > > > > What exactly ensures correctness here? Does the device have
> > > > > > > > > an on-chip IOMMU?
> > > > > > > > >
> > > > > > > > Our device features an on-chip IOMMU, although it is not
> > > > > > > > utilized for host-side targeted DMA operations. We included
> > > > > > > > no-IOMMU mode in our driver to ensure that host applications,
> > > > > > > > such as the DPDK virtio user PMD, continue to function even
> > > > > > > > when operating in no-IOMMU mode.
> > > > > > >
> > > > > > > I may be missing something, but set_map() is empty in this
> > > > > > > driver. How could such isolation be done?
> > > > > >
> > > > > > In the no-IOMMU case, there is no domain, and the user of
> > > > > > vhost-vdpa (the DPDK virtio user PMD) creates the mapping and
> > > > > > passes the PA (= IOVA) to the device directly, so that the device
> > > > > > can DMA directly to the PA.
> > > > >
> > > > > Yes, but this doesn't differ much from the case where the DMA API
> > > > > is used with the IOMMU disabled.
> > > > >
> > > > > Are you saying DMA API introduces overheads in this case?
> > > > No, actually. The current vhost-vdpa code does not allow
> > > > IOMMU-disabled mode if the set_map/dma_map op is not set. Hence, we
> > > > set set_map to a dummy op to allow IOMMU-disabled mode.
> > > >
> > > > Following is the code snippet from drivers/vhost/vdpa.c:
> > > >
> > > >         /* Device want to do DMA by itself */
> > > >         if (ops->set_map || ops->dma_map)
> > > >                 return 0;
> > > >
> > > >         bus = dma_dev->bus;
> > > >         if (!bus)
> > > >                 return -EFAULT;
> > > >
> > > >         if (!device_iommu_capable(dma_dev, IOMMU_CAP_CACHE_COHERENCY))
> > > >                 return -ENOTSUPP;
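> > > >
> > > > For reference, a minimal sketch of such a dummy op might look like
> > > > this (illustrative only; the actual octep_vdpa_set_map in the patch
> > > > may differ):
> > > >
> > > >         /* No-op set_map: its presence makes vhost-vdpa skip the
> > > >          * IOMMU capability check above. No translation is
> > > >          * programmed; userspace-supplied PAs are used as IOVAs
> > > >          * directly.
> > > >          */
> > > >         static int octep_vdpa_set_map(struct vdpa_device *vdev,
> > > >                                       unsigned int asid,
> > > >                                       struct vhost_iotlb *iotlb)
> > > >         {
> > > >                 return 0;
> > > >         }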
> > >
> > > Right, so here's the question.
> > >
> > > When the IOMMU is disabled, if there's no isolation from the device's
> > > on-chip IOMMU, it might have security implications. For example, if
> > > we're using PAs, userspace could attack the kernel.
> > >
> > > So there should be some logic in set_map() to program the on-chip
> > > IOMMU to isolate DMA in that case, but I don't see such an
> > > implementation in set_map().
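> > >
> > > For illustration, a set_map() that actually isolates DMA would walk
> > > the iotlb and program each mapping into the on-chip IOMMU, roughly
> > > along these lines (sketch only; octep_iommu_map() is a hypothetical
> > > helper, not something in the patch):
> > >
> > >         static int octep_vdpa_set_map(struct vdpa_device *vdev,
> > >                                       unsigned int asid,
> > >                                       struct vhost_iotlb *iotlb)
> > >         {
> > >                 struct vhost_iotlb_map *map;
> > >                 int ret;
> > >
> > >                 /* Program each IOVA -> PA mapping into the on-chip IOMMU */
> > >                 for (map = vhost_iotlb_itree_first(iotlb, 0, ULLONG_MAX);
> > >                      map;
> > >                      map = vhost_iotlb_itree_next(map, 0, ULLONG_MAX)) {
> > >                         ret = octep_iommu_map(vdev, map->start,
> > >                                               map->last - map->start + 1,
> > >                                               map->addr, map->perm);
> > >                         if (ret)
> > >                                 return ret;
> > >                 }
> > >                 return 0;
> > >         }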
> >
> > Our chip lacks support for an on-chip IOMMU for host-side targeted DMA
> > operations. When using the DPDK virtio user PMD, we've noticed a
> > significant 80% performance improvement when the IOMMU is disabled on
> > specific x86 machines. This performance improvement can be leveraged by
> > embedded platforms where applications run in a controlled environment.
> > Maybe it's a trade-off between security and performance.
> >
> > We can disable the no-IOMMU support by default, enable it through a
> > module parameter, and taint the kernel, similar to the VFIO driver
> > (enable_unsafe_noiommu_mode), right?
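> >
> > A rough sketch of what that might look like (hypothetical parameter
> > name, modeled on vfio's enable_unsafe_noiommu_mode; illustrative only):
> >
> >         static bool noiommu;
> >         module_param(noiommu, bool, 0444);
> >         MODULE_PARM_DESC(noiommu,
> >                          "Enable unsafe no-IOMMU mode (no DMA isolation)");
> >
> >         ...
> >                 domain = iommu_get_domain_for_dev(dev);
> >                 if (!domain || domain->type == IOMMU_DOMAIN_IDENTITY) {
> >                         if (!noiommu)
> >                                 return -EPERM;
> >                         /* Taint the kernel, as vfio does for no-IOMMU use */
> >                         add_taint(TAINT_USER, LOCKDEP_STILL_OK);
> >                         octep_vdpa_ops.set_map = octep_vdpa_set_map;
> >                 }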
> 
> Could be one way.
> 
> Michael, any thoughts on this?
> 
> Thanks

My thought is that there's nothing special about the Marvell chip here.
Merge it normally. Then, if you like, work on a no-IOMMU mode in vDPA.


> > >
> > > >
> > > > The performance degradation when the IOMMU is enabled is not in the
> > > > DMA API but in the x86 HW IOMMU translation performance on certain
> > > > low-end x86 machines.
> > >
> > > This might be true, but it's not specific to vDPA, I think?
> > >
> > > Thanks
> > >
> > > >
> > > > >
> > > > > Thanks
> > > > >
> > > > > >
> > > > > > >
> > > > > > > > We observed performance impacts on certain low-end x86
> > > > > > > > machines when IOMMU mode was enabled.
> > > > > > > > I think correctness is the host userspace application's
> > > > > > > > responsibility in this case, when vhost-vdpa is used with a
> > > > > > > > host application such as the DPDK virtio user PMD.
> > > > > > >
> > > > > > > Thanks
> > > > > > >
> > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > Thanks
> > > > > > > > > >
> > > > > > > > > > Thanks.
> > > > > > > > > >
> > > > > > > >
> > > > > >
> > > >
> >




