Re: [PATCH v2 4/4] vfio/pci: Allow MMIO regions to be exported through dma-buf

On Tue, Sep 6, 2022 at 2:48 PM Jason Gunthorpe <jgg@xxxxxxxxxx> wrote:
>
> On Tue, Sep 06, 2022 at 12:38:44PM +0200, Christian König wrote:
> > On 06.09.22 at 11:51, Christoph Hellwig wrote:
> > > > +{
> > > > + struct vfio_pci_dma_buf *priv = dmabuf->priv;
> > > > + int rc;
> > > > +
> > > > + rc = pci_p2pdma_distance_many(priv->vdev->pdev, &attachment->dev, 1,
> > > > +                               true);
> > > This should just use pci_p2pdma_distance.
>
> OK
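
FWIW, with the single-client helper that would look roughly like the
sketch below. The wrapper name vfio_pci_dma_buf_attach is only a guess
for illustration; priv->vdev->pdev is taken from the quoted hunk:

#include <linux/dma-buf.h>
#include <linux/pci-p2pdma.h>

static int vfio_pci_dma_buf_attach(struct dma_buf *dmabuf,
                                   struct dma_buf_attachment *attachment)
{
        struct vfio_pci_dma_buf *priv = dmabuf->priv; /* from the patch */
        int rc;

        /*
         * pci_p2pdma_distance() is the one-client wrapper around
         * pci_p2pdma_distance_many(); a negative return means there is
         * no usable P2P path between the exporter and the importer.
         */
        rc = pci_p2pdma_distance(priv->vdev->pdev, attachment->dev, true);
        if (rc < 0)
                return rc;

        return 0;
}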
>
> > > > + /*
> > > > +  * Since the memory being mapped is a device memory it could never be in
> > > > +  * CPU caches.
> > > > +  */
> > > DMA_ATTR_SKIP_CPU_SYNC doesn't even apply to dma_map_resource, not sure
> > > where this wisdom comes from.
>
> Habana driver
I hate to throw the ball at someone else, but I actually copied the
code from the amdgpu driver, from amdgpu_vram_mgr_alloc_sgt() iirc.
And if you remember, Jason, you asked in your original review why we
use this specific define, and I replied the following (to which you
agreed, which is why we added the comment):

"The memory behind this specific dma-buf has *always* resided on the
device itself, i.e. it lives only in the 'device' domain (after all,
it maps a PCI bar address which points to the device memory).
Therefore, it was never in the 'CPU' domain and hence, there is no
need to perform a sync of the memory to the CPU's cache, as it was
never inside that cache to begin with.

This is not the same case as with regular memory which is dma-mapped
and then copied into the device using a dma engine. In that case,
the memory started in the 'CPU' domain and moved to the 'device'
domain. When it is unmapped it will indeed be recycled to be used
for another purpose and therefore we need to sync the CPU cache."
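
To spell the two cases out in toy code (the function and parameter
names below are made up, error handling is omitted, and this is only
an illustration, not anything from the patch):

#include <linux/dma-mapping.h>

static void cache_domain_example(struct device *dev,
                                 phys_addr_t bar_phys, size_t bar_len,
                                 void *cpu_buf, size_t buf_len)
{
        /* BAR/MMIO space: it only ever lived in the 'device' domain,
         * so there is no CPU cache state to maintain for it. */
        dma_addr_t bar_dma = dma_map_resource(dev, bar_phys, bar_len,
                                              DMA_BIDIRECTIONAL, 0);

        /* Regular kernel memory: starts in the 'CPU' domain, so the
         * streaming DMA API has to manage cache coherency around the
         * device's accesses. */
        dma_addr_t buf_dma = dma_map_single(dev, cpu_buf, buf_len,
                                            DMA_FROM_DEVICE);

        /* ... device DMA into both mappings happens here ... */

        /* Hand the buffer back to the 'CPU' domain before reading it. */
        dma_sync_single_for_cpu(dev, buf_dma, buf_len, DMA_FROM_DEVICE);

        dma_unmap_single(dev, buf_dma, buf_len, DMA_FROM_DEVICE);
        dma_unmap_resource(dev, bar_dma, bar_len, DMA_BIDIRECTIONAL, 0);
}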

Oded
>
> > > > + dma_addr = dma_map_resource(
> > > > +         attachment->dev,
> > > > +         pci_resource_start(priv->vdev->pdev, priv->index) +
> > > > +                 priv->offset,
> > > > +         priv->dmabuf->size, dir, DMA_ATTR_SKIP_CPU_SYNC);
> > > This is not how P2P addresses are mapped.  You need to use
> > > dma_map_sgtable and have the proper pgmap for it.
> >
> > The problem is once more that this is MMIO space, in other words,
> > register BARs which need to be exported/imported.
> >
> > Adding struct pages for it generally sounds like the wrong approach here.
> > You can't even access this with the CPU, or you would trigger
> > potentially unwanted hardware actions.
>
> Right, this whole thing is the "standard" that dmabuf has adopted
> instead of the struct pages. Once the AMD GPU driver started doing
> this some time ago, other drivers followed.
>
> Now we have struct pages, almost, but I'm not sure if their limits are
> compatible with VFIO? This has to work for small bars as well.
>
> Jason
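
For reference, the pattern Jason describes (roughly what
amdgpu_vram_mgr_alloc_sgt() does, and which Oded says the patch was
copied from) boils down to the sketch below; the function name,
parameters and error labels are made up for illustration:

#include <linux/dma-mapping.h>
#include <linux/err.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

/*
 * Map a BAR range with dma_map_resource() and return an sg_table whose
 * DMA fields carry the result, with no struct pages behind it.
 */
static struct sg_table *map_mmio_range(struct device *importer,
                                       phys_addr_t phys, size_t size,
                                       enum dma_data_direction dir)
{
        struct sg_table *sgt;
        dma_addr_t dma_addr;
        int rc;

        sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
        if (!sgt)
                return ERR_PTR(-ENOMEM);

        rc = sg_alloc_table(sgt, 1, GFP_KERNEL);
        if (rc)
                goto err_free;

        dma_addr = dma_map_resource(importer, phys, size, dir, 0);
        rc = dma_mapping_error(importer, dma_addr);
        if (rc)
                goto err_free_table;

        /* Only the DMA side of the entry is filled in; no page backing. */
        sg_dma_address(sgt->sgl) = dma_addr;
        sg_dma_len(sgt->sgl) = size;

        return sgt;

err_free_table:
        sg_free_table(sgt);
err_free:
        kfree(sgt);
        return ERR_PTR(rc);
}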



