Re: [Linaro-mm-sig] [PATCH v3 1/2] habanalabs: define uAPI to export FD for DMA-BUF

On 22.06.21 at 14:01, Jason Gunthorpe wrote:
On Tue, Jun 22, 2021 at 11:42:27AM +0300, Oded Gabbay wrote:
On Tue, Jun 22, 2021 at 9:37 AM Christian König
<ckoenig.leichtzumerken@xxxxxxxxx> wrote:
On 22.06.21 at 01:29, Jason Gunthorpe wrote:
On Mon, Jun 21, 2021 at 10:24:16PM +0300, Oded Gabbay wrote:

Another thing I want to emphasize is that we are doing p2p only
through the export/import of the FD. We do *not* allow the user to
mmap the dma-buf, as we do not support direct I/O, so there is no access
to these pages from userspace.
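
(For illustration only, not the habanalabs code: a minimal sketch of such an
exporter, with all names made up. The dma_buf_ops simply has no .mmap
callback, so mmap() on the exported FD fails with -EINVAL and userspace never
gets a CPU mapping; the memory is only reachable by importing devices.)

#include <linux/dma-buf.h>
#include <linux/err.h>
#include <linux/fcntl.h>
#include <linux/module.h>
#include <linux/scatterlist.h>

static struct sg_table *ex_map_dma_buf(struct dma_buf_attachment *attach,
                                       enum dma_data_direction dir)
{
        /* Build an sg_table describing the device-local (BAR) addresses
         * for the importing device; elided in this sketch. */
        return ERR_PTR(-EOPNOTSUPP);
}

static void ex_unmap_dma_buf(struct dma_buf_attachment *attach,
                             struct sg_table *sgt,
                             enum dma_data_direction dir)
{
}

static void ex_release(struct dma_buf *dmabuf)
{
}

static const struct dma_buf_ops ex_dmabuf_ops = {
        .map_dma_buf    = ex_map_dma_buf,
        .unmap_dma_buf  = ex_unmap_dma_buf,
        .release        = ex_release,
        /* no .mmap and no .vmap: CPU access through the FD is impossible */
};

static struct dma_buf *ex_export(void *drv_priv, size_t size)
{
        DEFINE_DMA_BUF_EXPORT_INFO(exp_info);

        exp_info.ops   = &ex_dmabuf_ops;
        exp_info.size  = size;
        exp_info.flags = O_RDWR | O_CLOEXEC;
        exp_info.priv  = drv_priv;

        return dma_buf_export(&exp_info);
}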
Arguably mmaping the memory is a better choice, and is the direction
that Logan's series goes in. Here the use of DMABUF was specifically
designed to allow hitless revocation of the memory, which this isn't
even using.
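
(For context, the revocation referred to here is the dynamic-importer
interface: the importer attaches with dma_buf_dynamic_attach() and supplies a
move_notify callback, so the exporter can move or revoke the backing storage
at any time and the importer simply re-maps on next use. A rough sketch with
made-up names, not taken from any driver:)

#include <linux/dma-buf.h>
#include <linux/dma-mapping.h>
#include <linux/err.h>

struct ex_importer {                    /* hypothetical importer state */
        struct dma_buf_attachment *attach;
        struct sg_table *sgt;
};

static void ex_move_notify(struct dma_buf_attachment *attach)
{
        struct ex_importer *imp = attach->importer_priv;

        /* The exporter is revoking/moving the backing storage (called with
         * the reservation lock held): drop the cached mapping and re-map
         * lazily the next time the importer needs the buffer. */
        if (imp->sgt) {
                dma_buf_unmap_attachment(attach, imp->sgt, DMA_BIDIRECTIONAL);
                imp->sgt = NULL;
        }
}

static const struct dma_buf_attach_ops ex_attach_ops = {
        .allow_peer2peer = true,
        .move_notify     = ex_move_notify,
};

static int ex_import(struct ex_importer *imp, struct dma_buf *dmabuf,
                     struct device *dev)
{
        imp->attach = dma_buf_dynamic_attach(dmabuf, dev, &ex_attach_ops, imp);
        return PTR_ERR_OR_ZERO(imp->attach);
}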
The major problem with this approach is that DMA-buf is also used for
memory which isn't CPU accessible.
That isn't an issue here because the memory is only intended to be
used with P2P transfers so it must be CPU accessible.

No. P2P in particular is often done on memory resources that are not even remotely CPU accessible.

That's one of the major reasons why we use P2P in the first place. See the whole XGMI implementation for example.

Thanks, Jason, for the clarification, but I honestly prefer to use
DMA-BUF at the moment.
It gives us just what we need (even more than we need, as you pointed
out), it is *already* integrated and tested in the RDMA subsystem, and
I feel comfortable using it as I'm somewhat familiar with it from my
AMD days.
That was one of the reasons we didn't even consider using the mapping
memory approach for GPUs.
Well, now we have DEVICE_PRIVATE memory that can meet this need
too.. Just nobody has wired it up to hmm_range_fault()
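
(Rough sketch of what registering such DEVICE_PRIVATE memory looks like, i.e.
ZONE_DEVICE struct pages that stand in for device memory the CPU cannot
address; names are illustrative and error handling is trimmed:)

#include <linux/memremap.h>
#include <linux/ioport.h>
#include <linux/mm.h>
#include <linux/numa.h>
#include <linux/err.h>

static vm_fault_t ex_migrate_to_ram(struct vm_fault *vmf)
{
        /* On a CPU fault the data would be migrated back to system RAM;
         * elided in this sketch. */
        return VM_FAULT_SIGBUS;
}

static void ex_page_free(struct page *page)
{
        /* Return the backing device memory to the driver's allocator. */
}

static const struct dev_pagemap_ops ex_pgmap_ops = {
        .page_free      = ex_page_free,
        .migrate_to_ram = ex_migrate_to_ram,
};

static int ex_register_device_private(struct dev_pagemap *pgmap, size_t size)
{
        struct resource *res;
        void *ret;

        /* Carve out unused physical address space to stand in for the
         * CPU-inaccessible device memory. */
        res = request_free_mem_region(&iomem_resource, size, "ex-private");
        if (IS_ERR(res))
                return PTR_ERR(res);

        pgmap->type = MEMORY_DEVICE_PRIVATE;
        pgmap->range.start = res->start;
        pgmap->range.end = res->end;
        pgmap->nr_range = 1;
        pgmap->ops = &ex_pgmap_ops;

        ret = memremap_pages(pgmap, NUMA_NO_NODE);
        return PTR_ERR_OR_ZERO(ret);
}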

So you are taking the hit of very limited hardware support and reduced
performance just to squeeze into DMABUF.
You still have the issue that this patch is doing all of this P2P
stuff wrong - following the already NAK'd AMD approach.

Well, that stuff was NAKed because we still use sg_tables, not because we don't want to allocate struct pages.

The plan is to push this forward since DEVICE_PRIVATE clearly can't handle all of our use cases and is not really a good fit to be honest.

IOMMU is now working as well, so as far as I can see we are all good here.

I'll go and read Logan's patch-set to see if that will work for us in
the future. Please remember, as Daniel said, we don't have struct page
backing our device memory, so if that is a requirement to connect to
Logan's work, then I don't think we will want to do it at this point.
It is trivial to get the struct page for a PCI BAR.
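
(For reference, with the p2pdma infrastructure this is a couple of calls; a
hedged sketch with a made-up BAR index and size:)

#include <linux/pci.h>
#include <linux/pci-p2pdma.h>
#include <linux/sizes.h>

static int ex_expose_bar(struct pci_dev *pdev)
{
        void *buf;
        int rc;

        /* Create ZONE_DEVICE struct pages covering all of BAR 4. */
        rc = pci_p2pdma_add_resource(pdev, 4, pci_resource_len(pdev, 4), 0);
        if (rc)
                return rc;

        /* The BAR can now be allocated from like any other p2pdma memory
         * and handed to a peer device for DMA. */
        buf = pci_alloc_p2pmem(pdev, SZ_1M);
        if (!buf)
                return -ENOMEM;

        /* ... program peers using pci_p2pmem_virt_to_bus(pdev, buf) ... */

        pci_free_p2pmem(pdev, buf, SZ_1M);
        return 0;
}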

Yeah, but it doesn't make much sense. Why should we create a struct page for something that isn't even memory in a lot of cases?

Regards,
Christian.




