Re: Enabling peer to peer device transactions for PCIe devices

On 2016-11-23 02:32 PM, Jason Gunthorpe wrote:
> On Wed, Nov 23, 2016 at 02:14:40PM -0500, Serguei Sagalovitch wrote:
>> On 2016-11-23 02:05 PM, Jason Gunthorpe wrote:
>>> As Bart says, it would be best to be combined with something like
>>> Mellanox's ODP MRs, which allows a page to be evicted and then trigger
>>> a CPU interrupt if a DMA is attempted so it can be brought back.
>> Please note that in the general case (including the MR one) we could
>> have a "page fault" from a different PCIe device, so all PCIe devices
>> must be synchronized.

> Standard RDMA MRs require pinned pages; the DMA address cannot change
> while the MR exists (there is no hardware support for this at all), so
> page faulting from any other device is out of the question while they
> exist. This is the same requirement as typical simple driver DMA, which
> requires pages to stay pinned until the simple device completes its DMA.
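
For reference, the classic pin-then-map pattern that such MRs rely on
looks roughly like this (a minimal sketch against ~4.x kernel APIs;
assumes a page-aligned buffer, and error unwinding is trimmed):

#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/dma-mapping.h>

static int pin_and_map(struct device *dev, unsigned long uaddr,
                       int npages, struct page **pages,
                       struct scatterlist *sgl)
{
        struct scatterlist *sg;
        int pinned, i;

        /* Pin the pages so their physical location cannot change. */
        pinned = get_user_pages_fast(uaddr, npages, 1 /* write */, pages);
        if (pinned < 0)
                return pinned;

        sg_init_table(sgl, pinned);
        for_each_sg(sgl, sg, pinned, i)
                sg_set_page(sg, pages[i], PAGE_SIZE, 0);

        /*
         * The DMA addresses in sgl stay valid until dma_unmap_sg(),
         * after which each page must be released with put_page().
         */
        return dma_map_sg(dev, sgl, pinned, DMA_BIDIRECTIONAL) ?
               pinned : -ENOMEM;
}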

> ODP RDMA MRs do not require that; they just page fault like the CPU or
> really anything else, and the kernel has to make sense of concurrent
> page faults from multiple sources.
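
The way an ODP-style MR stays coherent is essentially MMU-notifier
mirroring: when the kernel is about to move a page, the driver tears
down the device translation so the next device access faults. A rough
sketch (4.x-era mmu_notifier signatures; the odp_mr structure and
device_unmap_range() are made up for illustration, not the real mlx5
ODP code):

#include <linux/kernel.h>
#include <linux/mmu_notifier.h>

/* Illustrative per-MR state. */
struct odp_mr {
        struct mmu_notifier mn; /* registered via mmu_notifier_register() */
        /* ... device page-table handle, locks ... */
};

static void odp_invalidate_range_start(struct mmu_notifier *mn,
                                       struct mm_struct *mm,
                                       unsigned long start,
                                       unsigned long end)
{
        struct odp_mr *mr = container_of(mn, struct odp_mr, mn);

        /*
         * Tear down device translations for [start, end) so the next
         * device access faults and is resolved against the new page.
         * device_unmap_range() is a stand-in for the driver hook.
         */
        /* device_unmap_range(mr, start, end); */
        (void)mr;
}

static const struct mmu_notifier_ops odp_mn_ops = {
        .invalidate_range_start = odp_invalidate_range_start,
};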

> The upshot is that GPU scenarios that rely on highly dynamic
> virtual->physical translation cannot sanely be combined with standard
> long-life RDMA MRs.

We do not want "highly" dynamic translation, due to its performance cost.
We need to support "overcommit" but would like to minimize the impact.

To support RDMA MRs for GPU/VRAM/PCIe device memory (which is a must),
we either need to globally force pinning for the whole
get_user_pages()/put_page() scope, or to have special handling for RDMA
MRs and similar cases. Generally it could be difficult to correctly
handle "DMA in progress" because (a) DMA could originate from numerous
PCIe devices simultaneously, including requests to receive network
data, and (b) in the HSA case DMA could originate from user space
without the kernel driver's knowledge. So without corresponding h/w
support everywhere, I do not see how it could be solved effectively.

> Certainly, any solution for GPUs must follow the typical page pinning
> semantics: changing the DMA address of a page must be blocked while
> any DMA is in progress.
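
One way to state that invariant in code is a "DMA in progress"
reference count that migration has to wait out. Purely illustrative;
the dma_guard helpers below are hypothetical, not an existing kernel
API:

#include <linux/atomic.h>
#include <linux/wait.h>

/* Hypothetical guard: counts in-flight DMA against a page/mapping. */
struct dma_guard {
        atomic_t inflight;
        wait_queue_head_t drained;
};

static void dma_guard_begin(struct dma_guard *g)
{
        atomic_inc(&g->inflight);       /* a DMA is now in progress */
}

static void dma_guard_end(struct dma_guard *g)
{
        if (atomic_dec_and_test(&g->inflight))
                wake_up(&g->drained);   /* last DMA finished */
}

/* Changing the DMA address must block until all DMA has drained. */
static void dma_guard_wait_idle(struct dma_guard *g)
{
        wait_event(g->drained, atomic_read(&g->inflight) == 0);
}
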
>>> Does HMM solve the peer-peer problem? Does it do it generically or
>>> only for drivers that are mirroring translation tables?
>> In its current form HMM doesn't solve the peer-to-peer problem.
>> Currently it allows "mirroring" of "malloc" memory on the GPU, which
>> is not always what is needed. Additionally, there is a need for a way
>> to share VRAM allocations between different processes.
> Humm, so it can be removed from Alexander's list then :\

HMM is very useful for some types of scenarios, and it could
significantly simplify (for performance) the implementation of some
features, e.g. OpenCL 2.0 SVM.

> As Dan suggested, maybe we need to do both: some kind of fix for
> get_user_pages() for smaller mappings (eg ZONE_DEVICE) and a mandatory
> API conversion to get_user_dma_sg() for other cases?
>
> Jason
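
To make that concrete: get_user_dma_sg() does not exist today, but the
shape I would imagine is something like the declarations below
(entirely hypothetical, one possible signature), returning
device-usable addresses instead of struct page pointers so that
ZONE_DEVICE/VRAM ranges can be covered:

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/*
 * Hypothetical API: pin (or fault in) the user range
 * [uaddr, uaddr + len) and return it as a scatterlist already mapped
 * for DMA by @dev. For peer-to-peer capable memory this could yield
 * PCIe bus addresses with no struct page involved.
 */
int get_user_dma_sg(struct device *dev, unsigned long uaddr, size_t len,
                    enum dma_data_direction dir, struct sg_table *sgt);

/* Hypothetical counterpart: unmap and release the range. */
void put_user_dma_sg(struct device *dev, struct sg_table *sgt,
                     enum dma_data_direction dir);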

Sincerely yours,
Serguei Sagalovitch