On 02/04/18 11:20 AM, Jerome Glisse wrote:
> The point i have been trying to get accross is that you do have this
> information with dma_map_resource() you know the device to which you
> are trying to map (dev argument to dma_map_resource()) and you can
> easily get the device to which the memory belongs because you have the
> CPU physical address of the memory hence you can lookup the resource
> and get the device from that.

How do you go from a physical address to a struct device generally and
in a performant manner? (A rough sketch of the kind of scan I think that
implies is below my signature.)

> IIRC CAPI make P2P mandatory but maybe this is with NVLink. We can ask
> the PowerPC folks to confirm. Note CAPI is Power8 and newer AFAICT.

PowerPC folks recently told us specifically that Power9 does not support
P2P between PCI root ports. I've said this many times. CAPI has nothing
to do with it.

> Mapping to userspace have nothing to do here. I am talking at hardware
> level. How thing are expose to userspace is a completely different
> problems that do not have one solution fit all. For GPU you want this
> to be under total control of GPU drivers. For storage like persistent
> memory, you might want to expose it userspace more directly ...

My understanding (and I worked on this a while ago) is that CAPI
hardware manages memory maps, typically for userspace memory. When a
userspace program changes its mapping, the CAPI hardware is updated so
that it stays coherent with the user address space and it is safe to DMA
to any address without having to pin memory. (This is very similar to
ODP in RNICs.) This is *really* nice but doesn't solve *any* of the
problems we've been discussing. Moreover, many developers want to keep
P2P in-kernel for the time being, where the problem of pinning memory
does not exist.

Logan
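
P.S. To be concrete about the question above: the only way I can see to
go from a bare physical address to a struct device is a linear scan over
every device's memory ranges. A rough, untested, PCI-only sketch is
below (the function name is made up, not an existing interface), and it
is obviously neither general nor fast:

#include <linux/pci.h>

/*
 * Hypothetical helper (not an existing kernel interface): walk every
 * PCI device and every standard BAR looking for one that contains the
 * given physical address.  Non-PCI providers (platform devices, etc.)
 * would need yet another walk on top of this.
 */
static struct device *hypothetical_phys_to_device(phys_addr_t addr)
{
	struct pci_dev *pdev = NULL;
	int bar;

	for_each_pci_dev(pdev) {
		for (bar = 0; bar < PCI_ROM_RESOURCE; bar++) {
			if (!pci_resource_len(pdev, bar))
				continue;
			if (addr >= pci_resource_start(pdev, bar) &&
			    addr <= pci_resource_end(pdev, bar))
				return &pdev->dev; /* still holds the pdev ref */
		}
	}

	return NULL;
}

And even then you'd have to know up front that the address belongs to a
PCI BAR at all.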