Re: [RFC PATCH 3/5] mm/vma: add support for peer to peer to device vma

On Tue, Jan 29, 2019 at 02:11:23PM -0500, Jerome Glisse wrote:
> On Tue, Jan 29, 2019 at 11:36:29AM -0700, Logan Gunthorpe wrote:
> > 
> > 
> > On 2019-01-29 10:47 a.m., jglisse@xxxxxxxxxx wrote:
> > 
> > > +	/*
> > > +	 * Optional for device drivers that want to allow peer to peer (p2p)
> > > +	 * mapping of their vma (which can be backed by some device memory) to
> > > +	 * another device.
> > > +	 *
> > > +	 * Note that the exporting device driver might not have mapped anything
> > > +	 * inside the vma for the CPU but might still want to allow a peer
> > > +	 * device to access the range of memory corresponding to a range in
> > > +	 * that vma.
> > > +	 *
> > > +	 * FOR PREDICTABILITY, IF A DRIVER SUCCESSFULLY MAPS A RANGE ONCE FOR
> > > +	 * A DEVICE, THEN FURTHER MAPPINGS OF THE SAME RANGE (WHILE THE VMA IS
> > > +	 * STILL VALID) SHOULD ALSO SUCCEED. Following this rule allows the
> > > +	 * importing device to map once during setup and report any failure at
> > > +	 * that time to userspace. Further mappings of the same range might
> > > +	 * happen after an mmu notifier invalidation over the range. The
> > > +	 * exporting device can use this to move things around (defrag BAR
> > > +	 * space for instance) or to do other similar tasks.
> > > +	 *
> > > +	 * THE IMPORTER MUST OBEY mmu_notifier NOTIFICATIONS AND CALL
> > > +	 * p2p_unmap() WHEN A NOTIFIER IS CALLED FOR THE RANGE! THIS CAN
> > > +	 * HAPPEN AT ANY POINT IN TIME WITH NO LOCK HELD.
> > > +	 *
> > > +	 * In the functions below, the device argument is the importing
> > > +	 * device; the exporting device is the one to which the vma belongs.
> > > +	 */
> > > +	long (*p2p_map)(struct vm_area_struct *vma,
> > > +			struct device *device,
> > > +			unsigned long start,
> > > +			unsigned long end,
> > > +			dma_addr_t *pa,
> > > +			bool write);
> > > +	long (*p2p_unmap)(struct vm_area_struct *vma,
> > > +			  struct device *device,
> > > +			  unsigned long start,
> > > +			  unsigned long end,
> > > +			  dma_addr_t *pa);
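
To make the importer side of that contract concrete: the way I read it,
the importer tears down its mapping from its mmu notifier callback,
something like the sketch below. Everything here is invented
illustration (struct my_importer and friends are not from the patch),
assuming the ops proposed above; locking is elided:

struct my_importer {
	struct mmu_notifier mn;		/* registered on the exporting mm */
	struct vm_area_struct *vma;	/* exporter's vma we mapped from */
	struct device *dev;		/* our importing device */
	unsigned long start, end;	/* currently mapped range */
	dma_addr_t *pa;			/* bus addresses from p2p_map() */
	bool mapped;
};

/* Wired up as .invalidate_range_start: on any invalidation overlapping
 * the mapped range, call the exporter's p2p_unmap() as required above. */
static int my_importer_invalidate(struct mmu_notifier *mn,
				  const struct mmu_notifier_range *range)
{
	struct my_importer *imp = container_of(mn, struct my_importer, mn);

	if (imp->mapped && range->start < imp->end && range->end > imp->start) {
		imp->vma->vm_ops->p2p_unmap(imp->vma, imp->dev,
					    imp->start, imp->end, imp->pa);
		imp->mapped = false;
	}
	return 0;
}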
> > 
> > I don't understand why we need new p2p_[un]map function pointers for
> > this. In subsequent patches, they never appear to be set anywhere and
> > are only called by the HMM code. I'd have expected it to be called by
> > some core VMA code and set by HMM as that's what vm_operations_struct is
> > for.
> > 
> > But the code is all very confusing, hard to follow, and seems to be
> > missing significant chunks. So I'm not really sure what is going on.
> 
> It is set by the device driver when userspace does mmap(fd), where fd
> comes from open("/dev/somedevicefile"). So it is set by the device
> driver; HMM has nothing to do with this. It must be set in the device
> driver's mmap callback (the mmap callback of struct file_operations).
> For this patch you can completely ignore all the HMM patches. Maybe
> posting this as 2 separate patchsets would make it clearer.
> 
> For instance, see [1] for how a non-HMM driver can export its memory
> by just setting those callbacks. Note that a proper implementation of
> this should also include some kind of driver policy on what to allow
> to map and what not to allow ... All of this is driver specific in any
> case.
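
Right - and for the policy point, I'd picture the exporter's p2p_map
doing a peer check before handing out bus addresses, roughly like the
sketch below. The my_exporter_* names are invented (including the
my_exporter_peer_allowed() policy hook), and it assumes a physically
contiguous BAR:

struct my_exporter {
	dma_addr_t bar_bus_addr;	/* BAR base as seen by peer devices */
	size_t bar_size;
};

static long my_exporter_p2p_map(struct vm_area_struct *vma,
				struct device *importer,
				unsigned long start, unsigned long end,
				dma_addr_t *pa, bool write)
{
	struct my_exporter *exp = vma->vm_private_data;
	unsigned long off = start - vma->vm_start;
	unsigned long i, npages = (end - start) >> PAGE_SHIFT;

	/* Driver policy: refuse importers that cannot reach our BAR. */
	if (!my_exporter_peer_allowed(exp, importer))
		return -EPERM;

	/* Contiguous BAR: each page of the range maps 1:1 to a bus address. */
	for (i = 0; i < npages; i++)
		pa[i] = exp->bar_bus_addr + off + i * PAGE_SIZE;
	return npages;
}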

I'm imagining that the RDMA drivers would use this interface on their
per-process 'doorbell' BAR pages - we also wish to have P2P DMA to
this memory. The entire VFIO PCI BAR mmap would be good to cover with
this as well.

Jerome, I think it would be nice to have a helper scheme - the simple
case would be a straightforward remapping of PCI BAR memory, so we
could have, say, something like:

static const struct vm_operations_struct my_ops = {
  .p2p_map = p2p_ioremap_map_op,
  .p2p_unmap = p2p_ioremap_unmap_op,
};

struct ioremap_data {
  [..]
};

static int fops_mmap(struct file *file, struct vm_area_struct *vma)
{
   vma->vm_private_data = &driver_priv->ioremap_data;
   return p2p_ioremap_device_memory(vma, exporting_device, [..]);
}

This closely matches at least what the RDMA drivers do, where
p2p_ioremap_device_memory populates the p2p_map and p2p_unmap pointers
with sensible generic functions, etc.
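
As a sketch of what that helper could look like - assuming it installs
a generic ops table like the my_ops above, that the ioremap_data fields
are roughly what I guess below, and with the p2p_ioremap_map_op /
p2p_ioremap_unmap_op bodies elided:

static const struct vm_operations_struct p2p_ioremap_vm_ops = {
	.p2p_map = p2p_ioremap_map_op,
	.p2p_unmap = p2p_ioremap_unmap_op,
};

int p2p_ioremap_device_memory(struct vm_area_struct *vma,
			      struct device *exporter,
			      phys_addr_t bar_base, size_t bar_size)
{
	struct ioremap_data *data = vma->vm_private_data;

	/* Stash the BAR geometry so the generic ops can translate vma
	 * offsets into bus addresses later. */
	data->exporter = exporter;
	data->bar_base = bar_base;
	data->bar_size = bar_size;

	vma->vm_ops = &p2p_ioremap_vm_ops;

	/* Still give the CPU its usual mapping of the BAR. */
	return io_remap_pfn_range(vma, vma->vm_start, bar_base >> PAGE_SHIFT,
				  bar_size, vma->vm_page_prot);
}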

It looks like vfio would be able to use this as well (though I am
unsure why vfio uses remap_pfn_range instead of io_remap_pfn_range for
BAR memory...)

Do any drivers need more control than this?

Jason