Re: [RFC 1/5] mm/hmm: HMM API to enable P2P DMA for device private pages

On Tue, Jan 28, 2025 at 01:21:23PM -0400, Jason Gunthorpe wrote:
> On Tue, Jan 28, 2025 at 05:32:23PM +0100, Thomas Hellström wrote:
> > > This series supports three cases:
> > > 
> > >  1) pgmap->owner == range->dev_private_owner
> > >     This is "driver private fast interconnect"; in this case HMM
> > >     should immediately return the page. The calling driver
> > >     understands the private parts of the pgmap and computes the
> > >     private interconnect address.
> > > 
> > >     This requires organizing your driver so that all private
> > >     interconnect has the same pgmap->owner.
> > 
> > Yes, although that makes this map static, since pgmap->owner has to
> > be set at pgmap creation time, and during initial discussions we were
> > looking at something dynamic here. However, I think we can probably
> > do with a per-driver owner for now and get back if that's not
> > sufficient.
> 
> The pgmap->owner doesn't *have* to be fixed; certainly during early
> boot, before you hand out any page references, it can be changed. I
> wouldn't be surprised if this is useful for some requirements to build
> up the private interconnect topology?
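
For reference, a minimal sketch of the "set at pgmap creation time" part
being discussed here: pgmap->owner is whatever cookie the driver puts
into its device-private pagemap, and the same cookie is later used as
range->dev_private_owner. Everything prefixed my_ below is made up for
illustration; only the dev_pagemap fields, MEMORY_DEVICE_PRIVATE and
devm_memremap_pages() are the real kernel interface.

#include <linux/device.h>
#include <linux/err.h>
#include <linux/ioport.h>
#include <linux/memremap.h>

/* Made-up per-device driver state, just to give the sketch a home. */
struct my_drv {
	struct device *dev;
	struct dev_pagemap pgmap;
	void *dev_private_owner;	/* later used as range->dev_private_owner */
};

/* Shared by every device instance on the same private interconnect. */
static int my_interconnect_owner;

/* .migrate_to_ram / .page_free elided; both are required for DEVICE_PRIVATE. */
static const struct dev_pagemap_ops my_drv_pgmap_ops;

static int my_drv_init_devmem(struct my_drv *drv, struct resource *res)
{
	void *addr;

	drv->pgmap.type = MEMORY_DEVICE_PRIVATE;
	drv->pgmap.range.start = res->start;
	drv->pgmap.range.end = res->end;
	drv->pgmap.nr_range = 1;
	drv->pgmap.ops = &my_drv_pgmap_ops;

	/* The cookie that hmm_range_fault() callers will match against. */
	drv->dev_private_owner = &my_interconnect_owner;
	drv->pgmap.owner = drv->dev_private_owner;

	addr = devm_memremap_pages(drv->dev, &drv->pgmap);
	return PTR_ERR_OR_ZERO(addr);
}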

The trouble I'm seeing is device probe, and the fundamental issue that
you never know when you're done. So if we rely entirely on pgmap->owner
to figure out the driver-private interconnect topology, that's going to
be messy. That's why I'm also leaning towards both comparing owners and
having an additional check for whether the interconnect is actually
there yet.
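
As a strawman, that combined check could look something like the sketch
below on the hmm_range_fault() caller's side. pgmap->owner is the real
field; my_drv is the made-up struct from the earlier sketch, and
my_drv_interconnect_up() is a made-up stand-in for whatever "has the
link actually come up" query a driver would implement.

#include <linux/memremap.h>

/* Made-up per-driver hook answering whether the link is up right now. */
bool my_drv_interconnect_up(struct my_drv *drv, struct dev_pagemap *pgmap);

static bool my_drv_can_use_private_page(struct my_drv *drv,
					struct dev_pagemap *pgmap)
{
	/* Static part: does the pgmap belong to our owner group at all? */
	if (pgmap->owner != drv->dev_private_owner)
		return false;

	/* Dynamic part: is the interconnect actually there yet? */
	return my_drv_interconnect_up(drv, pgmap);
}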

You can fake that by doing these checks after hmm_range_fault() has
returned, and if you get a bunch of unsuitable pages, toss the range
back to hmm_range_fault() asking for an unconditional migration to
system memory for those. But that's not great, and I think it at least
goes against the spirit of how you want to handle PCI P2P in case 2
below?
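
Roughly, that "check after the fact" flow would look like the sketch
below: fault once with our dev_private_owner set, and if some of the
returned device-private pages turn out to be unusable, fault the range
again with dev_private_owner cleared so those pages get migrated to
system memory. The mmu notifier retry loop is omitted,
range->default_flags is assumed to request faulting (HMM_PFN_REQ_FAULT),
and my_drv_page_usable() is a made-up check along the lines of the
previous sketch.

#include <linux/hmm.h>
#include <linux/memremap.h>

bool my_drv_page_usable(struct my_drv *drv, struct page *page);

static int my_drv_fault_range(struct my_drv *drv, struct hmm_range *range)
{
	unsigned long npages = (range->end - range->start) >> PAGE_SHIFT;
	bool need_fallback = false;
	unsigned long i;
	int ret;

	/* First pass: accept device-private pages we share an owner with. */
	range->dev_private_owner = drv->dev_private_owner;
	ret = hmm_range_fault(range);
	if (ret)
		return ret;

	for (i = 0; i < npages; i++) {
		struct page *page;

		if (!(range->hmm_pfns[i] & HMM_PFN_VALID))
			continue;
		page = hmm_pfn_to_page(range->hmm_pfns[i]);
		if (is_device_private_page(page) &&
		    !my_drv_page_usable(drv, page))
			need_fallback = true;
	}
	if (!need_fallback)
		return 0;

	/* Toss it back, forcing migration of the unusable pages to RAM. */
	range->dev_private_owner = NULL;
	return hmm_range_fault(range);
}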

Cheers, Sima

> > >  2) The page is DEVICE_PRIVATE and get_dma_pfn_for_device() exists.
> > >     The exporting driver has the option to return a P2P struct page
> > >     that can be used for PCI P2P without any migration. In a PCI GPU
> > >     context this means the GPU has mapped its local memory to a PCI
> > >     address. The assumption is that P2P always works and so this
> > >     address can be DMA'd from.
> > 
> > So do I understand correctly that the driver then needs to set up
> > one device_private struct page and one pcie_p2p struct page for each
> > page of device memory participating in this way?
> 
> Yes, for now. I hope to remove the p2p page eventually.
> 
> > > If you are just talking about your private multi-path, then that is
> > > already handled..
> > 
> > No, the issue I'm having with this is really why would
> > hmm_range_fault() need the new pfn when it could easily be obtained
> > from the device-private pfn by the hmm_range_fault() caller? 
> 
> That isn't the API of HMM; the caller uses hmm to get PFNs it can use.
> 
> Deliberately returning PFNs the caller cannot use is nonsensical to
> its purpose :)
> 
> > So anyway, what we'll do is try to use an interconnect-common owner
> > for now and revisit the problem if that's not sufficient, so we can
> > come up with an acceptable solution.
> 
> That is the intention for sure. The idea was that the drivers under
> the private pages would somehow generate unique owners for shared
> private interconnect segments.
> 
> I wouldn't say this is the be-all and end-all of the idea; if there
> are better ways to handle accepting private pages they can certainly
> be explored.
> 
> Jason
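
To make the quoted case 2 a bit more concrete, a rough caller-side
sketch of how the pfns coming back from hmm_range_fault() could be told
apart under this scheme. The page-type predicates are the real ones;
my_drv, the empty branches and the case comments are illustrative only,
and DMA mapping plus the P2PDMA specifics of the DMA API are glossed
over.

#include <linux/hmm.h>
#include <linux/memremap.h>

static void my_drv_classify_pfn(struct my_drv *drv, unsigned long hmm_pfn)
{
	struct page *page;

	if (!(hmm_pfn & HMM_PFN_VALID))
		return;
	page = hmm_pfn_to_page(hmm_pfn);

	if (is_device_private_page(page)) {
		/*
		 * Case 1: matching dev_private_owner, so this is memory
		 * reachable over the driver-private interconnect; the
		 * driver computes the interconnect address itself.
		 */
	} else if (is_pci_p2pdma_page(page)) {
		/*
		 * Case 2: the exporter handed back a P2P page through the
		 * RFC's get_dma_pfn_for_device() hook; it is DMA-mappable,
		 * subject to the usual P2PDMA handling in the DMA API.
		 */
	} else {
		/* Otherwise: plain system memory, possibly after migration. */
	}
}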

-- 
Simona Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch



