On Wed, Sep 04, 2024 at 02:27:15PM +0200, Thomas Hellström wrote:
> Hi, Sima,
>
> On Mon, 2024-09-02 at 14:33 +0200, Daniel Vetter wrote:
> > Jumping in here in the middle, since I think it's a solid place to
> > drop my idea of "align with core mm" gpusvm locking ...
> >
> > On Thu, Aug 29, 2024 at 08:56:23PM +0000, Matthew Brost wrote:
> > > On Thu, Aug 29, 2024 at 09:18:29PM +0200, Thomas Hellström wrote:
> > > Issues with removing a SVM range:
> > >
> > > - Xe bind code stores invalidation / present state in the VMA,
> > >   this would need to be moved to the radix tree. I have a Jira
> > >   open for that work which I believe other developers are going
> > >   to own.
> > > - Where would the dma mapping / device pages be stored?
> > >   - In the radix tree? What if ATS is enabled? We don't have a
> > >     driver owned radix tree. How do we reasonably connect a
> > >     driver owned radix tree to a common GPUSVM layer?
> >
> > Yeah this one is really annoying, because the core mm gets away
> > with nothing because it can just store the pfn in the pte. And it
> > doesn't need anything else. So we probably still need something
> > unfortunately ...
> >
> > >   - In the notifier? What if the notifier is sparsely populated?
> > >     We would be wasting huge amounts of memory. What if the
> > >     notifier is configured to span the entire virtual address
> > >     space?
> >
> > So if we go with the radix idea, we could model the radix to
> > exactly match the gpu pagetables. That's essentially what the core
> > mm does. Then each pagetable at each level has a spinlock for
> > essentially a range lock. The notifier seqno would be stored into
> > each pagetable (not the individual entries, that's probably too
> > much), which should allow us to very efficiently check whether an
> > entire arbitrary va range is still valid on the fault side.
>
> I still wonder whether this should be owned by the driver, though.
> And if we were optimizing for multiple simultaneous fault processing
> with a small granularity, I would agree, but given that gpu
> pagefaults are considered so slow they should be avoided, I wonder
> whether xe's current approach of a single page-table lock wouldn't
> suffice, in addition to a semi-global seqno?
>
> For invalidations, I think we actually currently allow simultaneous
> overlapping invalidations that are only protected by the write-side
> of the notifier seqno.

Yeah I think this is just a long-term design point: As long as the
pagetable locking is conceptually a range thing I agree it doesn't
matter what we start out with, as long as it's somewhere on the line
between a global lock and the over-the-top scalable radix tree
per-pagetable node approach core mm has.

> > On the notifier side we can also very efficiently walk arbitrary
> > ranges, because the locking is really fine-grained and in an
> > adaptive way.
> >
> > > - How does the garbage collector work? We can't allocate memory
> > >   in the notifier so we don't have anything to add to the garbage
> > >   collector. We can't directly modify page tables given you need
> > >   locks in the path of reclaim.
> >
> > Probably no more garbage collector, you deal with pages/folios
> > like the core mm expects.
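
To make the radix idea above a bit more concrete, here's a very rough
sketch in kernel-flavoured C of what "one lock plus one seqno per
pagetable node" could look like. All the xe_pt_* names are invented
for illustration and the actual radix walk is hand-waved away; only
mmu_interval_read_begin() (plus the usual mmu_interval_read_retry()
before committing anything) is the real core-mm API:

#include <linux/mmu_notifier.h>
#include <linux/spinlock.h>

/* Hypothetical, not actual xe code: one node per gpu pagetable, with
 * a spinlock acting as the range lock for its subtree and the
 * notifier seqno sampled when that subtree was last (re)validated.
 */
struct xe_pt_node {
	spinlock_t lock;
	unsigned long notifier_seq;
};

/* A whole subtree can be checked as still-valid with a single compare
 * per node instead of looking at individual entries.
 */
static bool xe_pt_node_valid(struct xe_pt_node *node, unsigned long cur_seq)
{
	bool valid;

	spin_lock(&node->lock);
	valid = node->notifier_seq == cur_seq;
	spin_unlock(&node->lock);

	return valid;
}

/* Fault-handler flavoured usage, per the standard interval-notifier
 * pattern; the walk over the nodes covering the faulting va range is
 * hand-waved down to a single root node here.
 */
static bool xe_pt_range_valid(struct xe_pt_node *root,
			      struct mmu_interval_notifier *notifier)
{
	unsigned long seq = mmu_interval_read_begin(notifier);

	return xe_pt_node_valid(root, seq);
}

Bind would stamp the sampled seqno into every node it (re)validates,
and invalidation bumps the notifier seqno as today, so a node whose
stored seqno no longer matches simply fails the compare.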
> Yeah, if the page-table locks are reclaim-safe no more garbage
> collector, but OTOH, IIRC even in core-mm the invalidation
> counterpart, unmap_mapping_range(), can't and doesn't remove
> page-table subtrees when called from the address-space side, whereas
> zapping when called from the mm side, like madvise(MADV_DONTNEED),
> can.

Yeah we might need to mark up entirely empty pagetables and pass that
up the radix tree, so that on the next gpu bind we can zap those if
needed. Since we have the pagetables already it should be doable to
add them to a "needs garbage collecting" list of some sort for
entirely empty pagetables, unlike the current garbage collector that
tosses out partial ranges and so needs more stuff. But also, future
problem for post-merge I think.
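
A very hand-wavy sketch of that too, since the nice property is that
the notifier never has to allocate anything: the pagetable node
already exists, so an embedded list_head plus a spinlock is enough.
Again, all the xe_pt_* names are invented for illustration:

#include <linux/list.h>
#include <linux/spinlock.h>

/* Hypothetical: gc_link is embedded in each pagetable node (and
 * INIT_LIST_HEAD()ed at node creation), so queueing an entirely empty
 * node for collection needs no allocation in the notifier.
 */
struct xe_pt_node {
	struct list_head gc_link;
	/* ... pagetable payload ... */
};

struct xe_pt_gc {
	spinlock_t lock;
	struct list_head list;
};

/* Notifier side: no allocation, no sleeping, just link the node. */
static void xe_pt_gc_queue(struct xe_pt_gc *gc, struct xe_pt_node *node)
{
	spin_lock(&gc->lock);
	if (list_empty(&node->gc_link))
		list_add_tail(&node->gc_link, &gc->list);
	spin_unlock(&gc->lock);
}

/* Next gpu bind: splice the list off under the lock, then tear down
 * the empty subtrees outside of it, where we can take the pagetable
 * locks since we're no longer in the reclaim path.
 */
static void xe_pt_gc_run(struct xe_pt_gc *gc)
{
	struct xe_pt_node *node, *tmp;
	LIST_HEAD(zap);

	spin_lock(&gc->lock);
	list_splice_init(&gc->list, &zap);
	spin_unlock(&gc->lock);

	list_for_each_entry_safe(node, tmp, &zap, gc_link) {
		list_del_init(&node->gc_link);
		/* zap/free the empty subtree here */
	}
}

-Sima
-- 
Simona Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch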