Re: [Question] Are "device exclusive non-swap entries" / "SVM atomics in Nouveau" still getting used in practice?

On Fri, Jan 24, 2025 at 11:44:28AM +0100, David Hildenbrand wrote:
> On 23.01.25 16:08, Simona Vetter wrote:
> > On Thu, Jan 23, 2025 at 11:20:37AM +0100, David Hildenbrand wrote:
> > > Hi,
> > > 
> > > I keep finding issues in our implementation of "device exclusive non-swap
> > > entries", and the way it messes with mapcounts is disgusting.
> > > 
> > > As a reminder, what we do here is to replace a PTE pointing to an anonymous
> > > page by a "device exclusive non-swap entry".
> > > 
> > > As long as the original PTE is in place, only the CPU can access it;
> > > as soon as the "device exclusive non-swap entry" is in place, only
> > > the device can access it. Conversion back and forth is triggered by
> > > CPU / device faults.
> > > 
> > > I have fixes/reworks/simplifications for most things, but as there is
> > > only a single "real" in-tree user of make_device_exclusive():
> > > 
> > > 	drivers/gpu/drm/nouveau/nouveau_svm.c
> > > 
> > > to "support SVM atomics in Nouveau [1]"
> > > 
> > > naturally I am wondering: is this still a thing on actual hardware, or is it
> > > already stale on recent hardware and not really required anymore?
> > > 
> > > 
> > > [1] https://lore.kernel.org/linux-kernel//6621654.gmDyfcmpjF@nvdebian/T/
> > 
> 
> Thanks for your answer!
> 
> Nvidia folks told me on a different channel that it's still getting used.
> 
> > As long as you don't have a coherent interconnect it's needed. On intel
> > discrete, device atomics require device memory, so they need full hmm
> > migration (and hence won't use this function even once we land the intel
> > gpu svm code upstream).
> 
> Makes sense.
> 
> > On integrated the gpu is tied into the coherency
> > fabric, so there it's not needed.
> > 
> > I think the more fundamental issue with both this function here and
> > with forced migration to device memory is that there's no guarantee it
> > will work out.
> 
> Yes, in particular with device-exclusive, it doesn't really work with THP
> and is limited to anonymous memory. I have patches to at least make it
> work reliably with THP.

I should have crawled through the implementation first before replying.
Since it only looks at folio_mapcount(), make_device_exclusive() should at
least in theory work reliably on anon memory, and not be impacted by
elevated refcounts due to migration/ksm/thp/whatever. This is unlike
device atomics that require migration to device memory, which is just
fundamentally not reliable.
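
For reference, the nouveau usage boils down to roughly this (heavily
simplified sketch from memory, mmu notifier retry loop and most error
handling elided):

	struct page *page = NULL;
	int ret;

	mmap_read_lock(mm);
	/* Replace the PTE at @addr with a device-exclusive entry and
	 * hand back the (locked, referenced) backing page. */
	ret = make_device_exclusive_range(mm, addr, addr + PAGE_SIZE,
					  &page, owner);
	mmap_read_unlock(mm);
	if (ret > 0 && page) {
		/* program the device page tables, run the atomic op */
		unlock_page(page);
		put_page(page);
	}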

> Then, we seem to give up too easily if we cannot lock the folio when wanting
> to convert to device-exclusive, which also looks rather odd. But well, maybe
> it just works well enough in the common case, or there is some other retry
> logic that makes it fly.

I crawled through the path that migrates pages from device memory back to
system memory a few months ago, and found some livelock issues in there.
Wouldn't be surprised if make_device_exclusive() has some of the same, but
I didn't dig through it (not least because intel can't use it, due to
not-so-great hw design).

> > At least that's my understanding. And for this idea of gpu device
> > atomics without a coherent interconnect to work, we'd need to be able
> > to guarantee that we can make any page device exclusive. So from my
> > side I have some pretty big question marks on this entire thing
> > overall.
> 
> I don't think other memory (shmem/file/...) is really feasible as soon as
> other processes (not the current process) map/write/read file pages.

Yeah, none of the APIs that use this internally in their implementations
make any promises beyond memory acquired with libc's malloc() or one of
its variants. So this limitation is fine.

> We could really only handle the case where we converted a single PTE and
> that PTE is getting converted back again.
> 
> There are other concerns I have (what if the page is pinned and accessed
> outside of the user space page tables?). Maybe there was no need to
> handle these cases so far.

I think that's also ok, but it might be good to document clearly that
concurrent direct I/O, RDMA-registered buffers or whatever will mess with
this. The promise is only between the gpu and the cpu, not anything else,
in the current APIs. At least to my knowledge.
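
Something like this in the kernel-doc of make_device_exclusive_range()
would capture it (rough wording off the top of my head, not actual
in-tree text):

	/*
	 * Note: exclusive access is only enforced against CPU accesses
	 * that go through the user space page tables.  Pinned pages and
	 * anything else accessing the memory outside of those (direct
	 * I/O, RDMA-registered buffers, ...) can still read and write
	 * it while the device-exclusive entry is in place.
	 */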

> So best I can do is make anonymous memory more reliable with
> device-exclusive and fixup some of the problematic parts that I see (e.g.,
> broken page reclaim, page migration, ...).
> 
> But before starting to cleanup+improve the existing handling of anonymous
> memory, I was wondering if this whole thing is getting used at all.

Yeah, if this can be made reliable (under the limitations of anon memory
only and only excluding userspace access) then I expect we'll need this
for a very long time. I just had no idea whether even that is possible.

What isn't good is if it's only mostly reliable, like the current
pgmap->ops->migrate_to_ram() path in do_swap_page() still is. But that one
is fixable; the patches should be floating around somewhere.
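
For context, that path boils down to roughly this (simplified from
memory, see do_swap_page() in mm/memory.c):

	if (is_device_private_entry(entry)) {
		vmf->page = pfn_swap_entry_to_page(entry);
		get_page(vmf->page);
		/* If migration back fails the fault just retriggers,
		 * with nothing guaranteeing forward progress - hence
		 * the livelock potential. */
		ret = vmf->page->pgmap->ops->migrate_to_ram(vmf);
		put_page(vmf->page);
	}
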
-Sima
-- 
Simona Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


