Re: [Question] Are "device exclusive non-swap entries" / "SVM atomics in Nouveau" still getting used in practice?

On 29.01.25 12:28, Simona Vetter wrote:
On Wed, Jan 29, 2025 at 11:48:03AM +0100, Simona Vetter wrote:
On Tue, Jan 28, 2025 at 09:24:33PM +0100, David Hildenbrand wrote:
On 28.01.25 21:14, Simona Vetter wrote:
On Tue, Jan 28, 2025 at 11:09:24AM +1100, Alistair Popple wrote:
On Fri, Jan 24, 2025 at 06:54:02PM +0100, David Hildenbrand wrote:
On integrated GPUs the device is tied into the coherency
fabric, so there it's not needed.

I think the more fundamental question with both this function here and
with forced migration to device memory is that there's no guarantee it
will work out.

Yes, in particular with device-exclusive, it doesn't really work with THP
and is limited to anonymous memory. I have patches to at least make it
work reliably with THP.

I should have crawled through the implementation first before replying.
Since it only looks at folio_mapcount() make_device_exclusive() should at
least in theory work reliably on anon memory, and not be impacted by
elevated refcounts due to migration/ksm/thp/whatever.

Yes, there is -- in theory -- nothing blocking the conversion except the
folio lock. That's different than page migration.

Indeed - this was the entire motivation for make_device_exclusive() - that we
needed a way to reliably exclude CPU access that couldn't be blocked in the same
way page migration can (otherwise we could have just migrated to a device page,
even if that may have added unwanted overhead).

The folio_trylock worries me a bit. I guess this is to avoid deadlocks
when locking multiple folios, but I think at least on the first one we
need an unconditional folio_lock to guarantee forward progress.

At least on the hmm path I was able to trigger -EBUSY a couple of times
due to concurrent swapout. But the hmm-tests selftest fails immediately
instead of retrying.

My worries with just retrying is that it's very hard to assess whether
there's a livelock or whether the retry has a good chance of success. As
an example the ->migrate_to_ram path has some trylocks, and the window
where all other threads got halfway and then fail the trylock is big
enough that once you pile up enough threads that spin through there,
you're stuck forever. Which isn't great.

So if we could convert at least the first folio_trylock into a plain lock
then forward progress is obviously assured and there's no need to crawl
through large chunks of mm/ code to hunt for corner cases where we could
be too unlucky to ever win the race.

Since atomics can't cross 4k boundaries (or the hw is just really broken),
this should be enough to avoid being stuck in a livelock. I'm also not
seeing any other reason why a folio_lock shouldn't work here, but then my
understanding of mm/ stuff is really just scratching the surface.

I did crawl through all the other code and it looks like everything else
is unconditional locks. So looks all good and I didn't spot anything else
that seemed problematic.

Somewhat aside, I do wonder whether we really want to require callers to
hold the mmap lock, or whether with all the work towards lockless fastpath
that shouldn't instead just be an implementation detail.

We might be able to use the VMA lock in the future, but that will require
GUP support and a bunch more. Until then, the mmap_lock in read mode is
required.

Yup. I also don't think we should try to improve before benchmarks show an
actual need. It's more about future proofing and making sure mmap_lock
doesn't leak into driver data structures that I'm worried about. Because
I've seen some hmm/gpu rfc patches that heavily relied on mmap_lock to
keep everything correct on the driver side, which is not a clean design.
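As a rough, non-buildable sketch of the caller-side discipline being discussed here (the exact make_device_exclusive() signature varies across kernel versions and is assumed; program_device_atomic_mapping() is an invented driver-side placeholder):

```
/* Pseudocode only: mmap lock held in read mode around the conversion,
 * with the driver relying on MMU notifiers, not on mmap_lock, for the
 * correctness of its own data structures. */
mmap_read_lock(mm);
page = make_device_exclusive(mm, addr, owner, &folio);	/* may fail, e.g. -EBUSY */
if (!IS_ERR(page)) {
	program_device_atomic_mapping(page);	/* driver-specific, invented */
	folio_unlock(folio);
	folio_put(folio);
}
mmap_read_unlock(mm);
```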

I was not able to convince myself that we'll really need the folio lock, but
that's also a separate discussion.

This is way above my pay grade in terms of mm/ understanding, unfortunately.

I pondered this some more, and I think it's to make sure we get a stable
reading of folio_mapcount() and are not racing with new rmaps being
established. But I also got lost a few times in the maze ...

That mapcount handling is all messed up and I'll remove that along with
the rmap walk. Also, the folio lock does not stabilize the mapcount at all ...

Here is my understanding:

commit e2dca6db09186534c7e6082b77be6e17d8920f10
Author: David Hildenbrand <david@xxxxxxxxxx>
Date:   Tue Jan 28 15:25:37 2025 +0100

    mm/memory: document restore_exclusive_pte()

    Let's document how this function is to be used, and why the requirement
    for the folio lock might be dropped in the future.

    Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>

diff --git a/mm/memory.c b/mm/memory.c
index 46956994aaff..caaae8df11a9 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -718,6 +718,31 @@ struct folio *vm_normal_folio_pmd(struct vm_area_struct *vma,
 }
 #endif
+/**
+ * restore_exclusive_pte - Restore a device-exclusive entry
+ * @vma: VMA covering @address
+ * @folio: the mapped folio
+ * @page: the mapped folio page
+ * @address: the virtual address
+ * @ptep: PTE pointer into the locked page table mapping the folio page
+ * @orig_pte: PTE value at @ptep
+ *
+ * Restore a device-exclusive non-swap entry to an ordinary present PTE.
+ *
+ * The folio and the page table must be locked, and MMU notifiers must have
+ * been called to invalidate any (exclusive) device mappings. In case of
+ * fork(), MMU_NOTIFY_PROTECTION_PAGE is triggered, and in case of a page
+ * fault MMU_NOTIFY_EXCLUSIVE is triggered.
+ *
+ * Locking the folio makes sure that anybody who just converted the PTE to
+ * a device-exclusive entry can map it into the device before unlocking it;
+ * so the folio lock prevents concurrent conversion to device-exclusive.
+ *
+ * TODO: the folio lock does not protect against all cases of concurrent
+ * page table modifications (e.g., MADV_DONTNEED, mprotect), so device drivers
+ * must already use MMU notifiers to sync against any concurrent changes.
+ * Maybe the requirement for the folio lock can be dropped in the future.
+ */


--
Cheers,

David / dhildenb



