On 13.01.21 at 17:56, Jerome Glisse wrote:
On Fri, Jan 08, 2021 at 03:40:07PM +0100, Daniel Vetter wrote:
On Thu, Jan 07, 2021 at 11:25:41AM -0500, Felix Kuehling wrote:
On 2021-01-07 at 4:23 a.m., Daniel Vetter wrote:
On Wed, Jan 06, 2021 at 10:00:52PM -0500, Felix Kuehling wrote:
This is the first version of our HMM-based shared virtual memory manager
for KFD. There are still a number of known issues that we're working through
(see below). This will likely lead to some pretty significant changes in
MMU notifier handling and locking on the migration code paths, so don't
get hung up on those details yet.
But I think this is a good time to start getting feedback. We're pretty
confident about the ioctl API, which is both simple and extensible for the
future. (see patches 4,16) The user mode side of the API can be found here:
https://github.com/RadeonOpenCompute/ROCT-Thunk-Interface/blob/fxkamd/hmm-wip/src/svm.c
I'd also like another pair of eyes on how we're interfacing with the GPU VM
code in amdgpu_vm.c (see patches 12,13), retry page fault handling (24,25),
and some retry IRQ handling changes (32).
Known issues:
* won't work with IOMMU enabled, we need to dma_map all pages properly
* still working on some race conditions and random bugs
* performance is not great yet
Still catching up, but I think there's another one for your list:
* HMM GPU context preempt vs. page fault handling. I've had a short
discussion about this one with Christian before the holidays, and also
some private chats with Jerome. It's nasty since there's no easy fix, much
less a good idea of what the best approach here would be.
Do you have a pointer to that discussion or any more details?
Essentially, if you're handling an HMM page fault from the GPU, you can
deadlock by calling dma_fence_wait on (possibly a chain of) other command
submissions or compute contexts. That deadlocks if you can't preempt
while you have that page fault pending. Two solutions:
- your hw can (at least for compute ctx) preempt even when a page fault is
pending
- lots of screaming in trying to come up with an alternate solution. They
all suck.
Note that the dma_fence_wait is a hard requirement, because we need it for
MMU notifiers and shrinkers; disallowing it would disable dynamic memory
management. That is the current "ttm is self-limited to 50% of system
memory" limitation Christian is trying to lift. So that's really not
a restriction we can lift, at least not in upstream, where we also need to
support old-style hardware which doesn't have page fault support and
really has no other option to handle memory management than
dma_fence_wait.
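
To make the cycle concrete, here is a rough sketch of the classic userptr
invalidate pattern being referred to. This is illustration only, not actual
driver code: struct my_userptr_bo and its fields are made-up names.

#include <linux/dma-fence.h>
#include <linux/mmu_notifier.h>

struct my_userptr_bo {				/* hypothetical driver object */
	struct mmu_interval_notifier notifier;
	struct dma_fence *last_submission_fence;
};

static bool userptr_invalidate(struct mmu_interval_notifier *mni,
			       const struct mmu_notifier_range *range,
			       unsigned long cur_seq)
{
	struct my_userptr_bo *bo =
		container_of(mni, struct my_userptr_bo, notifier);

	mmu_interval_set_seq(mni, cur_seq);

	/*
	 * Classic pattern for hardware without page faults: wait for the
	 * GPU to finish with the pages before the core mm unmaps them.
	 *
	 * With HMM page faults this can deadlock: the fence only signals
	 * once the context finishes, the context is stalled on a page
	 * fault, and servicing that fault needs the mm that is currently
	 * blocked here in the notifier.
	 */
	dma_fence_wait(bo->last_submission_fence, false);

	return true;
}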
Thread was here:
https://lore.kernel.org/dri-devel/CAKMK7uGgoeF8LmFBwWh5mW1k4xWjuUh3hdSFpVH1NBM7K0=edA@xxxxxxxxxxxxxx/
There's a few ways to resolve this (without having preempt-capable
hardware), but they're all supremely nasty.
-Daniel
I had a new idea; I wanted to think about it some more but haven't yet,
so anyway here it is. Add a new callback to dma_fence which asks the
question: can it deadlock? Any time a GPU driver has a pending page
fault (i.e. something calling into the mm), it answers yes, otherwise
no. The GPU shrinker would ask the question before waiting on any
dma_fence and back off if it gets a yes. The shrinker can still try many
dma-buf objects for which it does not get a yes on the associated fence.
This does not solve the MMU notifier case; for that you would just
invalidate the GEM userptr object (with a flag, but not releasing the
page refcount), but you would not wait for the GPU (i.e. no dma_fence
wait in that code path anymore). The userptr API never really made
the contract that it will always be in sync with the mm's view of the
world, so if different pages get remapped to the same virtual address
while the GPU is still working with the old pages, it should not be an
issue (it would not be in our usage of userptr for compositors and
whatnot).
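
For illustration, a hedged sketch of what that could look like. The
->may_deadlock hook and everything prefixed drv_ are hypothetical; no
such member exists in struct dma_fence_ops upstream.

#include <linux/dma-fence.h>
#include <linux/list.h>

struct drv_bo {				/* hypothetical driver buffer object */
	struct list_head lru;
	struct dma_fence *fence;	/* last GPU work using this BO */
};

/*
 * Imagined new dma_fence_ops member:
 *	bool (*may_deadlock)(struct dma_fence *fence);
 * A driver answers true while the fence's context has a page fault
 * pending, i.e. while signaling may depend on the core mm making progress.
 */
static bool fence_may_deadlock(struct dma_fence *fence)
{
	return fence->ops->may_deadlock && fence->ops->may_deadlock(fence);
}

static unsigned long drv_shrink_lru(struct list_head *lru)
{
	struct drv_bo *bo;
	unsigned long freed = 0;

	list_for_each_entry(bo, lru, lru) {
		/* Back off rather than risk the deadlock; other BOs on
		 * the list may still answer "no". */
		if (fence_may_deadlock(bo->fence))
			continue;

		dma_fence_wait(bo->fence, false);
		freed++;		/* eviction of bo would happen here */
	}
	return freed;
}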
The current working idea in my mind goes in a similar direction.
But instead of a callback, I'm adding a completely new class of HMM fences.
Waiting in the MMU notifier, scheduler, TTM etc. is only allowed for
the dma_fences, and HMM fences are ignored in container objects.
When you handle an implicit or explicit synchronization request from
userspace, you need to block for HMM fences to complete before taking any
resource locks.
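
Roughly, and purely as an illustration of that split (the
DMA_FENCE_FLAG_HMM_BIT flag and the helpers below are invented for this
sketch, they are not existing API):

#include <linux/dma-fence.h>

#define DMA_FENCE_FLAG_HMM_BIT	DMA_FENCE_FLAG_USER_BITS   /* hypothetical */

static bool dma_fence_is_hmm(struct dma_fence *fence)
{
	return test_bit(DMA_FENCE_FLAG_HMM_BIT, &fence->flags);
}

/* Core paths (MMU notifier, shrinker, TTM eviction): only ever wait on
 * ordinary dma_fences; HMM fences are skipped when walking containers. */
static void core_wait_fence(struct dma_fence *fence)
{
	if (dma_fence_is_hmm(fence))
		return;			/* ignored, never blocks the mm */
	dma_fence_wait(fence, false);
}

/* Userspace-visible sync (implicit or explicit): flush HMM fences first,
 * before any reservation lock is taken, so the two fence classes never
 * nest the wrong way around. */
static int ioctl_submit(struct dma_fence *pending_hmm_fence)
{
	if (pending_hmm_fence) {
		long ret = dma_fence_wait(pending_hmm_fence, true);

		if (ret)
			return ret;
	}

	/* ... now safe to take resource locks and publish dma_fences ... */
	return 0;
}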
Regards,
Christian.
Maybe I'm overlooking something there.
Cheers,
Jérôme