On 14.01.21 at 06:34, Felix Kuehling wrote:
On 2021-01-11 at 11:29 a.m., Daniel Vetter wrote:
On Fri, Jan 08, 2021 at 12:56:24PM -0500, Felix Kuehling wrote:
On 2021-01-08 at 11:53 a.m., Daniel Vetter wrote:
On Fri, Jan 8, 2021 at 5:36 PM Felix Kuehling <felix.kuehling@xxxxxxx> wrote:
On 2021-01-08 at 11:06 a.m., Daniel Vetter wrote:
On Fri, Jan 8, 2021 at 4:58 PM Felix Kuehling <felix.kuehling@xxxxxxx> wrote:
On 2021-01-08 at 9:40 a.m., Daniel Vetter wrote:
On Thu, Jan 07, 2021 at 11:25:41AM -0500, Felix Kuehling wrote:
On 2021-01-07 at 4:23 a.m., Daniel Vetter wrote:
On Wed, Jan 06, 2021 at 10:00:52PM -0500, Felix Kuehling wrote:
This is the first version of our HMM-based shared virtual memory manager
for KFD. There are still a number of known issues that we're working through
(see below). This will likely lead to some pretty significant changes in
MMU notifier handling and locking on the migration code paths. So don't
get hung up on those details yet.
But I think this is a good time to start getting feedback. We're pretty
confident about the ioctl API, which is both simple and extensible for the
future. (see patches 4,16) The user mode side of the API can be found here:
https://github.com/RadeonOpenCompute/ROCT-Thunk-Interface/blob/fxkamd/hmm-wip/src/svm.c
I'd also like another pair of eyes on how we're interfacing with the GPU VM
code in amdgpu_vm.c (see patches 12,13), retry page fault handling (24,25),
and some retry IRQ handling changes (32).
Known issues:
* won't work with IOMMU enabled, we need to dma_map all pages properly (see the sketch after this list)
* still working on some race conditions and random bugs
* performance is not great yet
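For reference, a rough sketch (not from the patch series) of the kind of dma-mapping step that's missing for the IOMMU case, assuming the pages come out of hmm_range_fault(); svm_dma_map_pages() and its error handling are made up for illustration, only hmm_pfn_to_page()/dma_map_page()/dma_mapping_error() are the real kernel APIs:

#include <linux/dma-mapping.h>
#include <linux/hmm.h>

static int svm_dma_map_pages(struct device *dev, struct hmm_range *range,
			     dma_addr_t *dma_addrs, unsigned long npages)
{
	unsigned long i;

	for (i = 0; i < npages; i++) {
		/* hmm_range_fault() left one pfn per page in range->hmm_pfns */
		struct page *page = hmm_pfn_to_page(range->hmm_pfns[i]);

		/* Give the GPU an IOMMU-valid dma address, not a raw PA */
		dma_addrs[i] = dma_map_page(dev, page, 0, PAGE_SIZE,
					    DMA_BIDIRECTIONAL);
		if (dma_mapping_error(dev, dma_addrs[i]))
			return -EFAULT;	/* caller unmaps what was mapped */
	}
	return 0;
}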
Still catching up, but I think there's another one for your list:
* hmm gpu context preempt vs page fault handling. I've had a short
discussion about this one with Christian before the holidays, and also
some private chats with Jerome. It's nasty since there's no easy fix, much less
a good idea of what the best approach here would be.
Do you have a pointer to that discussion or any more details?
Essentially if you're handling an hmm page fault from the gpu, you can
deadlock by calling dma_fence_wait on (possibly a chain of) other command
submissions or compute contexts. That deadlocks if
you can't preempt while you have that page fault pending. Two solutions:
- your hw can (at least for compute ctx) preempt even when a page fault is
pending
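To make that deadlock concrete, a minimal made-up sketch; gpu_vm and evict_bo_for_range() are invented names, only dma_fence_wait() is the real API:

#include <linux/dma-fence.h>

/* The faulting wave occupies the hw; the eviction waits on the victim's
 * fence; the victim can only finish (or be preempted) once the faulting
 * ctx is off the hw. If the hw can't preempt with the fault pending, the
 * wait below never returns -> deadlock. */
static long gpu_handle_hmm_fault(struct gpu_vm *vm, u64 addr)
{
	/* Make room for the faulting range by evicting another BO. */
	struct dma_fence *fence = evict_bo_for_range(vm, addr);

	/* Never returns if nothing can get the faulting ctx off the hw. */
	return dma_fence_wait(fence, true);
}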
Our GFXv9 GPUs can do this. GFXv10 cannot.
Uh, why did your hw guys drop this :-/
Performance. It's the same reason why the XNACK mode selection API
exists (patch 16). When we enable recoverable page fault handling in the
compute units on GFXv9, it costs some performance even when no page
faults are happening. On GFXv10 that retry fault handling moved out of
the compute units, so they don't take the performance hit. But that
sacrificed the ability to preempt during page faults. We'll need to work
with our hardware teams to restore that capability in a future generation.
Ah yes, you need to stall at more points in the compute cores to make sure
you can recover if the page fault gets interrupted.
Maybe my knowledge is outdated, but my understanding is that nvidia can
also preempt (but only for compute jobs, since oh dear the pain this would
be for all the fixed function stuff). Since gfx10 moved page fault
handling further away from compute cores, do you know whether this now
means you can do page faults for (some?) fixed function stuff too? Or
still only for compute?
I'm not sure.
Supporting page faults for 3D would be a real pain with the corner we're
stuck in right now, but better we know about this early than later :-/
I know Christian hates the idea.
Well, I don't hate the idea. I just don't think that this will ever work
correctly and with good performance.
A big part of the additional fun is that we currently have a mix of HMM-capable
engines (3D, compute, DMA) and non-HMM-capable engines (display,
multimedia, etc.).
We know that page faults on GPUs can be
a huge performance drain because you're potentially stalling very many
threads, and the CPU can become a bottleneck dealing with all the page
faults from many GPU threads. On the compute side, applications will be
optimized to avoid them as much as possible, e.g. by pre-faulting or
pre-fetching data before it's needed.
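As a user-space illustration of that pre-fetching (not part of this patch series, just assuming a HIP-level view of managed/SVM memory), something along these lines migrates the pages before the kernel launch:

#include <hip/hip_runtime_api.h>

/* Migrate the pages to the GPU up front so the compute kernel doesn't
 * have to demand-fault them one at a time while it runs. */
int prefetch_before_launch(void *buf, size_t bytes, int device,
			   hipStream_t stream)
{
	if (hipMemPrefetchAsync(buf, bytes, device, stream) != hipSuccess)
		return -1;
	/* ... enqueue the compute kernel on the same stream ... */
	return 0;
}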
But I think you need page faults to make overcommitted memory with user
mode command submission not suck.
Yeah, completely agree.
The only short-term alternative I see is to have an ioctl telling the
kernel which memory is currently in use. And that is complete nonsense
because it kills the very advantage we want user mode command submission
for in the first place.
Regards,
Christian.
I do think it can be rescued with what I call gang scheduling of
engines: i.e. when a given engine (or a group of engines, depending on
how your hw works) is running a context that can cause a page fault, you
must flush out all workloads running on the same engine which could
block a dma_fence (preempt them or, for non-compute stuff, force their
completion). And the other way round: before you can run a legacy
gl workload with a dma_fence on these engines, you need to preempt all
ctxs that could cause page faults and at least take them out of the hw
scheduler queue.
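A rough sketch of that rule, with every type and helper invented purely for illustration (this is not the real scheduler code):

enum job_kind {
	JOB_DMA_FENCE,	/* legacy gl/vk work that signals a dma_fence */
	JOB_PAGE_FAULT,	/* user-mode queue / compute ctx that may fault */
};

struct job { enum job_kind kind; };
struct engine;

void engine_preempt_faulting_ctxs(struct engine *e);
void engine_flush_dma_fence_jobs(struct engine *e);
int engine_submit(struct engine *e, struct job *job);

int engine_run_job(struct engine *e, struct job *job)
{
	if (job->kind == JOB_DMA_FENCE) {
		/* Nothing that can page fault may share the engine (or its
		 * gang) with work that a dma_fence depends on. */
		engine_preempt_faulting_ctxs(e);
	} else {
		/* Everything a dma_fence_wait() could depend on must be
		 * flushed out before a fault-capable ctx may run. */
		engine_flush_dma_fence_jobs(e);
	}
	return engine_submit(e, job);
}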
Yuck! But yeah, that would work. A less invasive alternative would be to
reserve some compute units for graphics contexts so we can guarantee
forward progress for graphics contexts even when all CUs working on
compute stuff are stuck on page faults.
Won't this hurt compute workloads? I think we need something where at
least pure compute or pure gl/vk workloads run at full performance.
And without preempt we can't take anything back when we need it, so
we'd have to always reserve some cores upfront just in case.
Yes, it would hurt proportionally to how many CUs get reserved. On big
GPUs with many CUs the impact could be quite small.
Also, we could do the reservation only for the time when there's actually
a legacy context with normal dma_fence in the scheduler queue. Assuming
that reserving/unreserving CUs isn't too expensive an operation. If it's
as expensive as a full stall, it's probably not worth the complexity here;
just go with a full stall and only run one or the other at a time.
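A sketch of that dynamic variant, again with invented names (gpu_sched, reserve_cus_for_gfx(), etc.), just to pin down the idea:

#include <linux/types.h>

#define GFX_MIN_CUS 4	/* arbitrary example value */

struct gpu_sched {
	bool cus_reserved;
	/* ... */
};

bool sched_has_dma_fence_jobs(struct gpu_sched *sched);
void reserve_cus_for_gfx(struct gpu_sched *sched, unsigned int count);
void unreserve_cus_for_gfx(struct gpu_sched *sched);

void sched_update_cu_reservation(struct gpu_sched *sched)
{
	bool legacy_queued = sched_has_dma_fence_jobs(sched);

	if (legacy_queued && !sched->cus_reserved) {
		/* Carve out a few CUs so gfx can always make progress. */
		reserve_cus_for_gfx(sched, GFX_MIN_CUS);
		sched->cus_reserved = true;
	} else if (!legacy_queued && sched->cus_reserved) {
		/* No legacy work pending: give all CUs back to compute. */
		unreserve_cus_for_gfx(sched);
		sched->cus_reserved = false;
	}
}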
Wrt desktops I'm also somewhat worried that we might end up killing
desktop workloads if there aren't enough CUs reserved for these and they
end up taking too long and angering either tdr or, worse, the user, because
the desktop is unusable when you start a compute job and get a big pile of
faults. Probably needs some testing to see how bad it is.
That said, I'm not sure it'll work on our hardware. Our CUs can execute
multiple wavefronts from different contexts and switch between them with
fine granularity. I'd need to check with our HW engineers whether this
CU-internal context switching is still possible during page faults on
GFXv10.
You'd need to do the reservation for all contexts/engines which can cause
page faults, otherwise it'd leak.
All engines that can page fault and cannot be preempted during faults.
Regards,
Felix