On 7/21/20 11:50 AM, Daniel Vetter wrote:
On Tue, Jul 21, 2020 at 11:38 AM Thomas Hellström (Intel) <thomas_os@xxxxxxxxxxxx> wrote:
On 7/21/20 10:55 AM, Christian König wrote:
On 21.07.20 at 10:47, Thomas Hellström (Intel) wrote:
On 7/21/20 9:45 AM, Christian König wrote:
On 21.07.20 at 09:41, Daniel Vetter wrote:
On Mon, Jul 20, 2020 at 01:15:17PM +0200, Thomas Hellström (Intel) wrote:
Hi,
On 7/9/20 2:33 PM, Daniel Vetter wrote:
Comes up every few years, gets somewhat tedious to discuss, let's write this down once and for all.

What I'm not sure about is whether the text should be more explicit in flat out mandating the amdkfd eviction fences for long running compute workloads or workloads where userspace fencing is allowed.
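For readers who haven't looked at amdkfd, the eviction-fence model referenced above works roughly as in the sketch below. The xkfd_ names and the bare process pointer are made up for illustration (this is not the actual amdkfd code); only the dma_fence calls are real kernel API. The key property is that the kernel only ever waits on this one fence, which signals after queue preemption, while completion of the compute work itself is tracked purely in userspace.

/*
 * Illustrative sketch only: the xkfd_ prefix is made up, this is not the
 * real amdkfd code. A long-running compute context owns one eviction
 * fence which gets attached to its buffers. The kernel never waits for
 * the compute work itself to finish; it only ever waits on this fence,
 * which the driver signals after preempting the user queues.
 */
#include <linux/dma-fence.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct xkfd_evict_fence {
        struct dma_fence base;
        spinlock_t lock;
        void *process;          /* owning compute context (placeholder) */
};

static const char *xkfd_evict_get_driver_name(struct dma_fence *f)
{
        return "xkfd";
}

static const char *xkfd_evict_get_timeline_name(struct dma_fence *f)
{
        return "xkfd-eviction";
}

static bool xkfd_evict_enable_signaling(struct dma_fence *f)
{
        /*
         * Someone (eviction, shrinker, ...) started waiting on the fence.
         * Kick queue preemption here; once the queues are off the hardware
         * the driver calls dma_fence_signal() on this fence. Crucially,
         * signaling depends only on preemption, never on userspace.
         */
        return true;
}

static const struct dma_fence_ops xkfd_evict_fence_ops = {
        .get_driver_name = xkfd_evict_get_driver_name,
        .get_timeline_name = xkfd_evict_get_timeline_name,
        .enable_signaling = xkfd_evict_enable_signaling,
};

static struct xkfd_evict_fence *xkfd_evict_fence_create(void *process)
{
        struct xkfd_evict_fence *fence = kzalloc(sizeof(*fence), GFP_KERNEL);

        if (!fence)
                return NULL;

        fence->process = process;
        spin_lock_init(&fence->lock);
        dma_fence_init(&fence->base, &xkfd_evict_fence_ops, &fence->lock,
                       dma_fence_context_alloc(1), 1);
        return fence;
}

Because signaling depends only on preemption, the memory manager can safely wait on such a fence even for otherwise indefinite workloads.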
Although (in my humble opinion) it might be possible to completely untangle kernel-introduced fences for resource management from dma-fences used for completion- and dependency tracking, and to lift a lot of the restrictions on dma-fences, including the prohibition of infinite ones, I think this makes sense as a description of the current state.
Yeah I think a future patch needs to type up how we want to make that happen (for some cross-driver consistency) and what needs to be considered. Some of the necessary parts are already there (with the preemption fences amdkfd has as an example), but I think some clear docs on what's required from both hw, drivers and userspace would be really good.
I'm currently writing that up, but probably still need a few days for this.
Great! I put down some (very) initial thoughts a couple of weeks ago, building on eviction fences for various hardware complexity levels, here:
https://gitlab.freedesktop.org/thomash/docs/-/blob/master/Untangling%20dma-fence%20and%20memory%20allocation.odt
I don't think that this will ever be possible.

See, what Daniel describes in his text is that indefinite fences are a bad idea for memory management, and I think that this is an established fact. In other words, the whole concept of submitting work to the kernel which depends on some user-space interaction doesn't work and never will.
Well, the idea here is that memory management will *never* depend on indefinite fences: as soon as someone waits on a memory manager fence (be it eviction, shrinker or mmu notifier) it breaks out of any dma-fence dependencies and/or user-space interaction. The text tries to describe what's required to be able to do that (save for non-preemptible gpus where someone submits a forever-running shader).
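To make that "breaks out" concrete, here is a rough sketch of such an eviction path. The my_ctx type and my_ctx_* helpers are hypothetical (no driver exposes them under these names); only dma_fence_wait() is real kernel API. What matters is that nothing in this path waits on a job-completion fence or on anything userspace controls.

/*
 * Hypothetical eviction path; my_ctx and the my_ctx_* helpers do not
 * exist anywhere, only dma_fence_wait() is real kernel API.
 */
#include <linux/dma-fence.h>

struct my_ctx {
        struct dma_fence *evict_fence;  /* eviction fence as sketched earlier */
};

void my_ctx_preempt(struct my_ctx *ctx);        /* kick hw queue preemption */
void my_ctx_unmap_buffers(struct my_ctx *ctx);  /* tear down GPU mappings */

static long my_evict(struct my_ctx *ctx)
{
        long ret;

        /* Ask the hardware to preempt the context. */
        my_ctx_preempt(ctx);

        /*
         * The eviction fence signals once the queues are off the hardware,
         * so this wait is bounded by preemption latency only; it never
         * reaches into the job's own dependency chain.
         */
        ret = dma_fence_wait(ctx->evict_fence, false);
        if (ret)
                return ret;

        /* Now the mappings can be torn down and the memory reclaimed. */
        my_ctx_unmap_buffers(ctx);
        return 0;
}

The same shape works for a shrinker or an mmu notifier: wait on the memory-manager fence, never on the batch.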
Yeah I think that part of your text is good to describe how to
untangle memory fences from synchronization fences given how much the
hw can do.
So while I think this is possible (until someone comes up with a case
where it wouldn't work of course), I guess Daniel has a point in that it
won't happen because of inertia and there might be better options.
Yeah it's just I don't see much chance for splitting dma-fence itself.
That's also why I'm not positive on the "no hw preemption, only
scheduler" case: You still have a dma_fence for the batch itself,
which means still no userspace controlled synchronization or other
form of indefinite batches allowed. So not getting us any closer to
enabling the compute use cases people want.
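A sketch of why that batch fence bites: a typical memory-management callback, here an MMU interval notifier, ends up doing dma_fence_wait() on it, so the fence must be guaranteed to signal without userspace interaction. The my_bo type and my_bo_* names are invented for the example; the notifier and fence calls are real kernel API, with locking and error handling omitted.

/*
 * Why the batch fence still matters: a typical reclaim path such as an
 * MMU interval notifier ends up waiting on it. my_bo and the my_bo_*
 * names are invented; the notifier and fence calls are real kernel API.
 */
#include <linux/kernel.h>
#include <linux/dma-fence.h>
#include <linux/mmu_notifier.h>

struct my_bo {
        struct mmu_interval_notifier notifier;
        struct dma_fence *batch_fence;  /* fence of the last batch using this BO */
};

static bool my_bo_invalidate(struct mmu_interval_notifier *mni,
                             const struct mmu_notifier_range *range,
                             unsigned long cur_seq)
{
        struct my_bo *bo = container_of(mni, struct my_bo, notifier);

        if (!mmu_notifier_range_blockable(range))
                return false;

        mmu_interval_set_seq(mni, cur_seq);

        /*
         * Core mm is reclaiming and must wait for the batch. If the batch
         * itself waits on a userspace-signalled condition, this wait is
         * indefinite and reclaim can deadlock -- which is exactly why such
         * batches must not sit behind a dma_fence.
         */
        dma_fence_wait(bo->batch_fence, false);
        return true;
}

static const struct mmu_interval_notifier_ops my_bo_notifier_ops = {
        .invalidate = my_bo_invalidate,
};

With the amdkfd-style model sketched earlier there simply is no batch_fence to wait on here, only the eviction fence.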
Yes, we can't do magic. As soon as an indefinite batch makes it to such
hardware we've lost. But since we can break out while the batch is stuck
in the scheduler waiting, what I believe we *can* do with this approach
is to avoid deadlocks due to locally unknown dependencies, which has
some bearing on this documentation patch, and also to allow memory
allocation in dma-fence (not memory-fence) critical sections, like gpu
fault- and error handlers without resorting to using memory pools.
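A very rough sketch of what that "break out while stuck in the scheduler" could look like. All helpers here (my_job, my_job_started(), my_job_withdraw(), my_job_requeue_after_revalidation()) are made up for illustration and are not an existing drm_sched interface; only dma_fence_wait() is real. If the job hasn't reached the hardware yet, the memory manager never has to wait on the job's possibly unknown dependencies at all.

/*
 * Made-up helpers throughout; this is not an existing drm_sched
 * interface. The point: a job still sitting in the scheduler queue can
 * be withdrawn instead of waited for.
 */
#include <linux/dma-fence.h>

struct my_job {
        struct dma_fence *hw_fence;     /* signals when the hardware is done */
};

bool my_job_started(struct my_job *job);        /* reached the hw yet? */
void my_job_withdraw(struct my_job *job);       /* pull it off the queue */
void my_job_requeue_after_revalidation(struct my_job *job);

static long my_evict_for_job(struct my_job *job)
{
        if (!my_job_started(job)) {
                /*
                 * Still queued, possibly waiting on dependencies we know
                 * nothing about. Withdraw it, take the memory back, and
                 * requeue it once its buffers have been revalidated. No
                 * fence wait needed at all.
                 */
                my_job_withdraw(job);
                my_job_requeue_after_revalidation(job);
                return 0;
        }

        /*
         * Already running on the hardware: fall back to waiting for the
         * hardware fence, which by construction depends only on the GPU
         * making progress (or on preemption), never on userspace.
         */
        return dma_fence_wait(job->hw_fence, false);
}

Whether that extra scheduler bookkeeping is worth it, versus simply forbidding such batches as the documentation patch does, is the open question here.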
But again, I'm not saying we should actually implement this. Better to consider it and reject it than not consider it at all.
/Thomas