Hi Daniel,
could you help me explain to Christoph why this doesn't work?
We have gone over this multiple times in the past month, and I'm really
surprised that anybody is still trying this approach.
Thanks,
Christian.
On 26.06.20 at 10:54, Christian König wrote:
On 26.06.20 at 10:10, Chris Wilson wrote:
Quoting Chris Wilson (2020-06-25 18:42:41)
Quoting Christian König (2020-06-25 16:47:09)
On 25.06.20 at 17:10, Chris Wilson wrote:
We have the DAG of fences; we can use that information to avoid adding an
implicit coupling between execution contexts.
No, we can't. And it sounds like you still have not understood the
underlying problem.
See, this has nothing to do with the fences themselves or their DAG. When
you depend on userspace to do another submission so that your fence can
start processing, you end up depending on whatever userspace does.
HW dependency on userspace is explicit in the ABI and client APIs, and in
the direct control userspace has over the HW.
This in turn means that when userspace makes a system call (or takes a
page fault), it is possible that this ends up in the reclaim code path.
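For concreteness, a minimal hypothetical sketch of how an ordinary system
call made on behalf of userspace can drop into direct reclaim. All the
driver names here are made up; only kmalloc()/GFP_KERNEL and
copy_from_user() are real kernel interfaces:

    /* Hypothetical ioctl helper, for illustration only. */
    #include <linux/slab.h>
    #include <linux/uaccess.h>

    static long my_drv_copy_args(void __user *uptr, size_t len)
    {
            void *tmp;

            /*
             * GFP_KERNEL allows the allocator to enter direct reclaim,
             * which walks every registered shrinker -- potentially one
             * that does the dma_fence_wait() discussed here.
             */
            tmp = kmalloc(len, GFP_KERNEL);
            if (!tmp)
                    return -ENOMEM;

            /* The page fault taken here can end up in reclaim as well. */
            if (copy_from_user(tmp, uptr, len)) {
                    kfree(tmp);
                    return -EFAULT;
            }

            kfree(tmp);
            return 0;
    }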
We have both said the very same thing.
Then I'm really wondering why you don't come to the same conclusion :)
And while we want to avoid it, both Daniel and I have already discussed
this multiple times, and we agree that it is still a must-have to be able
to do fence waits in the reclaim code path.
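As a rough sketch of what such a wait in the reclaim path can look like
(the driver structures and helpers are made up, and list locking is
omitted; struct shrinker, struct shrink_control, dma_fence_wait() and
SHRINK_STOP are the real kernel interfaces):

    #include <linux/dma-fence.h>
    #include <linux/list.h>
    #include <linux/shrinker.h>

    struct my_bo {
            struct list_head  link;
            struct dma_fence  *fence;    /* last GPU use of the pages */
            unsigned long     nr_pages;
    };

    static LIST_HEAD(my_bo_list);
    static unsigned long my_bo_release_pages(struct my_bo *bo); /* made up */

    static unsigned long my_shrinker_scan(struct shrinker *shrinker,
                                          struct shrink_control *sc)
    {
            struct my_bo *bo;
            unsigned long freed = 0;

            list_for_each_entry(bo, &my_bo_list, link) {
                    /*
                     * The controversial part: the task in direct reclaim
                     * (which can be *any* task that happened to allocate
                     * memory) now blocks until the GPU work signals.
                     */
                    if (dma_fence_wait(bo->fence, false))
                            continue;

                    freed += my_bo_release_pages(bo);
                    if (freed >= sc->nr_to_scan)
                            break;
            }

            return freed ? freed : SHRINK_STOP;
    }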
But we came to the opposite conclusion. For doing that wait harms the
unrelated caller, and the reclaim is opportunistic. There is no need for
that caller to reclaim that page when it can have any other. Why did you
even choose that page to reclaim? Inducing latency in the caller is a bug,
has been reported previously as a bug, and is still considered a bug.
[But at the end of the day, if the system is out of memory, then you have
to pick a victim.]
Correct. But this is also not limited to the reclaim path, as any kernel
system call or page fault can cause a problem as well.
In other words, "fence -> userspace -> page fault -> fence" or "fence ->
userspace -> system call -> fence" can easily cause the same problem, and
that is not avoidable.
An example:

    Thread A                       Thread B

    submit(VkCmdWaitEvents)
    recvfrom(ThreadB)      ...     sendto(ThreadB)
                                   \- alloc_page
                                    \- direct reclaim
                                     \- dma_fence_wait(A)
    VkSetEvent()
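To make the ordering explicit, here is a compilable skeleton of the two CPU
threads above; the Vulkan calls and the kernel-side reclaim are only
comments, nothing GPU-related is actually modelled:

    /* Build: cc -pthread example.c */
    #include <pthread.h>
    #include <stddef.h>
    #include <sys/socket.h>

    static int sv[2];  /* sv[0]: Thread A's end, sv[1]: Thread B's end */

    static void *thread_a(void *arg)
    {
            char buf[64];
            (void)arg;

            /* submit(VkCmdWaitEvents): the GPU job now waits for
             * VkSetEvent() before it can signal its dma_fence "A". */

            /* Block until Thread B answers. */
            recv(sv[0], buf, sizeof(buf), 0);

            /* VkSetEvent(): only now can fence A make progress. */
            return NULL;
    }

    static void *thread_b(void *arg)
    {
            (void)arg;

            /* Any allocation inside this syscall may enter direct
             * reclaim; if reclaim does dma_fence_wait(A), it blocks
             * forever, because VkSetEvent() above is never reached. */
            send(sv[1], "ping", 4, 0);
            return NULL;
    }

    int main(void)
    {
            pthread_t a, b;

            if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv))
                    return 1;
            pthread_create(&a, NULL, thread_a, NULL);
            pthread_create(&b, NULL, thread_b, NULL);
            pthread_join(a, NULL);
            pthread_join(b, NULL);
            return 0;
    }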
Regardless of that actual deadlock, waiting on an arbitrary fence incurs
an unbounded latency which is unacceptable for direct reclaim.
Online debugging can indefinitely suspend fence signaling, and the only
guarantee we make of forward progress, in some cases, is process
termination.
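One way to keep reclaim opportunistic in that sense is to only touch
buffers whose fences have already signalled and skip the busy ones. Reusing
the made-up structures from the sketch above, and with only
dma_fence_is_signaled() being a real interface, the non-blocking variant
would look roughly like:

    static unsigned long
    my_shrinker_scan_nonblocking(struct shrinker *shrinker,
                                 struct shrink_control *sc)
    {
            struct my_bo *bo;
            unsigned long freed = 0;

            list_for_each_entry(bo, &my_bo_list, link) {
                    /* Never block: skip objects the GPU is still using
                     * and let reclaim pick some other page instead. */
                    if (!dma_fence_is_signaled(bo->fence))
                            continue;

                    freed += my_bo_release_pages(bo);
                    if (freed >= sc->nr_to_scan)
                            break;
            }

            return freed ? freed : SHRINK_STOP;
    }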
And exactly that is what doesn't work. You don't have any forward progress
any more because you ran into a software deadlock.
In other words, the signaling of a fence depends on the well-being of
userspace. You can try to kill userspace, but killing it can itself end up
waiting for the very fence you are trying to signal in the first place.
See, the difference from a deadlock on the GPU is that you can always kill
a running job or process, even if it is stuck on something else. But if
the kernel is deadlocked with itself, you can't kill the process any more;
the only option left to get cleanly out of this is to reboot.
The only way to avoid this would be to never ever wait for the fence in
the kernel, and then your whole construct is not useful any more.
I'm running out of ideas for how to explain what the problem is here...
Regards,
Christian.
-Chris