On 2/23/2022 05:58, Tvrtko Ursulin wrote:
On 23/02/2022 02:45, John Harrison wrote:
On 2/22/2022 03:19, Tvrtko Ursulin wrote:
On 18/02/2022 21:33, John.C.Harrison@xxxxxxxxx wrote:
From: John Harrison <John.C.Harrison@xxxxxxxxx>
Compute workloads are inherantly not pre-emptible for long periods on
current hardware. As a workaround for this, the pre-emption timeout
for compute capable engines was disabled. This is undesirable with GuC
submission as it prevents per engine reset of hung contexts. Hence the
next patch will re-enable the timeout but bumped up by an order of
magnititude.
(Some typos above.)
I'm spotting 'inherently' but not anything else.
Magnititude! O;)
Doh!
[snip]
Whereas, bumping all heartbeat periods to be greater than the
pre-emption timeout is wasteful and unnecessary. That leads to a
total heartbeat time of about a minute. Which is a very long time to
wait for a hang to be detected and recovered. Especially when the
official limit on a context responding to an 'are you dead' query is
only 7.5 seconds.
Not sure how you got to one minute?
7.5 * 2 (to be safe) = 15. 15 * 5 (number of heartbeat periods) = 75 =>
1 minute 15 seconds
Even ignoring any safety factor and just going with 7.5 * 5 still gets
you to 37.5 seconds, which is over half a minute and likely to race.
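The arithmetic above can be sanity-checked with a trivial sketch (values taken from this thread; the helper name is illustrative, not i915 code):

```c
/* Total wait before a hang is declared: 5 heartbeat periods, with each
 * period stretched to the 7.5s 'are you dead' limit, optionally doubled
 * for safety. Pure arithmetic from the discussion, not i915 code. */
static unsigned int total_hang_time_ms(unsigned int period_ms,
				       unsigned int safety_factor)
{
	return period_ms * safety_factor * 5;	/* 5 heartbeat periods */
}
```

With the 2x safety factor this gives 75000ms (1m15s); without it, 37500ms.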
Regardless, the crux of the argument was to avoid GuC engine reset and
heartbeat reset racing with each other, and to do that by considering
the preempt timeout together with the heartbeat interval. I was thinking
about this scenario in this series:
[Please use fixed width font and no line wrap to view.]
A)
tP = preempt timeout
tH = heartbeat interval
tP = 3 * tH
1) Background load = I915_PRIORITY_DISPLAY
<-- [tH] --> Pulse1 <-- [tH] --> Pulse2 <-- [tH] --> Pulse3 <---- [2 * tH] ----> FULL RESET
                                                     |
                                                     \- preemption triggered, tP = 3 * tH ------\
                                                                                                 \-> preempt timeout would hit here
Here we have collateral damage due to the full reset, since we can't
tell GuC to reset just one engine and we fudged tP just to "account"
for heartbeats.
You are missing the whole point of the patch series which is that the
last heartbeat period is '2 * tP' not '2 * tH'.
+ longer = READ_ONCE(engine->props.preempt_timeout_ms) * 2;
By making the last period double the pre-emption timeout, it is
guaranteed that the FULL RESET stage cannot be hit before the hardware
has attempted and timed-out on at least one pre-emption.
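The rule being described can be sketched roughly as follows (a simplification with illustrative names, not the actual i915 heartbeat code):

```c
/* Sketch of the final-period rule from the patch: once the heartbeat
 * pulse has escalated to maximum priority, wait 2 * tP rather than tH
 * before declaring a full reset, guaranteeing that at least one
 * pre-emption attempt has timed out first. Names are illustrative. */
static unsigned long next_heartbeat_delay_ms(unsigned long tH_ms,
					     unsigned long tP_ms,
					     int pulse_at_max_prio)
{
	if (pulse_at_max_prio && tP_ms)
		return 2 * tP_ms;	/* last period: 2 * tP */
	return tH_ms;			/* normal periods: 1 * tH */
}
```

With tH = 2.5s and tP = 7.5s, the final period becomes 15s, so the hardware's pre-emption timeout always fires before the full reset.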
[snip]
<-- [tH] --> Pulse1 <-- [tH] --> Pulse2 <-- [tH] --> Pulse3 <---- [2 * tH] ----> full reset would be here
                                                     |
                                                     \- preemption triggered, tP = 3 * tH ----------------\
                                                                                                           \-> Preempt timeout reset
Here it is kind of least bad, but the question is why we fudged tP when
it gives us nothing good in this case.
The point of fudging tP(RCS) is to give compute workloads longer to
reach a pre-emptible point (given that EU walkers are basically not
pre-emptible). The reason for doing the fudge is not connected to the
heartbeat at all. The fact that it causes problems for the heartbeat is
an undesired side effect.
Note that the use of 'tP(RCS) = tH * 3' was just an arbitrary
calculation that gave us something that all interested parties were
vaguely happy with. It could just as easily be a fixed, hard coded value
of 7.5s but having it based on something configurable seemed more
sensible. The other option was 'tP(RCS) = tP * 12' but that felt more
arbitrary than basing it on the average heartbeat timeout. As in, three
heartbeat periods is about what a normal prio task gets before it gets
pre-empted by the heartbeat. So using that for general purpose
pre-emptions (e.g. time slicing between multiple user apps) seems
reasonable.
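As a sketch, the 'tP(RCS) = 3 * tH' fudge amounts to no more than this (constants taken from this thread, purely illustrative):

```c
/* The compute preempt-timeout bump discussed above: three average
 * heartbeat periods, which with the default 2.5s heartbeat lands on
 * the ~7.5s figure the interested parties were vaguely happy with. */
#define HEARTBEAT_INTERVAL_MS	2500	/* default tH */

static unsigned int compute_preempt_timeout_ms(void)
{
	return 3 * HEARTBEAT_INTERVAL_MS;	/* tP(RCS) = 3 * tH = 7.5s */
}
```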
B)
Instead, my idea to account for the preempt timeout when calculating
when to schedule the next heartbeat would look like this:
First of all, tP can be left at a large value unrelated to tH. Let's
say tP = 640ms and tH stays 2.5s.
640ms is not 'large'. The requirement is either zero (disabled) or in
the region of 7.5s. The 640ms figure is the default for non-compute
engines. Anything that can run EUs needs to be 'huge'.
1) Background load = I915_PRIORITY_DISPLAY
<-- [tH + tP] --> Pulse1 <-- [tH + tP] --> Pulse2 <-- [tH + tP] --> Pulse3 <-- [tH + tP] --> full reset would be here
Sure, this works, but each period is now 2.5 + 7.5 = 10s. The full five
periods therefore take 50s, which is practically a minute.
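The 50 second figure follows directly (an arithmetic sketch using the numbers above, not i915 code):

```c
/* Under proposal B each heartbeat period becomes tH + tP, so five
 * periods at tH = 2.5s and tP = 7.5s total 50s before a full reset. */
static unsigned int proposal_total_ms(unsigned int tH_ms,
				      unsigned int tP_ms,
				      unsigned int periods)
{
	return (tH_ms + tP_ms) * periods;
}
```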
[snip]
Am I missing some requirement, or do you see another problem with this
idea?
On a related topic, if GuC engine resets stop working when preempt
timeout is set to zero - I think we need to somehow let the user
know if they try to tweak it via sysfs. Perhaps go as far as -EINVAL
in GuC mode, if i915.reset has not explicitly disabled engine resets.
Define 'stops working'. The definition of the sysfs interface is that
a value of zero disables pre-emption. If you don't have pre-emption
and your hang detection mechanism relies on pre-emption then you
don't have a hang detection mechanism either. If the user really
wants to allow
By stops working I meant that it stops working. :)
With execlists one can disable the preempt timeout and "stopped
heartbeat" can still reset the stuck engine and so avoid collateral
damage. With GuC it appears this is not possible. So I was thinking
this is something worthy of a log notice.
their context to run forever and never be pre-empted then that means
they also don't want it to be reset arbitrarily. Which means they
would also be disabling the heartbeat timer as well. Indeed, this is
what we
I don't think so. The preempt timeout is already disabled on TGL/RCS
upstream but the heartbeat is not, and so hangcheck still works.
The pre-emption disable in upstream is not a valid solution for compute
customers. It is a worst-of-all-worlds hack for general usage. As noted
already, any actual compute specific customer is advised to disable all
forms of reset and do their hang detection manually. A slightly less
worse hack for customers that are not actually running long compute
workloads (i.e. the vast majority of end users) is to just use a long
pre-emption timeout.
advise compute customers to do. It is then up to the user themselves
to spot a hang and to manually kill (Ctrl+C, kill ###, etc.) their
task. Killing the CPU task will automatically clear up any GPU
resources allocated to that task (excepting context persistence,
which is a) broken and b) something we also tell compute customers to
disable).
What is broken with context persistence? I noticed one patch claiming
to be fixing something in that area which looked suspect. Has it been
established no userspace relies on it?
One major issue is that it has hooks into the execlist scheduler
backend. I forget the exact details right now. The implementation as a
whole is incredibly complex and convoluted :(. But there's stuff about
what happens when you disable the heartbeat after having closed a
persistent context's handle (and thus made it persisting). There are
also things like sending a super high priority heartbeat pulse at the
point of becoming persisting. That plays havoc on platforms with
dependent engines and/or compute workloads: a context becomes
persisting on RCS and results in your unrelated CCS work being reset.
It's a mess.
The comment from Daniel Vetter is that persistence should have no
connection to the heartbeat at all. All of that dynamic behaviour and
complexity should just be removed.
Persistence itself can stay. There are valid UMD use cases. It is just
massively over complicated and doesn't work in all corner cases when not
using execlist submission or on newer platforms. The simplification that
is planned is to allow contexts to persist until the associated DRM
master handle is closed. At that point, all contexts associated with
that DRM handle are killed. That is what AMD and others apparently
implement.
John.
Regards,
Tvrtko