On 18/02/2022 21:33, John.C.Harrison@xxxxxxxxx wrote:
> From: John Harrison <John.C.Harrison@xxxxxxxxx>
>
> Compute workloads are inherantly not pre-emptible for long periods on
> current hardware. As a workaround for this, the pre-emption timeout
> for compute capable engines was disabled. This is undesirable with GuC
> submission as it prevents per engine reset of hung contexts. Hence the
> next patch will re-enable the timeout but bumped up by an order of
> magnititude.
(Some typos above.)
> However, the heartbeat might not respect that. Depending upon current
> activity, a pre-emption to the heartbeat pulse might not even be
> attempted until the last heartbeat period. Which means that only one
It might not be attempted, but it could be if something is running at
lower priority. In that case I think special casing the last heartbeat
does not feel right, because it can end up resetting the engine before
that was intended.

For example, if the first heartbeat decides to preempt (the decision is
backend specific, it could be same priority plus timeslicing), and the
preempt timeout has been set to heartbeat interval * 3, then the 2nd
heartbeat gets queued up, then the 3rd, and so a reset is triggered
even before the first preempt timeout legitimately expires (or just as
it is about to take effect).
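To put rough numbers on it (assuming the default 2500ms heartbeat
interval, so a 7500ms preempt timeout in this scenario):

  0ms    - pulse 1 sent, lower priority work keeps running, a
           preemption is requested
  2500ms - no progress, pulse 2 queued at heartbeat priority
  5000ms - still no progress, pulse 3 queued at barrier priority
  7500ms - heartbeat gives up and resets the engine, at the same moment
           (or earlier, if the preemption was requested any later than
           pulse 1) that the 7500ms preempt timeout would have fired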
Instead, how about always considering the preempt timeout when
calculating when to emit the next heartbeat? The end result would be
similar to your patch in terms of avoiding the immediate problem,
although hang detection would take longer overall (but be more correct,
I think).
It also means that in the next patch you don't have to add coupling
between the preempt timeout and the heartbeat to intel_engine_setup;
just a sufficiently long preempt timeout would be needed. Granted, the
decoupling argument is not super strong, since the heartbeat code then
carries the coupling instead, but that still feels better to me. (We
can say heartbeats only make sense on loaded engines, so things like
the preempt timeout can legitimately be considered from there.)
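For illustration, something along these lines is what I have in mind
(an untested sketch only, reusing the existing engine->props fields;
the rest of next_heartbeat() would stay exactly as it is today):

static bool next_heartbeat(struct intel_engine_cs *engine)
{
	long delay, timeout;

	delay = READ_ONCE(engine->props.heartbeat_interval_ms);
	if (!delay)
		return false; /* Heartbeat disabled, leave it disabled. */

	/*
	 * Always leave enough time for a pre-emption to complete before
	 * the next pulse escalates, rather than special casing only the
	 * final barrier priority pulse.
	 */
	timeout = READ_ONCE(engine->props.preempt_timeout_ms);
	if (timeout > delay)
		delay = timeout;

	/* ... schedule the next pulse exactly as the current code does ... */
	return true;
}

Checking the disabled case first also avoids accidentally turning
heartbeats back on just because a preempt timeout happens to be set.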
Incidentally, that would be similar to a patch which Chris had a year
ago (https://patchwork.freedesktop.org/patch/419783/?series=86841&rev=1)
to fix some CI issue.
On a related topic, if GuC engine resets stop working when the preempt
timeout is set to zero, then I think we need to somehow let the user
know when they try to tweak it via sysfs. Perhaps go as far as
returning -EINVAL in GuC mode, if i915.reset has not explicitly
disabled engine resets.
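Something in this direction is what I mean, in the preempt timeout
store of gt/sysfs_engines.c (a hypothetical sketch only, going from
memory of that file; the i915.reset check would sit alongside and the
existing range checking stays as is):

static ssize_t
preempt_timeout_store(struct kobject *kobj, struct kobj_attribute *attr,
		      const char *buf, size_t count)
{
	struct intel_engine_cs *engine = kobj_to_engine(kobj);
	unsigned long long timeout;
	int err;

	err = kstrtoull(buf, 0, &timeout);
	if (err)
		return err;

	/*
	 * With GuC submission a zero preempt timeout means the engine can
	 * no longer be reset individually, so refuse the value rather than
	 * silently losing per engine resets.
	 */
	if (!timeout && intel_engine_uses_guc(engine))
		return -EINVAL;

	/* ... existing range checking and update of the property ... */
	return count;
}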
Regards,
Tvrtko
> period is granted for the pre-emption to occur. With the aforesaid
> bump, the pre-emption timeout could be significantly larger than this
> heartbeat period.
>
> So adjust the heartbeat code to take the pre-emption timeout into
> account. When it reaches the final (high priority) period, it now
> ensures the delay before hitting reset is bigger than the pre-emption
> timeout.
>
> Signed-off-by: John Harrison <John.C.Harrison@xxxxxxxxx>
> ---
>  drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c | 16 ++++++++++++++++
>  1 file changed, 16 insertions(+)
> diff --git a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
> index a3698f611f45..72a82a6085e0 100644
> --- a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
> +++ b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
> @@ -22,9 +22,25 @@
>  static bool next_heartbeat(struct intel_engine_cs *engine)
>  {
> +	struct i915_request *rq;
>  	long delay;
>  
>  	delay = READ_ONCE(engine->props.heartbeat_interval_ms);
> +
> +	rq = engine->heartbeat.systole;
> +	if (rq && rq->sched.attr.priority >= I915_PRIORITY_BARRIER) {
> +		long longer;
> +
> +		/*
> +		 * The final try is at the highest priority possible. Up until now
> +		 * a pre-emption might not even have been attempted. So make sure
> +		 * this last attempt allows enough time for a pre-emption to occur.
> +		 */
> +		longer = READ_ONCE(engine->props.preempt_timeout_ms) * 2;
> +		if (longer > delay)
> +			delay = longer;
> +	}
> +
>  	if (!delay)
>  		return false;