On 02/01/2018 15:12, Chris Wilson wrote:
We have a hole in our busy-stat accounting: if the pmu is enabled during
a long running batch, the pmu will not start accumulating busy-time
until the next context switch. This then fails tests that only
sample a single batch.
Oops, did something in the recent perf_pmu rework uncover this?
Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
Cc: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
---
drivers/gpu/drm/i915/intel_engine_cs.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c b/drivers/gpu/drm/i915/intel_engine_cs.c
index ebdcbcbacb3c..c2f01ff96ce1 100644
--- a/drivers/gpu/drm/i915/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/intel_engine_cs.c
@@ -1946,8 +1946,15 @@ int intel_enable_engine_stats(struct intel_engine_cs *engine)
spin_lock_irqsave(&engine->stats.lock, flags);
if (engine->stats.enabled == ~0)
goto busy;
- if (engine->stats.enabled++ == 0)
+ if (engine->stats.enabled++ == 0) {
engine->stats.enabled_at = ktime_get();
+
+ /* XXX submission method oblivious */
You mean once busy stats support for the GuC gets added?
+ engine->stats.active = port_count(&engine->execlists.port[1]);
+ engine->stats.active += port_count(&engine->execlists.port[0]);
Hm, wouldn't this have the potential to over-account the active counter,
resulting in a permanent 100%? port_count() for a single port is in
[0, 2], where 2 means re-submission to port 0, while
engine->stats.active is in [0, 2], where that 2 is the number of busy
ports. In other words, I think the correct method would be:

	if (port_count(&engine->execlists.port[0]))
		engine->stats.active++;
	if (port_count(&engine->execlists.port[1]))
		engine->stats.active++;
I have yet to think about how to handle the GuC for this hole.
Regards,
Tvrtko
+ if (engine->stats.active)
+ engine->stats.start = engine->stats.enabled_at;
+ }
spin_unlock_irqrestore(&engine->stats.lock, flags);
return 0;
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx