Re: [RFC 1/5] drm/i915: Track per-context engine busyness

Quoting Tvrtko Ursulin (2018-02-15 09:29:57)
> 
> On 14/02/2018 19:07, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2018-02-14 18:50:31)
> >> +ktime_t intel_context_engine_get_busy_time(struct i915_gem_context *ctx,
> >> +                                          struct intel_engine_cs *engine)
> >> +{
> >> +       struct intel_context *ce = &ctx->engine[engine->id];
> >> +       ktime_t total;
> >> +
> >> +       spin_lock_irq(&ce->stats.lock);
> >> +
> >> +       total = ce->stats.total;
> >> +
> >> +       if (ce->stats.active)
> >> +               total = ktime_add(total,
> >> +                                 ktime_sub(ktime_get(), ce->stats.start));
> >> +
> >> +       spin_unlock_irq(&ce->stats.lock);
> > 
> > Looks like we can just use a seqlock here.
> 
> Hm, you may have suggested this before? Even for whole engine stats.
> 
> I think it could yes, with the benefit of not delaying writers (execlist 
> processing) in presence of readers. But since the code is so writer 
> heavy, and readers so infrequent and light weight, I wouldn't think we 
> are in any danger of writer starvation, or even injecting any relevant 
> latencies into command submission.
> 
> Also, could we get into reader live-lock situations under heavy 
> interrupts? Probably not, since the reader section is again so 
> lightweight compared to the rest of the work the code would have to do 
> to trigger it.

In that scenario, I heavily favour the writer. They are responding to
the interrupt and latency in the writer means a potential GPU stall.

irq -> submission latency being the bane of my existence atm.
-Chris
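
[Editor's note: the seqlock variant discussed above would let the execlists
writer proceed without ever blocking on a reader, at the cost of readers
retrying if they race a write. A minimal userspace sketch of that pattern
follows; it is illustrative only, not the actual i915 patch. The struct
ctx_stats layout, the nanosecond timestamps, and the function names are
assumptions standing in for ce->stats, ktime_t, and the kernel's
write_seqlock()/read_seqbegin()/read_seqretry() primitives, which a real
kernel version would use instead of the hand-rolled sequence counter.]

```c
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical per-context stats, mirroring ce->stats from the patch. */
struct ctx_stats {
	atomic_uint seq;  /* sequence counter: odd while a write is in flight */
	uint64_t total;   /* accumulated busy time, ns */
	uint64_t start;   /* timestamp of current activity, ns (valid if active) */
	int active;
};

/* Writer (execlists path): bump seq to odd, update, bump back to even.
 * Never blocks on readers, so submission latency is unaffected.
 * Kernel equivalent: write_seqlock() / write_sequnlock(). */
static void stats_update(struct ctx_stats *s, uint64_t now, int active)
{
	atomic_fetch_add_explicit(&s->seq, 1, memory_order_release); /* -> odd */
	if (s->active)
		s->total += now - s->start;
	s->start = now;
	s->active = active;
	atomic_fetch_add_explicit(&s->seq, 1, memory_order_release); /* -> even */
}

/* Reader (query path): retry until it observes a stable, even seq,
 * i.e. no write started or completed during the read section.
 * Kernel equivalent: read_seqbegin() / read_seqretry(). */
static uint64_t stats_get_busy_time(struct ctx_stats *s, uint64_t now)
{
	uint64_t total;
	unsigned int seq;

	do {
		seq = atomic_load_explicit(&s->seq, memory_order_acquire);
		total = s->total;
		if (s->active)
			total += now - s->start;
	} while ((seq & 1) ||
		 seq != atomic_load_explicit(&s->seq, memory_order_acquire));

	return total;
}
```

As the thread notes, the reader section is so short that retry live-lock
under interrupt load is unlikely, while the writer never waits at all.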
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx



