Re: [RFC 00/17] Per-context and per-client engine busyness

On 26/10/17 08:34, Tvrtko Ursulin wrote:
On 25/10/2017 18:38, Chris Wilson wrote:
Quoting Chris Wilson (2017-10-25 16:47:13)
Quoting Tvrtko Ursulin (2017-10-25 16:36:15)
From: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
I've prototyped a quick demo of intel-client-top which produces output like:

      neverball[  6011]:  rcs0:  41.01%  bcs0:   0.00%  vcs0:   0.00%  vecs0:   0.00%
           Xorg[  5664]:  rcs0:  31.16%  bcs0:   0.00%  vcs0:   0.00%  vecs0:   0.00%
          xfwm4[  5727]:  rcs0:   0.00%  bcs0:   0.00%  vcs0:   0.00%  vecs0:   0.00%
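A tool like this can derive the percentages from two snapshots of an accumulated per-client busy-time counter. A minimal sketch of that calculation, assuming the interface exposes busyness as monotonically increasing nanoseconds (the helper name is illustrative, not the RFC's actual API):

```c
#include <stdint.h>

/*
 * Turn two samples of an accumulated per-client engine busy counter
 * (monotonic nanoseconds) into a utilisation percentage over the
 * sampling interval. Illustrative helper only, not the RFC's API.
 */
static double busy_percent(uint64_t busy_prev_ns, uint64_t busy_now_ns,
			   uint64_t wall_prev_ns, uint64_t wall_now_ns)
{
	uint64_t wall = wall_now_ns - wall_prev_ns;

	if (!wall)
		return 0.0;

	return 100.0 * (double)(busy_now_ns - busy_prev_ns) / (double)wall;
}
```

A client-top tool would call this once per engine per client on every sampling tick, using a shared wall-clock pair for all rows.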
+1
+2 for a graph ;)
Where are those placement students when you need them! :)

I won't be your student, but I would like to wire this into gputop.


Another potential use for the per-client infrastructure is tying it up with the
perf PMU. At the moment our perf PMU exposes global counters only. With the per-
client infrastructure it should be possible to make it work in task mode as
well, and so enable GPU busyness profiling of individual tasks.
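For reference, the generic perf API already supports task-mode counting via the pid argument to perf_event_open(2). The sketch below demonstrates that mechanism with a software event on the calling task; whether the i915 PMU can honour task mode is exactly the open question here, so treat this as the generic pattern rather than anything i915-specific:

```c
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <string.h>

/*
 * Count CPU time consumed by the calling task using the perf task-mode
 * API (pid == 0, cpu == -1: this task, any CPU). Returns the counter in
 * nanoseconds, or -1 if perf_event_open() is unavailable or restricted
 * (e.g. by perf_event_paranoid).
 */
static long long task_clock_ns(void)
{
	struct perf_event_attr attr;
	long long count = -1;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_SOFTWARE;
	attr.config = PERF_COUNT_SW_TASK_CLOCK;
	attr.disabled = 1;

	fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
	if (fd < 0)
		return -1;

	ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
	for (volatile int i = 0; i < 1000000; i++)
		; /* burn some CPU in this task so the counter advances */
	ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

	if (read(fd, &count, sizeof(count)) != sizeof(count))
		count = -1;
	close(fd);

	return count;
}
```

A hypothetical task-mode i915 PMU would slot into the same shape, with attr.type taken from /sys/bus/event_source/devices/i915/type instead of PERF_TYPE_SOFTWARE.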
ctx->pid can be misleading, as it is set on creation, but the context can
be transferred over an fd to the real client. (Typically that applies to
the default context, 0.)
Ok, I see that you update the pid when a new context is created. Still we
have the likes of libva that may use DRI3 without creating a context
itself.
Hm, how rude of the protocol to provide this anonymization service!

I guess I could update it on submission as well, and then there is no escape.
Back to the general niggle; I really would like to avoid adding custom
i915 interfaces for this, that should be a last resort if we can find no
way through e.g. perf.
I certainly plan to investigate adding pid filtering to the PMU. It is
supposed to be possible but I haven't tried it yet. I am also not sure
whether it will be exactly suitable for a top-like tool. I will see if I
manage to get it working.

But what do you say about the simple per-context API (patch 13)? Do you
find using ctx get param for this acceptable, or can you think of a
different way?
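From userspace, the ctx get param approach would look roughly like the sketch below. The busyness param id is a placeholder (the real value would come from patch 13), and the struct and ioctl definitions are mirrored locally only to keep the example self-contained; real code would include <drm/i915_drm.h> instead:

```c
#include <stdint.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Local mirrors of the drm/i915_drm.h definitions, for a self-contained
 * sketch only; real code includes <drm/i915_drm.h>. */
struct i915_gem_context_param {
	uint32_t ctx_id;
	uint32_t size;
	uint64_t param;
	uint64_t value;
};

#define DRM_IOCTL_BASE		'd'
#define DRM_COMMAND_BASE	0x40
#define DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM \
	_IOWR(DRM_IOCTL_BASE, DRM_COMMAND_BASE + 0x34, \
	      struct i915_gem_context_param)

/* Placeholder param id -- the real one would be defined by patch 13. */
#define I915_CONTEXT_PARAM_BUSY_PLACEHOLDER	0x1000

/*
 * Query per-context busyness via context getparam. Returns the reported
 * value on success, -1 if the device cannot be opened or the ioctl fails.
 */
static long long context_busy(const char *device, uint32_t ctx_id)
{
	struct i915_gem_context_param p = {
		.ctx_id = ctx_id,
		.param = I915_CONTEXT_PARAM_BUSY_PLACEHOLDER,
	};
	long long ret = -1;
	int fd;

	fd = open(device, O_RDWR);
	if (fd < 0)
		return -1;

	if (ioctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM, &p) == 0)
		ret = (long long)p.value;

	close(fd);
	return ret;
}
```

The appeal of this shape is that it reuses an existing, stable ioctl rather than adding a new i915-only interface; the cost is that a tool must know the context ids it wants to sample.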

Regards,

Tvrtko
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

