On 10/05/2022 12:26, Christian König wrote:
On 10.05.22 at 12:50, Tvrtko Ursulin wrote:
Hi,
On 10/05/2022 09:48, Christian König wrote:
Hi Tvrtko,
On 10.05.22 at 10:23, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
Convert fdinfo format to one documented in drm-usage-stats.rst.
Opens/TODO:
* Does someone from AMD want to take over this patch?
(I have no access to amdgpu hardware so won't be able to test
any hypothetical gputop work.)
I can give that a try as soon as it is completed.
And how to motivate someone on your side to pick up the amdgpu work? :)
Well, if we could get more of my TTM/DRM patches reviewed I could have
some free time to do this :)
Yeah, we have a bunch of folks dedicated to TTM and scheduling
refactoring so I am hoping they will notice and get involved.
* What are the semantics of the AMD engine utilisation reported as
percentages?
To be honest I haven't understood why we are using percentages here
either; that is not something the kernel should mess with.
* Can it align with what i915 does (same as what msm will do), or do we
need to document the alternative in the specification document? Both
options are workable, with the instantaneous percentage one only
needing support to be added to the vendor-agnostic gputop.
I would prefer to just change to the ns format i915 and msm will be
using; that makes much more sense in my experience.
As far as I know we haven't released any publicly available userspace
using the existing AMD-specific format, so that should still be
possible.
If amdgpu could export the accumulated time contexts spent on engines
that would indeed be perfect. It would make the gputop I sketched out
most probably just work, as it did for Rob on msm.
In which case, apart from the amdgpu work, it would just be a matter
of me tidying that tool up a bit and re-sending it out for review.
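For illustration, a minimal sketch of what the driver side could look
like, assuming amdgpu would wire a show_fdinfo callback into its
file_operations the same way i915 does; the client_id field and the
amdgpu_fpriv_busy_ns() helper are made-up names, not existing amdgpu
code:

#include <linux/seq_file.h>
#include <drm/drm_file.h>
#include "amdgpu.h"	/* driver internals (sketch only) */

static void amdgpu_show_fdinfo(struct seq_file *m, struct file *f)
{
	struct drm_file *file = f->private_data;
	struct amdgpu_fpriv *fpriv = file->driver_priv;

	seq_printf(m, "drm-driver:\t%s\n", file->minor->dev->driver->name);
	/* Hypothetical field, see the drm-client-id discussion below. */
	seq_printf(m, "drm-client-id:\t%llu\n", fpriv->client_id);
	/* One line per exposed engine class, accumulated GPU time in ns. */
	seq_printf(m, "drm-engine-gfx:\t%llu ns\n",
		   amdgpu_fpriv_busy_ns(fpriv, AMDGPU_HW_IP_GFX));
}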
Could you push this to some repository on fdo and send me a link? Going
to pick up this patch here and give it a try, shouldn't be more than a
day of work.
Done, https://cgit.freedesktop.org/~tursulin/intel-gpu-tools/log/?h=gputop.
For extra reference the msm patch was this:
https://lore.kernel.org/lkml/20220225202614.225197-3-robdclark@xxxxxxxxx/
If you can expose the same fields, gputop should work.
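For reference, the per-client fdinfo text gputop parses would look
roughly like this (keys as defined in drm-usage-stats.rst; driver name
aside, the engine names and values below are made-up examples and
amdgpu would pick its own):

drm-driver:	amdgpu
drm-client-id:	42
drm-engine-gfx:	123456789 ns
drm-engine-dec:	0 ns
drm-engine-enc:	0 ns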
* Can amdgpu expose drm-client-id? Without it gputop will not work.
How is that determined on i915? Does struct drm_file have that
somewhere?
It should correspond 1:1 with drm_file, since the purpose is to enable
gputop to distinguish between unique open file descriptors (aka clients).
Ah! We do have a 64bit counter for that already because of technical needs.
In theory it could be just a hash value of a struct drm_file pointer,
but that could confuse userspace if the struct gets reused within a
single userspace sampling period.
Because of that I track it in i915 separately, since I wanted it to
have an incrementing, cyclic property - so that when an fd is closed
and a new one opened there is no practical chance they would have the
same drm-client-id.
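In other words, something along these lines; purely a sketch with
made-up amdgpu names, just to illustrate the idea (your existing 64bit
counter may well already fit the bill):

#include <linux/atomic.h>

/* Hypothetical: an id handed out once per open drm_file. */
static atomic64_t amdgpu_client_id_next;

static void amdgpu_client_id_init(struct amdgpu_fpriv *fpriv)
{
	/*
	 * Monotonically increasing, so in practice never reused within a
	 * sampling period, unlike a hash of the struct drm_file pointer.
	 */
	fpriv->client_id = atomic64_inc_return(&amdgpu_client_id_next);
}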
* drm-engine-capacity - does the concept translate etc.
I don't think we are going to need that.
Okay, that one is optional for cases when there is more than one
engine of a type/class shown under a single entry in fdinfo, so that
when gputop translates accumulated time into percentages it can do the
right thing. The code can already handle it not being present and
assumes a capacity of one.
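Roughly, the userspace side of that calculation looks like this (a
sketch of the idea, not a copy of the actual gputop code):

#include <stdint.h>

/*
 * Turn two fdinfo samples into a utilisation percentage. A missing
 * drm-engine-capacity-<class> line is treated as a capacity of one.
 */
static double engine_busy_pct(uint64_t busy_prev_ns, uint64_t busy_curr_ns,
			      uint64_t wall_prev_ns, uint64_t wall_curr_ns,
			      unsigned int capacity)
{
	uint64_t busy_delta = busy_curr_ns - busy_prev_ns;
	uint64_t wall_delta = wall_curr_ns - wall_prev_ns;

	if (!capacity)
		capacity = 1;
	if (!wall_delta)
		return 0.0;

	/* With N engines behind one key, 100% means all N are fully busy. */
	return 100.0 * busy_delta / ((double)wall_delta * capacity);
}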
Yeah, we have that case for a couple of things. The GFX, SDMA and
multimedia engines all have different queues which need to be accounted
together as far as I can see.
E.g. we have video decode and video encode as two separate rings, but
essentially they use the same engine.
Need to think about how to represent that.
I think you have some freedom there as to what to export - whether the
entity userspace submits to (is this a ring in amdgpu?), or the entity
hardware actually executes on (your engine?).
We have a somewhat similar setup in i915 and I decided to expose the
former. This makes it (almost) match what our total metrics show
(engine performance counters exported via perf/pmu).
So in the case of your video decode and encode rings, which lead to the
same hw engine, that would mean exposing them as two entities, decode
and encode.
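Purely illustratively (made-up key names and values), something like:

drm-engine-dec:	123456789 ns
drm-engine-enc:	987654321 ns

with the fact that both are backed by the same hw engine remaining an
implementation detail.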
But as said, my spec does not prescribe that, so it is up to
implementations. As long as it is useful to users as a first port of
enquiry for performance problems I think it is fine.
An analogue would be hyper-threading from the CPU scheduling world,
where top(1) cannot convey that one logical core sitting at 0% does not
mean there is half of the performance still left on the table.
And then for a deeper dive into performance, more specialised GPU
profiling tools are required.
Regards,
Tvrtko