On 22/08/2018 13:49, Tvrtko Ursulin wrote:
On 21/08/2018 13:06, Joonas Lahtinen wrote:
Quoting Kukanova, Svetlana (2018-08-13 16:44:49)
Joonas, sorry for interfering; could you please explain more about the options for tracing scheduling events that are better than tracepoints?
After scheduling moves to the GuC, tools will have to switch to something like GuC logging; but while the KMD does the scheduling, isn't kernel tracing the best solution?
I know gpuvis is not the only attempt to use tracepoints for this purpose (there are also trace.pl and S.E.A., and of course VTune, though it probably doesn't count since it is not open source).
And given this movement towards the GuC, isn't it too late to invent a completely new way of providing tools with scheduling info from the KMD? Could we just improve the existing way and let it live out its last years/months?
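For reference, a minimal sketch of what such tools do today: enable the i915 scheduling tracepoints through tracefs and stream the resulting events. This assumes root, tracefs mounted at /sys/kernel/tracing (or the older /sys/kernel/debug/tracing), and a kernel with the low-level i915 tracepoints compiled in; none of these names are a stable contract, which is exactly what this thread is about.

    # Sketch: stream i915 scheduling tracepoints from tracefs, roughly
    # what gpuvis/trace.pl-style tools do. Requires root and a kernel
    # built with the low-level i915 tracepoints enabled.
    import os

    TRACEFS = next(p for p in ("/sys/kernel/tracing",
                               "/sys/kernel/debug/tracing")
                   if os.path.isdir(p))

    def write(rel_path, value):
        with open(os.path.join(TRACEFS, rel_path), "w") as f:
            f.write(value)

    # Enable just the scheduling-related i915 events.
    for event in ("i915/i915_request_in", "i915/i915_request_out"):
        write(os.path.join("events", event, "enable"), "1")
    write("tracing_on", "1")

    # trace_pipe blocks until events arrive; each line is one event.
    with open(os.path.join(TRACEFS, "trace_pipe")) as pipe:
        for line in pipe:
            print(line, end="")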
Hi,
You actually mentioned the prime reason why we should not hastily make tracepoints a stable uAPI with regard to scheduling information.
The scheduler's nature will be evolving when some of the scheduling decisions are moved to the GuC, and the way we get the information will be changing at that point, so tracepoints would indeed be a very bad mechanism for providing it.
The kernel scheduler is definitely not going anywhere with the
introduction of more hardware scheduling capabilities, so it is a
misconception to think that the interface would need to be completely
different for when GuC is enabled.
On the last paragraph - even with today's GuC, i915 already loses visibility of CSB interrupts, so there is already a big difference in the semantics of what the request_in and request_out tracepoints mean. Put preemption into the picture and we no longer know when something started executing on the GPU, when it got preempted, when it was re-submitted, and so on. So I think it is fair to say that moving more of the scheduling into the GuC creates a problem for tools which want to represent request execution timelines.
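To make that concrete, here is a hedged sketch of the pairing such tools rely on. The event tuples below are illustrative - (timestamp, event name, request key), with the key standing in for whatever engine/context/seqno fields the real events carry - but they show why preemption splits one request into several execution intervals, and why an "in" edge the driver never sees produces a hole in the timeline.

    # Sketch: rebuild per-request GPU-residency intervals by pairing
    # request_in with the matching request_out. The event layout is
    # assumed for illustration, not the exact i915 tracepoint fields.
    from collections import defaultdict

    def build_timeline(events):
        """Return {key: [(start, end), ...]} of execution intervals."""
        open_since = {}
        intervals = defaultdict(list)
        for ts, name, key in sorted(events):
            if name == "request_in":
                # With preemption the same request can go "in" again
                # after being kicked off the engine, so a single key
                # can accumulate multiple intervals.
                open_since[key] = ts
            elif name == "request_out":
                start = open_since.pop(key, None)
                if start is None:
                    # The "in" edge happened where i915 could not see
                    # it (e.g. handled entirely by the GuC): lost.
                    continue
                intervals[key].append((start, ts))
        return intervals

    # Request 1 runs, is preempted by request 2, then resumes:
    demo = [
        (0.0, "request_in",  ("rcs0", 1)),
        (1.0, "request_out", ("rcs0", 1)),  # preempted
        (1.5, "request_in",  ("rcs0", 2)),
        (2.0, "request_out", ("rcs0", 2)),
        (2.5, "request_in",  ("rcs0", 1)),  # resubmitted
        (3.0, "request_out", ("rcs0", 1)),
    ]
    print(dict(build_timeline(demo)))  # 1 -> [(0.0, 1.0), (2.5, 3.0)]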
P.S. To clarify - this is exactly why we marked those tracepoints as low level, and why it is problematic to rely on them.
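A practical corollary for tool authors, sketched under the assumption that the guard is the CONFIG_DRM_I915_LOW_LEVEL_TRACEPOINTS Kconfig option used in the i915 tree: since the events are compiled out by default, a tool should probe for them rather than assume they exist.

    # Sketch: check whether the low-level i915 scheduling tracepoints
    # are present on the running kernel before relying on them.
    import os

    def have_low_level_tracepoints(tracefs="/sys/kernel/tracing"):
        events = os.path.join(tracefs, "events", "i915")
        return all(os.path.isdir(os.path.join(events, ev))
                   for ev in ("i915_request_in", "i915_request_out"))

    print("low-level i915 tracepoints:",
          "present" if have_low_level_tracepoints() else "absent")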
Regards,
Tvrtko