Quoting Tvrtko Ursulin (2018-08-22 15:49:52)
>
> On 21/08/2018 13:06, Joonas Lahtinen wrote:
> > Quoting Kukanova, Svetlana (2018-08-13 16:44:49)
> >> Joonas, sorry for interfering; could you please explain more regarding the
> >> options for tracing scheduling events better than tracepoints?
> >> After scheduling moves to GuC, tools will have to switch to something like
> >> GuC-logging; but while the kmd does the scheduling, isn't kernel tracing the best solution?
> >> I know gpuvis is not the only attempt to use tracepoints for this purpose
> >> (there are also trace.pl and S.E.A., and of course VTune, though it probably is not
> >> considered to exist as it's not open source).
> >> And given this movement towards GuC, isn't it too late to invent a
> >> completely new way to provide tools with scheduling info from the kmd?
> >> Could we just improve the existing way and let it live out its last years/months?
> >
> > Hi,
> >
> > You actually mentioned the prime reason why we should not go and
> > hastily make tracepoints a stable uAPI with regards to scheduling
> > information.
> >
> > The scheduler's nature will be evolving when some of the scheduling
> > decisions are moved to the GuC, and the way we get the information
> > will be changing at that point, so tracepoints will indeed be a
> > very bad mechanism for providing the information.
> >
> > The kernel scheduler is definitely not going anywhere with the
> > introduction of more hardware scheduling capabilities, so it is a
> > misconception to think that the interface would need to be completely
> > different when GuC is enabled.

To clarify, I meant to underline that there is not going to be a steep
switching point where a transition from interface A to B, which Svetlana
referred to, would happen naturally. The introduced interface will have to
provide the information for years and kernel versions to come, and we
already have some indication that tracepoints may not be the format of
choice due to GuC.

> On the last paragraph - even with today's GuC, i915 already loses
> visibility of CSB interrupts. So there is already a big difference in
> the semantics of what the request_in and request_out tracepoints mean. Put
> preemption into the picture and we just don't know any more when
> something started executing on the GPU, when it got preempted,
> re-submitted, etc. So I think it is fair to say that moving more of the
> scheduling into the GuC creates a problem for tools which want to
> represent request execution timelines.

Yes, for tools that depend on the tracepoints. That's why it is most
likely best to introduce the information in some other form, but I am
starting to sound like a broken record already :)

Regards, Joonas

> Regards,
>
> Tvrtko
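
For context, below is a minimal sketch of the kind of tool-side pairing Tvrtko
describes: reconstructing a request execution timeline from the i915
request_in/request_out tracepoints in ftrace text output. The exact field
layout (ctx=, seqno=, timestamp position) is an assumption about one kernel
version's tracepoint format, not a stable interface; with GuC submission the
in/out semantics differ, which is exactly the fragility being discussed.

```python
# Sketch only: pair i915_request_in / i915_request_out events from ftrace
# text output to estimate per-request GPU residency. Field names and the
# line layout are assumptions about one kernel's tracepoint format.
import re
import sys

# Assumed ftrace line shape:
#   <task>-<pid> [cpu] 123.456789: i915_request_in: dev=0, ... ctx=3, seqno=42, ...
EVENT_RE = re.compile(
    r"\s(?P<ts>\d+\.\d+):\s+i915_request_(?P<kind>in|out):.*?"
    r"ctx=(?P<ctx>\d+).*?seqno=(?P<seqno>\d+)"
)

def request_spans(lines):
    """Yield (ctx, seqno, start_ts, end_ts) for each matched in/out pair."""
    started = {}  # (ctx, seqno) -> timestamp of request_in
    for line in lines:
        m = EVENT_RE.search(line)
        if not m:
            continue
        key = (int(m.group("ctx")), int(m.group("seqno")))
        ts = float(m.group("ts"))
        if m.group("kind") == "in":
            started[key] = ts
        elif key in started:
            yield (*key, started.pop(key), ts)

if __name__ == "__main__":
    # e.g. trace-cmd report | python3 pair_requests.py
    for ctx, seqno, start, end in request_spans(sys.stdin):
        print(f"ctx={ctx} seqno={seqno} ran {(end - start) * 1e3:.3f} ms")
```

Note that once requests can be preempted and resubmitted behind the kmd's
back, a single in/out pair per seqno no longer describes what actually ran on
the GPU, so a tool built on this pairing silently reports wrong durations.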