On Thu, 15 Sep 2022 15:49:27 -0700, Umesh Nerlige Ramappa wrote:
>
> On Wed, Sep 14, 2022 at 04:13:41PM -0700, Umesh Nerlige Ramappa wrote:
> > On Wed, Sep 14, 2022 at 03:26:15PM -0700, Umesh Nerlige Ramappa wrote:
> >> On Tue, Sep 06, 2022 at 09:39:33PM +0300, Lionel Landwerlin wrote:
> >>> On 06/09/2022 20:39, Umesh Nerlige Ramappa wrote:
> >>>> On Tue, Sep 06, 2022 at 05:33:00PM +0300, Lionel Landwerlin wrote:
> >>>>> On 23/08/2022 23:41, Umesh Nerlige Ramappa wrote:
> >>>>>> With GuC mode of submission, GuC is in control of defining the
> >>>>>> context id field that is part of the OA reports. To filter reports,
> >>>>>> UMD and KMD must know what sw context id was chosen by GuC. There is
> >>>>>> no interface between KMD and GuC to determine this, so read the
> >>>>>> upper dword of EXECLIST_STATUS to filter/squash OA reports for the
> >>>>>> specific context.
> >>>>>>
> >>>>>> Signed-off-by: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@xxxxxxxxx>
> >>>>>
> >>>>>
> >>>>> I assume you checked with GuC that this doesn't change while the
> >>>>> context is running?
> >>>>
> >>>> Correct.
> >>>>
> >>>>>
> >>>>> With i915/execlist submission mode, we had to ask i915 to pin the
> >>>>> sw_id/ctx_id.
> >>>>>
> >>>>
> >>>> From the GuC perspective, the context id can change once KMD
> >>>> de-registers the context, and that will not happen while the context
> >>>> is in use.
> >>>>
> >>>> Thanks,
> >>>> Umesh
> >>>
> >>>
> >>> Thanks Umesh,
> >>>
> >>>
> >>> Maybe I should have been more precise in my question:
> >>>
> >>>
> >>> Can the ID change while the i915-perf stream is opened?
> >>>
> >>> Because the ID not changing while the context is running makes sense.
> >>>
> >>> But since the number of available IDs is limited to 2k or something on
> >>> Gfx12, it's possible the GuC has to reuse IDs if too many apps want to
> >>> run during the period of time while i915-perf is active and filtering.
> >>>
> >>
> >> Available GuC ids are 64k, with 4k reserved for multi-lrc, so GuC may
> >> have to reuse ids once 60k ids are used up.
>
> Spoke to the GuC team again and if there are a lot of contexts (> 60k)
> running, there is a possibility of the context id being recycled. In that
> case, the capture would be broken. I would track this as a separate JIRA
> and follow up on a solution.
>
> From the OA use case perspective, are we interested in monitoring just
> one hardware context? If we make sure this context is not stolen, are we
> good?
>
> + John
>
> Based on John's inputs - if a context is pinned, then KMD does not steal
> its id. It would just look for something else or wait for a context to be
> available (pin count 0, I believe).
>
> Since we pin the context for the duration of the OA use case, we should
> be good here.

Since this appears to be true, I am thinking of okaying this patch rather
than defining a new interface with GuC for this. Let me know if there are
any objections.

Thanks.
--
Ashutosh

> >>> -Lionel
> >>>
> >>>
> >>>>
> >>>>>
> >>>>> If that's not the case then filtering is broken.
> >>>>>
> >>>>>
> >>>>> -Lionel
> >>>>>
> >>>>>
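For readers following the thread: the filtering being discussed amounts to comparing a context id field carried in each OA report against the sw context id the KMD read from the upper dword of EXECLIST_STATUS for the monitored (pinned) context, and squashing reports that do not match. The sketch below is purely illustrative and is not the actual i915 code: the report layout (context id assumed in dword 2 of the report header), the dword-based report size, and the mask parameter are all assumptions; the real field position and masking are hardware- and generation-specific.

```c
#include <stdint.h>
#include <stddef.h>

/* Assumed layout: dword 2 of each OA report carries the context id.
 * This offset is a placeholder for illustration only. */
#define OA_REPORT_CTX_ID_DW 2

/* Compare a report's context id field (under a caller-supplied mask)
 * against the sw context id read from the EXECLIST_STATUS upper dword. */
static int oa_report_matches_ctx(const uint32_t *report, uint32_t ctx_id,
                                 uint32_t ctx_id_mask)
{
	return (report[OA_REPORT_CTX_ID_DW] & ctx_id_mask) ==
	       (ctx_id & ctx_id_mask);
}

/* Filter a buffer of fixed-size reports in place, keeping only reports
 * that belong to ctx_id; returns the number of reports kept. Reports
 * for other contexts are dropped (the remaining ones are compacted to
 * the front of the buffer). */
static size_t oa_filter_reports(uint32_t *reports, size_t n_reports,
                                size_t dwords_per_report,
                                uint32_t ctx_id, uint32_t ctx_id_mask)
{
	size_t kept = 0;

	for (size_t i = 0; i < n_reports; i++) {
		const uint32_t *src = reports + i * dwords_per_report;

		if (!oa_report_matches_ctx(src, ctx_id, ctx_id_mask))
			continue;

		if (kept != i) {
			uint32_t *dst = reports + kept * dwords_per_report;
			for (size_t d = 0; d < dwords_per_report; d++)
				dst[d] = src[d];
		}
		kept++;
	}
	return kept;
}
```

Note that this approach is only sound under the guarantee discussed above: the id stays stable because the context is pinned for the lifetime of the OA stream, so GuC never steals and recycles it mid-capture.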