On Mon, Jul 12, 2021 at 7:57 PM John Harrison <john.c.harrison@xxxxxxxxx> wrote:
>
> On 7/9/2021 20:00, Matthew Brost wrote:
> > On Fri, Jul 09, 2021 at 03:53:29PM -0700, John Harrison wrote:
> >> On 6/24/2021 00:04, Matthew Brost wrote:
> >>> Disable engine barriers for unpinning with GuC. This feature isn't
> >>> needed with the GuC as it disables context scheduling before unpinning
> >>> which guarantees the HW will not reference the context. Hence it is
> >>> not necessary to defer unpinning until a kernel context request
> >>> completes on each engine in the context engine mask.
> >>>
> >>> Cc: John Harrison <john.c.harrison@xxxxxxxxx>
> >>> Signed-off-by: Matthew Brost <matthew.brost@xxxxxxxxx>
> >>> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@xxxxxxxxx>
> >>> ---
> >>>   drivers/gpu/drm/i915/gt/intel_context.c    |  2 +-
> >>>   drivers/gpu/drm/i915/gt/intel_context.h    |  1 +
> >>>   drivers/gpu/drm/i915/gt/selftest_context.c | 10 ++++++++++
> >>>   3 files changed, 12 insertions(+), 1 deletion(-)
> >>>
> >>> diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c
> >>> index 1499b8aace2a..7f97753ab164 100644
> >>> --- a/drivers/gpu/drm/i915/gt/intel_context.c
> >>> +++ b/drivers/gpu/drm/i915/gt/intel_context.c
> >>> @@ -80,7 +80,7 @@ static int intel_context_active_acquire(struct intel_context *ce)
> >>>   	__i915_active_acquire(&ce->active);
> >>> -	if (intel_context_is_barrier(ce))
> >>> +	if (intel_context_is_barrier(ce) || intel_engine_uses_guc(ce->engine))
> >>>   		return 0;
> >> Would be better to have a scheduler flag to say whether barriers are
> >> required or not. That would prevent polluting front end code with back
> >> end details.
> >>
> > I guess an engine flag is slightly better but I still don't love that
> > as we have to test if the context is a barrier (kernel context) and then
> > call a function that is basically backend specific after. IMO we really
> > need to push all of this to a vfunc. If you really want me to make this
> > an engine flag I can, but in the end it just seems like that will
> > trash the code (adding an engine flag just to remove it). I think this
> > is just a clean up we write down, and figure out a bit later as nothing
> > is functionally wrong + quite clear that it is something that should be
> > cleaned up.
> >
> > Matt
> Flag, vfunc, whatever. I just mean that it would be better to abstract
> it out in some manner. Maybe a flag/vfunc on the ce object? That way it
> would swallow the 'ignore kernel contexts' test as well. But yes,
> probably best to add it to the todo list and move on as it is not going
> to be a two-minute quick fix.

I've added a comment to the Jira, so...

The plan is:
- merge the GuC backend
- convert over to drm/scheduler as a proper interface between the
  higher levels and the scheduler backend
- shovel as much of the execlist specifics as reasonable into the
  execlist backend

Right now our frontend code is essentially designed to assume the
execlist backend is the true way to build a scheduler, and everything
else is a special case. We can't reasonably fix this by sprinkling lots
of vfuncs all over the place, and hence imo we shouldn't try, at least
not until the big picture is in much better shape.
-Daniel

> Reviewed-by: John Harrison <John.C.Harrison@xxxxxxxxx>
>
>
>
> >> John.
> >>
> >>
> >>>   	/* Preallocate tracking nodes */
> >>> diff --git a/drivers/gpu/drm/i915/gt/intel_context.h b/drivers/gpu/drm/i915/gt/intel_context.h
> >>> index 8a7199afbe61..a592a9605dc8 100644
> >>> --- a/drivers/gpu/drm/i915/gt/intel_context.h
> >>> +++ b/drivers/gpu/drm/i915/gt/intel_context.h
> >>> @@ -16,6 +16,7 @@
> >>>   #include "intel_engine_types.h"
> >>>   #include "intel_ring_types.h"
> >>>   #include "intel_timeline_types.h"
> >>> +#include "uc/intel_guc_submission.h"
> >>>   #define CE_TRACE(ce, fmt, ...) do {				\
> >>>   	const struct intel_context *ce__ = (ce);		\
> >>> diff --git a/drivers/gpu/drm/i915/gt/selftest_context.c b/drivers/gpu/drm/i915/gt/selftest_context.c
> >>> index 26685b927169..fa7b99a671dd 100644
> >>> --- a/drivers/gpu/drm/i915/gt/selftest_context.c
> >>> +++ b/drivers/gpu/drm/i915/gt/selftest_context.c
> >>> @@ -209,7 +209,13 @@ static int __live_active_context(struct intel_engine_cs *engine)
> >>>   	 * This test makes sure that the context is kept alive until a
> >>>   	 * subsequent idle-barrier (emitted when the engine wakeref hits 0
> >>>   	 * with no more outstanding requests).
> >>> +	 *
> >>> +	 * In GuC submission mode we don't use idle barriers and we instead
> >>> +	 * get a message from the GuC to signal that it is safe to unpin the
> >>> +	 * context from memory.
> >>>   	 */
> >>> +	if (intel_engine_uses_guc(engine))
> >>> +		return 0;
> >>>   	if (intel_engine_pm_is_awake(engine)) {
> >>>   		pr_err("%s is awake before starting %s!\n",
> >>> @@ -357,7 +363,11 @@ static int __live_remote_context(struct intel_engine_cs *engine)
> >>>   	 * on the context image remotely (intel_context_prepare_remote_request),
> >>>   	 * which inserts foreign fences into intel_context.active, does not
> >>>   	 * clobber the idle-barrier.
> >>> +	 *
> >>> +	 * In GuC submission mode we don't use idle barriers.
> >>>   	 */
> >>> +	if (intel_engine_uses_guc(engine))
> >>> +		return 0;
> >>>   	if (intel_engine_pm_is_awake(engine)) {
> >>>   		pr_err("%s is awake before starting %s!\n",
>
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx

--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
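
For illustration, a rough sketch of the ce-level abstraction discussed
above might look like the following. Neither CONTEXT_NO_IDLE_BARRIERS
nor intel_context_needs_barriers() exists in the tree; they are
hypothetical names. Only intel_context_is_barrier() and the call site
in intel_context_active_acquire() come from the patch quoted above, and
the use of test_bit() on ce->flags mirrors how such per-context bits
are typically checked.

/*
 * Hypothetical sketch only: the flag and helper below are made-up
 * names. The idea is that the submission backend sets a bit on the
 * context when it does not rely on idle barriers for unpinning, so
 * the frontend never has to ask "is this the GuC?" directly.
 */
#define CONTEXT_NO_IDLE_BARRIERS	15	/* example: any free bit in ce->flags */

static inline bool intel_context_needs_barriers(const struct intel_context *ce)
{
	/*
	 * Kernel (barrier) contexts never need deferred-unpin barriers,
	 * and neither does a backend that guarantees the HW is done with
	 * the context before it is unpinned.
	 */
	return !intel_context_is_barrier(ce) &&
	       !test_bit(CONTEXT_NO_IDLE_BARRIERS, &ce->flags);
}

The call site in intel_context_active_acquire() would then shrink to
something like:

-	if (intel_context_is_barrier(ce) || intel_engine_uses_guc(ce->engine))
+	if (!intel_context_needs_barriers(ce))
 		return 0;

With the backend-specific check hidden behind the flag, the
intel_context.h hunk that adds the uc/intel_guc_submission.h include
would likely become unnecessary as well.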