Quoting Michał Winiarski (2017-10-25 21:00:19)
> Pretty similar to what we have on execlists.
> We're reusing most of the GEM code, however, due to GuC quirks we need a
> couple of extra bits.
> Preemption is implemented as a GuC action, and actions can be pretty slow.
> Because of that, we're using a mutex to serialize them. Since we're
> requesting preemption from the tasklet, the task of creating a workitem
> and wrapping it in a GuC action is delegated to a worker.
>
> To detect that preemption has finished, we're using an additional
> piece of HWSP, and since we're not getting context switch interrupts,
> we're also adding a user interrupt.
>
> The fact that our special preempt context has completed unfortunately
> doesn't mean that we're ready to submit new work. We also need to wait
> for GuC to finish its own processing.
>
> v2: Don't compile out the wait for GuC, handle workqueue flush on reset,
> no need for ordered workqueue, put on a reviewer hat when looking at my
> own patches (Chris)
> Move struct work around in intel_guc, move user interrupt outside of
> conditional (Michał)
> Keep ring around rather than chase through intel_context
>
> v3: Extract WA for flushing ggtt writes to a helper (Chris)
> Keep work_struct in intel_guc rather than engine (Michał)
> Use ordered workqueue for inject_preempt worker to avoid GuC quirks.
>
> v4: Drop now-unused INTEL_GUC_PREEMPT_OPTION_IMMEDIATE (Daniele)
> Drop stray newlines, use container_of for intel_guc in worker,
> check for presence of workqueue when flushing it, rather than
> enable_guc_submission modparam, reorder preempt postprocessing (Chris)
>
> Signed-off-by: Michał Winiarski <michal.winiarski@xxxxxxxxx>
> Cc: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> Cc: Jeff McGee <jeff.mcgee@xxxxxxxxx>
> Cc: Joonas Lahtinen <joonas.lahtinen@xxxxxxxxxxxxxxx>
> Cc: Oscar Mateo <oscar.mateo@xxxxxxxxx>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
> ---
> +#define GUC_PREEMPT_POSTPROCESS_DELAY_MS 10
> +static void wait_for_guc_preempt_report(struct intel_engine_cs *engine)
> +{
> +	struct intel_guc *guc = &engine->i915->guc;
> +	struct guc_shared_ctx_data *data = guc->shared_data_vaddr;
> +	struct guc_ctx_report *report =
> +		&data->preempt_ctx_report[engine->guc_id];
> +
> +	WARN_ON(wait_for_atomic(report->report_return_status ==
> +				INTEL_GUC_REPORT_STATUS_COMPLETE,
> +				GUC_PREEMPT_POSTPROCESS_DELAY_MS));
> +	/*
> +	 * GuC is expecting that we're also going to clear the affected
> +	 * context counter; let's also reset the return status so that we
> +	 * don't depend on GuC resetting it after receiving another preempt
> +	 * action.
> +	 */
> +	report->affected_count = 0;
> +	report->report_return_status = INTEL_GUC_REPORT_STATUS_UNKNOWN;
> +}

The horror! ;)

Looks like everything is in order. Preemptive
Reviewed-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>

I shall ponder the question of whether we can stress these (guc/execlists)
preemption paths more. We have checks that we can preempt and reorder
in-flight requests; we have simple smoketests. What I feel is that we
probably need more of the latter to find timing issues. (A new mode for
gem_concurrent_blit? There's nothing like running a test for a few days
to shake out timing bugs.) Unless I've missed a rule we can test in
gem_exec_schedule.
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx