Quoting Mika Kuoppala (2019-08-08 13:43:25)
> Chris Wilson <chris@xxxxxxxxxxxxxxxxxx> writes:
> > @@ -1336,43 +1347,49 @@ static int __intel_engines_record_defaults(struct drm_i915_private *i915)
> >  		 */
> >  		err = i915_vma_unbind(state);
> >  		if (err)
> > -			goto err_active;
> > +			goto out;
> >
> >  		i915_gem_object_lock(state->obj);
> >  		err = i915_gem_object_set_to_cpu_domain(state->obj, false);
>
> Ok, this has the implicit wait on it. I was confused for a moment
> about how we could fetch the state async.

There's also a global wait_for_idle just above. (I am tempted to
streamline this and run it in parallel using kworkers, but first we
need to finish refactoring the intel_gt_pm.)

> The path ahead is that there is no need for a global kernel context,
> but the engines set up their own defaults for cloning new ones?

intel_engine_cs create their own kernel_context, which are not to be
(directly) used by userspace. GEM contexts create a fresh set of
per-engine logical contexts on each creation. (When we stop creating
i915->kernel_context, we will remove the duplication -- but since we
never use them, we never allocate their state, so there is not much
wastage in allocating engine->kernel_context directly.)

> Can't poke holes in this. I didn't get to the bottom of how the
> active tracking grabs the ce reference on pinning, but everything
> stays the same on that front, so I was just wandering around in
> other unknown paths.

It's active references for everybody. We track context activity with
i915_active: intel_context_pin() marks the context as active, and our
intel_context_active callback takes a reference; intel_context_unpin()
releases the i915_active on the next idle request, at which point we
call the intel_context_retire callback and drop the reference. Thus,
for as long as we suspect the HW has access to the context (i.e. from
pin until the next context switch), we keep the context state alive
and bound.

We had to resort to such complexity because we have to treat
submission as a black box -- we cannot presume the order of execution
(guc). However, it ties it all together, because we have the same sort
of problem tracking vma and buffer activity.
-Chris
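
As a very rough user-space sketch of the pin/active/retire flow
described above (this is not the i915 code; every name here --
toy_context, toy_pin, toy_retire -- is invented purely to illustrate
the refcount lifecycle):

/*
 * Toy model of the pin/active/retire flow. The real driver does this
 * with i915_active and intel_context; these names are made up.
 */
#include <stdio.h>
#include <stdbool.h>

struct toy_context {
	int refcount;	/* object lifetime reference count */
	int pin_count;	/* how many users currently have it pinned */
	bool active;	/* HW may still be reading the context state */
};

static void toy_get(struct toy_context *ce) { ce->refcount++; }
static void toy_put(struct toy_context *ce) { ce->refcount--; }

/* First pin: mark the context active and take a reference so the
 * backing state cannot go away while the HW might still use it. */
static void toy_pin(struct toy_context *ce)
{
	if (ce->pin_count++ == 0 && !ce->active) {
		ce->active = true;
		toy_get(ce);	/* the "intel_context_active" step */
	}
}

/* Unpin only drops the pin count; the reference is kept until the
 * next idle point, because the HW may still hold the context. */
static void toy_unpin(struct toy_context *ce)
{
	ce->pin_count--;
}

/* Called once we know the HW has switched away (next idle request):
 * the "intel_context_retire" step drops the reference taken on pin. */
static void toy_retire(struct toy_context *ce)
{
	if (ce->pin_count == 0 && ce->active) {
		ce->active = false;
		toy_put(ce);
	}
}

int main(void)
{
	struct toy_context ce = { .refcount = 1 }; /* creator's reference */

	toy_pin(&ce);	/* submit work: refcount == 2 */
	toy_unpin(&ce);	/* request completes: still refcount == 2 */
	toy_retire(&ce);	/* idle point reached: refcount back to 1 */

	printf("refcount=%d active=%d\n", ce.refcount, ce.active);
	return 0;
}

The point being that the reference taken on first pin is only dropped
at retire, i.e. once we know the HW can no longer reach the context,
regardless of when the last unpin happened.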