Quoting Tvrtko Ursulin (2019-09-30 14:47:26)
>
> On 30/09/2019 12:31, Chris Wilson wrote:
> > +static int live_parallel_switch(void *arg)
> > +{
> > +	struct drm_i915_private *i915 = arg;
> > +	static int (* const func[])(void *arg) = {
> > +		__live_parallel_switch1,
> > +		__live_parallel_switchN,
> > +		NULL,
> > +	};
> > +	struct i915_gem_context *ctx[2];
> > +	struct parallel_switch *data;
> > +	int (* const *fn)(void *arg);
> > +	struct drm_file *file;
> > +	int err = 0;
> > +	int n;
> > +
> > +	/*
> > +	 * Check we can process switches on all engines simultaneously.
> > +	 */
> > +
> > +	if (!DRIVER_CAPS(i915)->has_logical_contexts)
> > +		return 0;
> > +
> > +	file = mock_file(i915);
> > +	if (IS_ERR(file))
> > +		return PTR_ERR(file);
> > +
> > +	data = kcalloc(I915_NUM_ENGINES, sizeof(*data), GFP_KERNEL);
>
> There is a little bit of mixing up I915_NUM_ENGINES and gem engines
> (which contains the num_engines field) in this function.
>
> I think it would be better to limit to one - so maybe get the count from
> gem engines? It can't change during selftest so don't have to have them
> locked for the whole time.
>
> > +	if (!data)
>
> mock_file_free
>
> > +		return -ENOMEM;
> > +
> > +	for (n = 0; n < ARRAY_SIZE(ctx); n++) {
> > +		struct i915_gem_engines_iter it;
> > +		struct intel_context *ce;
> > +
> > +		mutex_lock(&i915->drm.struct_mutex);
> > +		ctx[n] = live_context(i915, file);
> > +		if (IS_ERR(ctx[n])) {
> > +			err = PTR_ERR(ctx[n]);
> > +			goto out_locked;
> > +		}
> > +
> > +		for_each_gem_engine(ce,
> > +				    i915_gem_context_lock_engines(ctx[n]), it) {
> > +			err = intel_context_pin(ce);
> > +			if (err) {
> > +				i915_gem_context_unlock_engines(ctx[n]);
> > +				goto out_locked;
> > +			}
> > +			data[ce->engine->legacy_idx].ce[n] = ce;
>
> IMHO a bit confusing to use legacy_idx - makes it sound like there is
> some significance to the legacy part so why not just use engine->id?

Default engine list with legacy_idx is nice and linear, with a cap of
I915_NUM_ENGINES.

Ok, I have a weirder plan...
-Chris
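
Purely as an illustration of the suggestion above, a minimal sketch of the
allocation path with the count taken from the context's engine list instead
of I915_NUM_ENGINES, and with the mock file released when kcalloc() fails.
It assumes the allocation is moved below the creation of ctx[0], and that
live_context(), i915_gem_context_lock_engines() and mock_file_free() keep
the shapes they have elsewhere in the i915 selftests of this period; it is
a sketch of the review feedback, not the code that was eventually posted.

	/*
	 * Sketch only: assumes ctx[0] has already been created with
	 * live_context() under struct_mutex, i.e. the kcalloc() has been
	 * moved below the context-creation loop.
	 */
	struct i915_gem_engines *engines;
	unsigned int count;

	/*
	 * The engine set cannot change while the selftest runs, so take
	 * the count once and drop the engines lock straight away.
	 */
	engines = i915_gem_context_lock_engines(ctx[0]);
	count = engines->num_engines;
	i915_gem_context_unlock_engines(ctx[0]);

	data = kcalloc(count, sizeof(*data), GFP_KERNEL);
	if (!data) {
		mock_file_free(i915, file); /* don't leak the mock file */
		return -ENOMEM;
	}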