On Fri, Apr 30, 2021 at 6:18 AM Tvrtko Ursulin
<tvrtko.ursulin@xxxxxxxxxxxxxxx> wrote:
>
>
> On 29/04/2021 15:54, Jason Ekstrand wrote:
> > On Thu, Apr 29, 2021 at 3:04 AM Tvrtko Ursulin
> > <tvrtko.ursulin@xxxxxxxxxxxxxxx> wrote:
> >>
> >>
> >> On 28/04/2021 18:24, Jason Ekstrand wrote:
> >>> On Wed, Apr 28, 2021 at 10:55 AM Tvrtko Ursulin
> >>> <tvrtko.ursulin@xxxxxxxxxxxxxxx> wrote:
> >>>> On 23/04/2021 23:31, Jason Ekstrand wrote:
> >>>>> Instead of handling it like a context param, unconditionally set it when
> >>>>> intel_contexts are created. This doesn't fix anything but does simplify
> >>>>> the code a bit.
> >>>>>
> >>>>> Signed-off-by: Jason Ekstrand <jason@xxxxxxxxxxxxxx>
> >>>>> ---
> >>>>>   drivers/gpu/drm/i915/gem/i915_gem_context.c   | 43 +++----------
> >>>>>   .../gpu/drm/i915/gem/i915_gem_context_types.h |  4 --
> >>>>>   drivers/gpu/drm/i915/gt/intel_context_param.h |  3 +-
> >>>>>   3 files changed, 6 insertions(+), 44 deletions(-)
> >>>>>
> >>>>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> >>>>> index 35bcdeddfbf3f..1091cc04a242a 100644
> >>>>> --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
> >>>>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> >>>>> @@ -233,7 +233,11 @@ static void intel_context_set_gem(struct intel_context *ce,
> >>>>>              intel_engine_has_timeslices(ce->engine))
> >>>>>              __set_bit(CONTEXT_USE_SEMAPHORES, &ce->flags);
> >>>>>
> >>>>> -     intel_context_set_watchdog_us(ce, ctx->watchdog.timeout_us);
> >>>>> +     if (IS_ACTIVE(CONFIG_DRM_I915_REQUEST_TIMEOUT) &&
> >>>>> +         ctx->i915->params.request_timeout_ms) {
> >>>>> +             unsigned int timeout_ms = ctx->i915->params.request_timeout_ms;
> >>>>> +             intel_context_set_watchdog_us(ce, (u64)timeout_ms * 1000);
> >>>>
> >>>> Blank line between declarations and code please, or just lose the local.
> >>>>
> >>>> Otherwise looks okay. Slight change that same GEM context can now have a
> >>>> mix of different request expirations isn't interesting I think. At least
> >>>> the change goes away by the end of the series.
> >>>
> >>> In order for that to happen, I think you'd have to have a race between
> >>> CREATE_CONTEXT and someone smashing the request_timeout_ms param via
> >>> sysfs. Or am I missing something? Given that timeouts are really
> >>> per-engine anyway, I don't think we need to care too much about that.
> >>
> >> We don't care, no.
> >>
> >> For completeness only - by the end of the series it is what you say. But
> >> at _this_ point in the series though it is if modparam changes at any
> >> point between context create and replacing engines. Which is a change
> >> compared to before this patch, since modparam was cached in the GEM
> >> context so far. So one GEM context was a single request_timeout_ms.
> >
> > I've added the following to the commit message:
> >
> >     It also means that sync files exported from different engines on a
> >     SINGLE_TIMELINE context will have different fence contexts. This is
> >     visible to userspace if it looks at the obj_name field of
> >     sync_fence_info.
> >
> > How's that sound?
>
> Wrong thread but sounds good.
>
> I haven't looked into the fence merge logic apart from noticing context
> is used there. So I'd suggest a quick look there on top, just to make
> sure merging logic does not hold any surprises if contexts start to
> differ. Probably just results with more inefficiency somewhere, in theory.

Looked at it yesterday. It really does just create a fence array with
all the fences. :-)
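In case it's useful context, here is a rough, untested sketch of the shape
of that merge using the generic dma-fence helpers (dma_fence_array_create()
from include/linux/dma-fence-array.h). The merge_fences() wrapper is made
up for illustration, it is not the actual i915 or sync_file code:

#include <linux/dma-fence.h>
#include <linux/dma-fence-array.h>

static struct dma_fence *merge_fences(struct dma_fence **fences, int n)
{
	struct dma_fence_array *array;

	/* A single fence needs no array wrapper. */
	if (n == 1)
		return dma_fence_get(fences[0]);

	/*
	 * dma_fence_array_create() takes ownership of the fences[]
	 * references; the array gets its own fence context, so callers
	 * don't have to care that the inputs came from different ones.
	 */
	array = dma_fence_array_create(n, fences,
				       dma_fence_context_alloc(1), 1,
				       false);
	if (!array)
		return NULL;

	return &array->base;
}

The one behavioural wrinkle I noticed: sync_file's merge path dedups fences
that share a fence context, keeping only the later seqno, and that dedup is
exactly what stops firing once every engine gets its own context. So "more
inefficiency somewhere, in theory" matches what I'd expect: bigger fence
arrays, nothing incorrect.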
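And since the commit message mentions obj_name being visible to userspace:
that comes back through the SYNC_IOC_FILE_INFO ioctl on the sync_file fd.
A minimal userspace sketch of the usual two-step query, assuming the uapi
structs from <linux/sync_file.h> (dump_fence_names() is just an
illustrative name):

#include <linux/sync_file.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>

static void dump_fence_names(int sync_file_fd)
{
	struct sync_file_info info = { 0 };
	struct sync_fence_info *fences;
	uint32_t i;

	/* First call, with num_fences left at 0, only fills num_fences. */
	if (ioctl(sync_file_fd, SYNC_IOC_FILE_INFO, &info))
		return;

	fences = calloc(info.num_fences, sizeof(*fences));
	if (!fences)
		return;

	/* Second call fills one sync_fence_info record per fence. */
	info.sync_fence_info = (uint64_t)(uintptr_t)fences;
	if (ioctl(sync_file_fd, SYNC_IOC_FILE_INFO, &info) == 0) {
		for (i = 0; i < info.num_fences; i++)
			printf("fence %u: obj_name=%s driver=%s\n",
			       i, fences[i].obj_name,
			       fences[i].driver_name);
	}

	free(fences);
}

With the patch as it stands, a sync file merged from several engines of one
SINGLE_TIMELINE context would report per-engine timeline names here rather
than a single one, which is the userspace-visible change the commit message
now calls out.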
--Jason