Re: [Intel-gfx] [PATCH 03/21] drm/i915/gem: Set the watchdog timeout directly in intel_context_set_gem

On Thu, Apr 29, 2021 at 3:04 AM Tvrtko Ursulin
<tvrtko.ursulin@xxxxxxxxxxxxxxx> wrote:
>
>
> On 28/04/2021 18:24, Jason Ekstrand wrote:
> > On Wed, Apr 28, 2021 at 10:55 AM Tvrtko Ursulin
> > <tvrtko.ursulin@xxxxxxxxxxxxxxx> wrote:
> >> On 23/04/2021 23:31, Jason Ekstrand wrote:
> >>> Instead of handling it like a context param, unconditionally set it when
> >>> intel_contexts are created.  This doesn't fix anything but does simplify
> >>> the code a bit.
> >>>
> >>> Signed-off-by: Jason Ekstrand <jason@xxxxxxxxxxxxxx>
> >>> ---
> >>>    drivers/gpu/drm/i915/gem/i915_gem_context.c   | 43 +++----------------
> >>>    .../gpu/drm/i915/gem/i915_gem_context_types.h |  4 --
> >>>    drivers/gpu/drm/i915/gt/intel_context_param.h |  3 +-
> >>>    3 files changed, 6 insertions(+), 44 deletions(-)
> >>>
> >>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> >>> index 35bcdeddfbf3f..1091cc04a242a 100644
> >>> --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
> >>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> >>> @@ -233,7 +233,11 @@ static void intel_context_set_gem(struct intel_context *ce,
> >>>            intel_engine_has_timeslices(ce->engine))
> >>>                __set_bit(CONTEXT_USE_SEMAPHORES, &ce->flags);
> >>>
> >>> -     intel_context_set_watchdog_us(ce, ctx->watchdog.timeout_us);
> >>> +     if (IS_ACTIVE(CONFIG_DRM_I915_REQUEST_TIMEOUT) &&
> >>> +         ctx->i915->params.request_timeout_ms) {
> >>> +             unsigned int timeout_ms = ctx->i915->params.request_timeout_ms;
> >>> +             intel_context_set_watchdog_us(ce, (u64)timeout_ms * 1000);
> >>
> >> Blank line between declarations and code please, or just lose the local.
> >>
> >> Otherwise looks okay. The slight change that the same GEM context can
> >> now have a mix of different request expirations isn't interesting, I
> >> think. At least the change goes away by the end of the series.
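
On the style nit: presumably the respin just keeps the local and gains
the blank line, i.e. roughly this (sketch only, not the final hunk):

    if (IS_ACTIVE(CONFIG_DRM_I915_REQUEST_TIMEOUT) &&
        ctx->i915->params.request_timeout_ms) {
            unsigned int timeout_ms = ctx->i915->params.request_timeout_ms;

            /* The modparam is in milliseconds, the watchdog in microseconds. */
            intel_context_set_watchdog_us(ce, (u64)timeout_ms * 1000);
    }
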
> >
> > In order for that to happen, I think you'd have to have a race between
> > CREATE_CONTEXT and someone smashing the request_timeout_ms param via
> > sysfs.  Or am I missing something?  Given that timeouts are really
> > per-engine anyway, I don't think we need to care too much about that.
>
> We don't care, no.
>
> For completeness only - by the end of the series it is what you say. At
> _this_ point in the series, though, the mix can happen if the modparam
> changes at any point between context create and replacing engines. That
> is a change compared to before this patch, since so far the modparam was
> cached in the GEM context, so one GEM context had a single
> request_timeout_ms.

I've added the following to the commit message:

It also means that sync files exported from different engines on a
SINGLE_TIMELINE context will have different fence contexts.  This is
visible to userspace if it looks at the obj_name field of
sync_fence_info.

How's that sound?
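
For reference, userspace would typically see that name via the sync_file
info ioctl, roughly along these lines (a sketch with a made-up helper,
error handling mostly omitted):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/sync_file.h>

/* Print driver_name/obj_name for each fence backing a sync_file fd. */
static void dump_fence_names(int sync_fd)
{
	struct sync_file_info info = { .num_fences = 0 };
	struct sync_fence_info *fences;
	uint32_t i;

	/* First call with num_fences == 0 only reports the fence count. */
	if (ioctl(sync_fd, SYNC_IOC_FILE_INFO, &info) || !info.num_fences)
		return;

	fences = calloc(info.num_fences, sizeof(*fences));
	if (!fences)
		return;
	info.sync_fence_info = (uint64_t)(uintptr_t)fences;

	/* Second call fills in per-fence info, including obj_name. */
	if (!ioctl(sync_fd, SYNC_IOC_FILE_INFO, &info))
		for (i = 0; i < info.num_fences; i++)
			printf("%s: %s\n", fences[i].driver_name,
			       fences[i].obj_name);

	free(fences);
}

The obj_name reported there is where the difference in fence contexts
mentioned above becomes visible.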

--Jason

> Regards,
>
> Tvrtko
>
> > --Jason
> >
> >> Regards,
> >>
> >> Tvrtko
> >>
> >>> +     }
> >>>    }
> >>>
> >>>    static void __free_engines(struct i915_gem_engines *e, unsigned int count)
> >>> @@ -792,41 +796,6 @@ static void __assign_timeline(struct i915_gem_context *ctx,
> >>>        context_apply_all(ctx, __apply_timeline, timeline);
> >>>    }
> >>>
> >>> -static int __apply_watchdog(struct intel_context *ce, void *timeout_us)
> >>> -{
> >>> -     return intel_context_set_watchdog_us(ce, (uintptr_t)timeout_us);
> >>> -}
> >>> -
> >>> -static int
> >>> -__set_watchdog(struct i915_gem_context *ctx, unsigned long timeout_us)
> >>> -{
> >>> -     int ret;
> >>> -
> >>> -     ret = context_apply_all(ctx, __apply_watchdog,
> >>> -                             (void *)(uintptr_t)timeout_us);
> >>> -     if (!ret)
> >>> -             ctx->watchdog.timeout_us = timeout_us;
> >>> -
> >>> -     return ret;
> >>> -}
> >>> -
> >>> -static void __set_default_fence_expiry(struct i915_gem_context *ctx)
> >>> -{
> >>> -     struct drm_i915_private *i915 = ctx->i915;
> >>> -     int ret;
> >>> -
> >>> -     if (!IS_ACTIVE(CONFIG_DRM_I915_REQUEST_TIMEOUT) ||
> >>> -         !i915->params.request_timeout_ms)
> >>> -             return;
> >>> -
> >>> -     /* Default expiry for user fences. */
> >>> -     ret = __set_watchdog(ctx, i915->params.request_timeout_ms * 1000);
> >>> -     if (ret)
> >>> -             drm_notice(&i915->drm,
> >>> -                        "Failed to configure default fence expiry! (%d)",
> >>> -                        ret);
> >>> -}
> >>> -
> >>>    static struct i915_gem_context *
> >>>    i915_gem_create_context(struct drm_i915_private *i915, unsigned int flags)
> >>>    {
> >>> @@ -871,8 +840,6 @@ i915_gem_create_context(struct drm_i915_private *i915, unsigned int flags)
> >>>                intel_timeline_put(timeline);
> >>>        }
> >>>
> >>> -     __set_default_fence_expiry(ctx);
> >>> -
> >>>        trace_i915_context_create(ctx);
> >>>
> >>>        return ctx;
> >>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
> >>> index 5ae71ec936f7c..676592e27e7d2 100644
> >>> --- a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
> >>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
> >>> @@ -153,10 +153,6 @@ struct i915_gem_context {
> >>>         */
> >>>        atomic_t active_count;
> >>>
> >>> -     struct {
> >>> -             u64 timeout_us;
> >>> -     } watchdog;
> >>> -
> >>>        /**
> >>>         * @hang_timestamp: The last time(s) this context caused a GPU hang
> >>>         */
> >>> diff --git a/drivers/gpu/drm/i915/gt/intel_context_param.h b/drivers/gpu/drm/i915/gt/intel_context_param.h
> >>> index dffedd983693d..0c69cb42d075c 100644
> >>> --- a/drivers/gpu/drm/i915/gt/intel_context_param.h
> >>> +++ b/drivers/gpu/drm/i915/gt/intel_context_param.h
> >>> @@ -10,11 +10,10 @@
> >>>
> >>>    #include "intel_context.h"
> >>>
> >>> -static inline int
> >>> +static inline void
> >>>    intel_context_set_watchdog_us(struct intel_context *ce, u64 timeout_us)
> >>>    {
> >>>        ce->watchdog.timeout_us = timeout_us;
> >>> -     return 0;
> >>>    }
> >>>
> >>>    #endif /* INTEL_CONTEXT_PARAM_H */
> >>>