On Fri, Oct 25, 2024 at 06:06:47PM +0200, Nirmoy Das wrote:
> On 10/24/2024 7:22 PM, Matthew Brost wrote:
> > On Thu, Oct 24, 2024 at 10:14:21AM -0700, John Harrison wrote:
> > > On 10/24/2024 08:18, Nirmoy Das wrote:
> > > > Flush xe ordered_wq in case of ufence timeout, which is observed
> > > > on LNL and points to the recent scheduling issue with E-cores.
> > > >
> > > > This is similar to the recent fix:
> > > > commit e51527233804 ("drm/xe/guc/ct: Flush g2h worker in case of g2h
> > > > response timeout") and should be removed once there is an E-core
> > > > scheduling fix.
> > > >
> > > > v2: Add platform check (Himal)
> > > >     s/__flush_workqueue/flush_workqueue (Jani)
> > > >
> > > > Cc: Badal Nilawar [1]<badal.nilawar@xxxxxxxxx>
> > > > Cc: Jani Nikula [2]<jani.nikula@xxxxxxxxx>
> > > > Cc: Matthew Auld [3]<matthew.auld@xxxxxxxxx>
> > > > Cc: John Harrison [4]<John.C.Harrison@xxxxxxxxx>
> > > > Cc: Himal Prasad Ghimiray [5]<himal.prasad.ghimiray@xxxxxxxxx>
> > > > Cc: Lucas De Marchi [6]<lucas.demarchi@xxxxxxxxx>
> > > > Cc: [7]<stable@xxxxxxxxxxxxxxx> # v6.11+
> > > > Link: [8]https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/2754
> > > > Suggested-by: Matthew Brost [9]<matthew.brost@xxxxxxxxx>
> > > > Signed-off-by: Nirmoy Das [10]<nirmoy.das@xxxxxxxxx>
> > > > Reviewed-by: Matthew Brost [11]<matthew.brost@xxxxxxxxx>
> > > > ---
> > > >  drivers/gpu/drm/xe/xe_wait_user_fence.c | 14 ++++++++++++++
> > > >  1 file changed, 14 insertions(+)
> > > >
> > > > diff --git a/drivers/gpu/drm/xe/xe_wait_user_fence.c b/drivers/gpu/drm/xe/xe_wait_user_fence.c
> > > > index f5deb81eba01..78a0ad3c78fe 100644
> > > > --- a/drivers/gpu/drm/xe/xe_wait_user_fence.c
> > > > +++ b/drivers/gpu/drm/xe/xe_wait_user_fence.c
> > > > @@ -13,6 +13,7 @@
> > > >  #include "xe_device.h"
> > > >  #include "xe_gt.h"
> > > >  #include "xe_macros.h"
> > > > +#include "compat-i915-headers/i915_drv.h"
> > > >  #include "xe_exec_queue.h"
> > > >
> > > >  static int do_compare(u64 addr, u64 value, u64 mask, u16 op)
> > > > @@ -155,6 +156,19 @@ int xe_wait_user_fence_ioctl(struct drm_device *dev, void *data,
> > > >  		}
> > > >
> > > >  		if (!timeout) {
> > > > +			if (IS_LUNARLAKE(xe)) {
> > > > +				/*
> > > > +				 * This is analogous to e51527233804 ("drm/xe/guc/ct: Flush g2h
> > > > +				 * worker in case of g2h response timeout")
> > > > +				 *
> > > > +				 * TODO: Drop this change once workqueue scheduling delay issue is
> > > > +				 * fixed on LNL Hybrid CPU.
> > > > +				 */
> > > > +				flush_workqueue(xe->ordered_wq);
> > >
> > > If we are having multiple instances of this workaround, can we wrap them
> > > up in a 'LNL_FLUSH_WORKQUEUE(q)' or some such? Put the IS_LNL check
> > > inside the macro and make it pretty obvious exactly where all the
> > > instances are by having a single macro name to search for.
> >
> > +1, I think Lucas is suggesting something similar to this on the chat to
> > make sure we don't lose track of removing these W/As when this gets
> > fixed.
> >
> > Matt
>
> Sounds good. I will add LNL_FLUSH_WORKQUEUE() and use that for all the
> places we need this WA.

You will need 2 macros...

- LNL_FLUSH_WORKQUEUE() which accepts xe_device, workqueue_struct
- LNL_FLUSH_WORK() which accepts xe_device, work_struct

Matt

> Regards,
>
> Nirmoy
>
> > > John.
>
> > > > +				err = do_compare(addr, args->value, args->mask, args->op);
> > > > +				if (err <= 0)
> > > > +					break;
> > > > +			}
> > > >  			err = -ETIME;
> > > >  			break;
> > > >  		}

References

   1. mailto:badal.nilawar@xxxxxxxxx
   2. mailto:jani.nikula@xxxxxxxxx
   3. mailto:matthew.auld@xxxxxxxxx
   4. mailto:John.C.Harrison@xxxxxxxxx
   5. mailto:himal.prasad.ghimiray@xxxxxxxxx
   6. mailto:lucas.demarchi@xxxxxxxxx
   7. mailto:stable@xxxxxxxxxxxxxxx
   8. https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/2754
   9. mailto:matthew.brost@xxxxxxxxx
  10. mailto:nirmoy.das@xxxxxxxxx
  11. mailto:matthew.brost@xxxxxxxxx