On Fri, Mar 22, 2024 at 10:02:09AM +0100, Thomas Hellström wrote:
> They can actually complete out-of-order, so allocate a unique
> fence context for each fence.
> 

Sending to correct rev...

Yes, indeed these can complete out of order on different xe_exec_queues but
should be ordered within an xe_exec_queue. In addition to this patch I think
we will need [1] too.

This patch LGTM though, with that:
Reviewed-by: Matthew Brost <matthew.brost@xxxxxxxxx>

[1] https://patchwork.freedesktop.org/patch/582006/?series=125608&rev=5

> Fixes: 5387e865d90e ("drm/xe: Add TLB invalidation fence after rebinds issued from execs")
> Cc: Matthew Brost <matthew.brost@xxxxxxxxx>
> Cc: <stable@xxxxxxxxxxxxxxx> # v6.8+
> Signed-off-by: Thomas Hellström <thomas.hellstrom@xxxxxxxxxxxxxxx>
> ---
>  drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c | 1 -
>  drivers/gpu/drm/xe/xe_gt_types.h            | 7 -------
>  drivers/gpu/drm/xe/xe_pt.c                  | 3 +--
>  3 files changed, 1 insertion(+), 10 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
> index 25b4111097bc..93df2d7969b3 100644
> --- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
> +++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
> @@ -63,7 +63,6 @@ int xe_gt_tlb_invalidation_init(struct xe_gt *gt)
>  	INIT_LIST_HEAD(&gt->tlb_invalidation.pending_fences);
>  	spin_lock_init(&gt->tlb_invalidation.pending_lock);
>  	spin_lock_init(&gt->tlb_invalidation.lock);
> -	gt->tlb_invalidation.fence_context = dma_fence_context_alloc(1);
>  	INIT_DELAYED_WORK(&gt->tlb_invalidation.fence_tdr,
>  			  xe_gt_tlb_fence_timeout);
>  
> diff --git a/drivers/gpu/drm/xe/xe_gt_types.h b/drivers/gpu/drm/xe/xe_gt_types.h
> index f6da2ad9719f..2143dffcaf11 100644
> --- a/drivers/gpu/drm/xe/xe_gt_types.h
> +++ b/drivers/gpu/drm/xe/xe_gt_types.h
> @@ -179,13 +179,6 @@ struct xe_gt {
>  		 * xe_gt_tlb_fence_timeout after the timeut interval is over.
>  		 */
>  		struct delayed_work fence_tdr;
> -		/** @tlb_invalidation.fence_context: context for TLB invalidation fences */
> -		u64 fence_context;
> -		/**
> -		 * @tlb_invalidation.fence_seqno: seqno to TLB invalidation fences, protected by
> -		 * tlb_invalidation.lock
> -		 */
> -		u32 fence_seqno;
>  		/** @tlb_invalidation.lock: protects TLB invalidation fences */
>  		spinlock_t lock;
>  	} tlb_invalidation;
> diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
> index 632c1919471d..d1b999dbc906 100644
> --- a/drivers/gpu/drm/xe/xe_pt.c
> +++ b/drivers/gpu/drm/xe/xe_pt.c
> @@ -1135,8 +1135,7 @@ static int invalidation_fence_init(struct xe_gt *gt,
>  	spin_lock_irq(&gt->tlb_invalidation.lock);
>  	dma_fence_init(&ifence->base.base, &invalidation_fence_ops,
>  		       &gt->tlb_invalidation.lock,
> -		       gt->tlb_invalidation.fence_context,
> -		       ++gt->tlb_invalidation.fence_seqno);
> +		       dma_fence_context_alloc(1), 1);
>  	spin_unlock_irq(&gt->tlb_invalidation.lock);
>  
>  	INIT_LIST_HEAD(&ifence->base.link);
> -- 
> 2.44.0
> 
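
As an aside for anyone skimming the thread, a minimal sketch of the reasoning,
assuming only the core dma_fence API; the helper name below is made up for
illustration and is not part of the patch. dma_fence seqnos only express
ordering between fences that share a context, so a single shared context plus
an incrementing seqno implies an ordering these fences no longer have. Giving
each fence a freshly allocated, single-use context, as the patch does in
invalidation_fence_init(), makes each fence stand alone:

#include <linux/dma-fence.h>
#include <linux/spinlock.h>

/*
 * Hypothetical helper: initialize a fence that carries no ordering
 * relationship to any other fence.  dma_fence_is_later() only compares
 * seqnos of fences from the same context, so a context allocated for
 * exactly one fence (seqno 1) cannot be ordered against anything else.
 */
static void init_unordered_fence(struct dma_fence *fence,
				 const struct dma_fence_ops *ops,
				 spinlock_t *lock)
{
	u64 context = dma_fence_context_alloc(1);	/* context for a single fence */

	dma_fence_init(fence, ops, lock, context, 1);	/* first and only seqno */
}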