Re: [PATCH] drm/i915/gt: Defend against concurrent updates to execlists->active

Quoting Mika Kuoppala (2020-03-09 15:34:40)
> Chris Wilson <chris@xxxxxxxxxxxxxxxxxx> writes:
> 
> > [  206.875637] BUG: KCSAN: data-race in __i915_schedule+0x7fc/0x930 [i915]
> > [  206.875654]
> > [  206.875666] race at unknown origin, with read to 0xffff8881f7644480 of 8 bytes by task 703 on cpu 3:
> > [  206.875901]  __i915_schedule+0x7fc/0x930 [i915]
> > [  206.876130]  __bump_priority+0x63/0x80 [i915]
> > [  206.876361]  __i915_sched_node_add_dependency+0x258/0x300 [i915]
> > [  206.876593]  i915_sched_node_add_dependency+0x50/0xa0 [i915]
> > [  206.876824]  i915_request_await_dma_fence+0x1da/0x530 [i915]
> > [  206.877057]  i915_request_await_object+0x2fe/0x470 [i915]
> > [  206.877287]  i915_gem_do_execbuffer+0x45dc/0x4c20 [i915]
> > [  206.877517]  i915_gem_execbuffer2_ioctl+0x2c3/0x580 [i915]
> > [  206.877535]  drm_ioctl_kernel+0xe4/0x120
> > [  206.877549]  drm_ioctl+0x297/0x4c7
> > [  206.877563]  ksys_ioctl+0x89/0xb0
> > [  206.877577]  __x64_sys_ioctl+0x42/0x60
> > [  206.877591]  do_syscall_64+0x6e/0x2c0
> > [  206.877606]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
> >
> > References: https://gitlab.freedesktop.org/drm/intel/issues/1318
> > Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> > ---
> >  drivers/gpu/drm/i915/gt/intel_engine.h | 12 +++++++++++-
> >  1 file changed, 11 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h b/drivers/gpu/drm/i915/gt/intel_engine.h
> > index 29c8c03c5caa..f267f51c457c 100644
> > --- a/drivers/gpu/drm/i915/gt/intel_engine.h
> > +++ b/drivers/gpu/drm/i915/gt/intel_engine.h
> > @@ -107,7 +107,17 @@ execlists_num_ports(const struct intel_engine_execlists * const execlists)
> >  static inline struct i915_request *
> >  execlists_active(const struct intel_engine_execlists *execlists)
> >  {
> > -     return *READ_ONCE(execlists->active);
> > +     struct i915_request * const *cur = READ_ONCE(execlists->active);
> > +     struct i915_request * const *old;
> > +     struct i915_request *active;
> > +
> > +     do {
> > +             old = cur;
> > +             active = READ_ONCE(*cur);
> > +             cur = READ_ONCE(execlists->active);
> > +     } while (cur != old);
> > +
> > +     return active;
> 
> The update side is scary. We are updating execlists->active
> in two phases and handling the array copy in between.
> 
> As WRITE_ONCE() only guarantees ordering within one context, since
> it is a compiler barrier only, it makes me very suspicious about
> how the memcpy of pending->inflight might unravel between two CPUs.
> 
> smp_store_mb(execlists->active, execlists->pending);
> memcpy(inflight, pending);
> smp_wmb();
> smp_store_mb(execlists->active, execlists->inflight);
> smp_store_mb(execlists->pending[0], NULL);

Not quite; you have some overkill on the mb there.

If you want to be pedantic,

WRITE_ONCE(active, pending);
smp_wmb();

memcpy(inflight, pending);
smp_wmb();
WRITE_ONCE(active, inflight);

The update of pending is not part of this sequence.
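The two-phase publish above can be sketched as a user-space analogue, with
C11 release stores standing in for WRITE_ONCE()/smp_wmb(). All names here
(pending, inflight, publish_inflight, NPORTS) are illustrative, not the
actual i915 structures:

```c
#include <stdatomic.h>
#include <string.h>

#define NPORTS 2

static int pending[NPORTS] = { 1, 2 };
static int inflight[NPORTS];

/* stands in for execlists->active */
static _Atomic(int *) active = inflight;

static void publish_inflight(void)
{
	/* phase 1: point readers at pending[], which stays immutable
	 * while it is visible; release orders prior writes before it */
	atomic_store_explicit(&active, pending, memory_order_release);

	/* copy pending into inflight while readers use pending[] */
	memcpy(inflight, pending, sizeof(pending));

	/* phase 2: release-publish the now-populated inflight[] */
	atomic_store_explicit(&active, inflight, memory_order_release);
}
```

At every instant, whichever array 'active' points at is fully populated,
which is the property the read side relies on.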

But do we need that? I still think we do not.

> This is paired with:
> 
> active = READ_ONCE(*cur);
> smp_rmb();
> cur = READ_ONCE(execlists->active);
> 
> With this, it should not matter at which point execlists->active
> is sampled, as pending would be guaranteed to be
> immutable if it is sampled early, and inflight immutable if it is
> sampled late?

Simply because we don't care about the sampling, just that the read
dependency gives us a valid pointer. (We are not looking at a snapshot
of several reads, but a _single_ read and the data dependency from
that.)
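A minimal user-space sketch of that single-read argument, with a
consume-ordered C11 load standing in for READ_ONCE(); every name here
(request, port, sample_active) is hypothetical:

```c
#include <stdatomic.h>
#include <stddef.h>

struct request { int seqno; };

static struct request rq = { .seqno = 1 };
static struct request *port[] = { &rq, NULL };

/* stands in for execlists->active */
static _Atomic(struct request **) active = port;

static struct request *sample_active(void)
{
	/* one load of the pointer ... */
	struct request **cur =
		atomic_load_explicit(&active, memory_order_consume);

	/* ... and a dereference that carries a data dependency from
	 * it: whichever array the writer had published, *cur reads a
	 * valid slot in that array, so no retry loop or extra read
	 * barrier is required */
	return *cur;
}
```

Because each update-side store only ever installs a pointer to a
fully populated array, the single dependent dereference is safe.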
-Chris