On 07/05/2019 17:59, Chris Wilson wrote:
Quoting Tvrtko Ursulin (2019-04-29 15:12:23)
On 25/04/2019 10:19, Chris Wilson wrote:
 static void virtual_submission_tasklet(unsigned long data)
 {
 	struct virtual_engine * const ve = (struct virtual_engine *)data;
 	const int prio = ve->base.execlists.queue_priority_hint;
+	intel_engine_mask_t mask;
 	unsigned int n;

+	rcu_read_lock();
+	mask = virtual_submission_mask(ve);
+	rcu_read_unlock();
+	if (unlikely(!mask))
Is the rcu_read_lock() here solely for the same protection against
wedging as in submit_notify?
No. We may still be in the rbtree of the physical engines, and
ve->request may be plucked out from underneath us as we read it. In the
time it takes to check it, that request may have been executed,
retired and freed. To prevent the dangling stale dereference, we use
rcu_read_lock() here as we peek into the request, and spinlocks around
the actual transfer to the execution backend.
So it's not actually about ve->request as a member pointer, but about the
request object itself. That could make sense, but then wouldn't you need
to hold the rcu_read_lock over the whole tasklet? There is another
ve->request read in the for loop just below, although not an actual
dereference. I guess I just answered my own question. Okay, looks good then.
Regards,
Tvrtko
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx