Quoting Tvrtko Ursulin (2019-04-29 15:12:23)
> On 25/04/2019 10:19, Chris Wilson wrote:
> >  static void virtual_submission_tasklet(unsigned long data)
> >  {
> >  	struct virtual_engine * const ve = (struct virtual_engine *)data;
> >  	const int prio = ve->base.execlists.queue_priority_hint;
> > +	intel_engine_mask_t mask;
> >  	unsigned int n;
> >
> > +	rcu_read_lock();
> > +	mask = virtual_submission_mask(ve);
> > +	rcu_read_unlock();
> > +	if (unlikely(!mask))
>
> Is the rcu_read_lock() thing solely for the same protection against
> wedging in submit_notify?

No. We may still be in the rbtree of the physical engines, and
ve->request may be plucked out from underneath us as we read it. In the
time it takes to track it down, that request may have been executed,
retired and freed. To prevent the dangling stale dereference, we use
rcu_read_lock() here as we peek into the request, and spinlocks around
the actual transfer to the execution backend.
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
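
The pattern described above — an RCU read-side critical section around the peek at ve->request, with spinlocks reserved for the actual transfer to the backend — could be sketched roughly like this. This is an illustrative, simplified fragment only, not the actual i915 implementation; the body of virtual_submission_mask() and the rq->execution_mask field are assumed here for the sake of the example:

```
/* Hypothetical sketch of the RCU peek pattern, not the real driver code.
 * Caller holds rcu_read_lock(): the request may be retired and freed by
 * another engine at any time, but RCU guarantees the memory is not
 * reused while we look at it, so the read cannot fault or see
 * recycled storage. */
static intel_engine_mask_t virtual_submission_mask(struct virtual_engine *ve)
{
	struct i915_request *rq;

	/* READ_ONCE: ve->request may be plucked out concurrently. */
	rq = READ_ONCE(ve->request);
	if (!rq)
		return 0;

	/* Peek only; the actual move of the request onto a physical
	 * engine is done later under the engine's spinlock. */
	return rq->execution_mask;
}
```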