Quoting Daniel Vetter (2017-08-21 17:16:24)
> On Sat, Aug 19, 2017 at 01:05:58PM +0100, Chris Wilson wrote:
> > This is the same bug as we fixed in commit f6cd7daecff5 ("drm: Release
> > driver references to handle before making it available again"), but now
> > the exposure is via the PRIME lookup tables. If we remove the
> > object/handle from the PRIME lut, then a new request for the same
> > object/fd will generate a new handle, thus for a short window that
> > object is known to userspace by two different handles. Fix this by
> > releasing the driver tracking before PRIME.
> >
> > Fixes: 0ff926c7d4f0 ("drm/prime: add exported buffers to current fprivs
> > imported buffer list (v2)")
> > Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> > Cc: David Airlie <airlied@xxxxxxxx>
> > Cc: Daniel Vetter <daniel.vetter@xxxxxxxxx>
> > Cc: Rob Clark <robdclark@xxxxxxxxx>
> > Cc: Ville Syrjälä <ville.syrjala@xxxxxxxxxxxxxxx>
> > Cc: Thierry Reding <treding@xxxxxxxxxx>
> > Cc: stable@xxxxxxxxxxxxxxx
>
> Do we have an evil igt for this? I guess since the old one didn't have
> one, this new race is also hard to reproduce ...

We did hit the old one in igt (gem_concurrent_blit), but only by virtue
of it running long enough to spot the race (ending up with two handles
to the same object in an execbuf call). This one requires racing
dma-buf import/close against execbuf on the same handles. It's the type
of race gem_close_race is looking for (except that it doesn't cover
dma-buf yet), but we are reliant on having a means to detect the race.
At the moment we would detect it if you ended up with two handles to
the same object within the execbuf (which is plausible, as you can
currently create that second handle before we mark the first as closed,
but hitting it will require some unfair queueing on struct_mutex), or
if we ended up with two handles to the vma on close.

Hmm, one way to make the race easier to hit is to add a sleep to
i915_gem_close_object before we take the struct_mutex.
-Chris
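
P.S. To make "releasing the driver tracking before PRIME" concrete,
here is a rough sketch of the resulting order in
drm_gem_object_release_handle(). This is from memory of the 2017-era
drm_gem.c, so take the helper names as approximate rather than
verbatim:

        static int
        drm_gem_object_release_handle(int id, void *ptr, void *data)
        {
                struct drm_file *file_priv = data;
                struct drm_gem_object *obj = ptr;
                struct drm_device *dev = obj->dev;

                /* Drop the driver's per-handle tracking first... */
                if (dev->driver->gem_close_object)
                        dev->driver->gem_close_object(obj, file_priv);

                /* ...and only then unpublish the handle from the PRIME
                 * lut. In the reverse order, a concurrent re-import of
                 * the same dma-buf misses the lut entry and mints a
                 * second handle while the driver still tracks the
                 * first, so for a short window userspace knows the
                 * object by two different handles.
                 */
                if (drm_core_check_feature(dev, DRIVER_PRIME))
                        drm_gem_remove_prime_handles(obj, file_priv);
                drm_vma_node_revoke(&obj->vma_node, file_priv);

                drm_gem_object_handle_put_unlocked(obj);

                return 0;
        }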
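
And the sleep hack, as a hypothetical not-for-merging diff (assuming
the close path of that era takes struct_mutex directly and that
linux/delay.h is already pulled in; the hunk position is elided):

        diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
        --- a/drivers/gpu/drm/i915/i915_gem.c
        +++ b/drivers/gpu/drm/i915/i915_gem.c
        @@ void i915_gem_close_object(struct drm_gem_object *gem, struct drm_file *file)
        +       /* Debug only: stall the close path so a concurrent
        +        * dma-buf import/execbuf of the same object can
        +        * overtake us.
        +        */
        +       msleep(10);
        +
                mutex_lock(&obj->base.dev->struct_mutex);

With the window widened like that, extending gem_close_race with a
dma-buf import/close loop should stand a fair chance of tripping the
two-handles-to-one-object detection in execbuf.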