On Thu, Jun 10, 2021 at 6:30 AM Daniel Vetter <daniel.vetter@xxxxxxxx> wrote:
>
> On Thu, Jun 10, 2021 at 11:39 AM Christian König
> <christian.koenig@xxxxxxx> wrote:
> >
> > On 10.06.21 at 11:29, Tvrtko Ursulin wrote:
> > >
> > > On 09/06/2021 22:29, Jason Ekstrand wrote:
> > >> Ever since 0eafec6d3244 ("drm/i915: Enable lockless lookup of request
> > >> tracking via RCU"), the i915 driver has used SLAB_TYPESAFE_BY_RCU (it
> > >> was called SLAB_DESTROY_BY_RCU at the time) in order to allow RCU on
> > >> i915_request.  As nifty as SLAB_TYPESAFE_BY_RCU may be, it comes with
> > >> some serious disclaimers.  In particular, objects can get recycled while
> > >> RCU readers are still in-flight.  This can be ok if everyone who touches
> > >> these objects knows about the disclaimers and is careful.  However,
> > >> because we've chosen to use SLAB_TYPESAFE_BY_RCU for i915_request and
> > >> because i915_request contains a dma_fence, we've leaked
> > >> SLAB_TYPESAFE_BY_RCU and its whole pile of disclaimers to every driver
> > >> in the kernel which may consume a dma_fence.
> > >
> > > I don't think the part about leaking is true...
> > >
> > >> We've tried to keep it somewhat contained by doing most of the hard work
> > >> to prevent access of recycled objects via dma_fence_get_rcu_safe().
> > >> However, a quick grep of kernel sources says that, of the 30 instances
> > >> of dma_fence_get_rcu*, only 11 of them use dma_fence_get_rcu_safe().
> > >> It's likely there are bear traps in DRM and related subsystems just
> > >> waiting for someone to accidentally step in them.
> > >
> > > ...because dma_fence_get_rcu_safe appears to be about whether the
> > > *pointer* to the fence itself is rcu protected, not about the fence
> > > object itself.
> >
> > Yes, exactly that.

The fact that both of you think this either means that I've completely missed what's going on with RCU here (possible but, in this case, I think unlikely) or RCU on dma fences should scare us all.
Yes, it protects against races on the dma_fence pointer itself.  However, whether or not that dma_fence pointer lives in RCU-protected memory is immaterial AFAICT.  It also does magic to deal with SLAB_TYPESAFE_BY_RCU.  Let's walk through it.  Please tell me if/where I go off the rails.

First, let's set the scenario:  The race this is protecting us against (I think) is where someone else comes along and swaps out the pointer we're trying to fetch for NULL or a different one, and then drops the last reference.

Before we get to dma_fence_get_rcu_safe(), the caller has taken an RCU read lock.  Then we get into the function:

	fence = rcu_dereference(*fencep);
	if (!fence)
		return NULL;

First, we dereference fencep and grab the pointer.  There's an rcu_dereference() here which does the usual RCU magic (which I don't fully understand yet) to turn an __rcu pointer into a "real" pointer.  It's possible that the pointer is NULL; if so, we bail.  We may have lost the race or it could be that the pointer was NULL the whole time.  Doesn't matter.

	if (!dma_fence_get_rcu(fence))
		continue;

This attempts to get a reference and, if it fails, continues.  More on the continue later.  For now, let's dive into dma_fence_get_rcu():

	if (kref_get_unless_zero(&fence->refcount))
		return fence;
	else
		return NULL;

So we try to get a reference unless it's zero.  This is a pretty standard pattern and, if the dma_fence was freed with kfree_rcu(), would be all we need.  If the reference count on the dma_fence drops to 0 and then the dma_fence is freed with kfree_rcu(), we're guaranteed that there is an RCU grace period between when the reference count hits 0 and when the memory is reclaimed.  Since all this happens inside the RCU read lock, if we raced with someone attempting to swap out the pointer and drop the reference count to zero, we have one of two cases:

 1. We get the old pointer but successfully take a reference.  In this case, it's the same as if we were called a few cycles earlier and straight-up won the race.
    We get the old pointer and, because we now have a reference, the object is never freed.

 2. We get the old pointer but the refcount is already zero by the time we get here.  In this case, kref_get_unless_zero() returns false and dma_fence_get_rcu() returns NULL.

If these were the only two cases we cared about, all of dma_fence_get_rcu_safe() could be implemented as follows:

	static inline struct dma_fence *
	dma_fence_get_rcu_safe(struct dma_fence **fencep)
	{
		struct dma_fence *fence;

		fence = rcu_dereference(*fencep);
		if (fence)
			fence = dma_fence_get_rcu(fence);

		return fence;
	}

and we'd be done.  The case the above code doesn't handle is if the thing we're racing with swaps it to a non-NULL pointer.  To handle that case, we throw a loop around the whole thing as follows:

	static inline struct dma_fence *
	dma_fence_get_rcu_safe(struct dma_fence **fencep)
	{
		struct dma_fence *fence;

		do {
			fence = rcu_dereference(*fencep);
			if (!fence)
				return NULL;

			fence = dma_fence_get_rcu(fence);
		} while (!fence);

		return fence;
	}

Ok, great, we've got an implementation, right?  Unfortunately, this is where SLAB_TYPESAFE_BY_RCU crashes the party.  The giant disclaimer about SLAB_TYPESAFE_BY_RCU is that memory gets recycled immediately and doesn't wait for an RCU grace period.  You're guaranteed that memory exists at that pointer so you won't get a nasty SEGFAULT, and you're guaranteed that the memory is still a dma_fence, but you're not guaranteed anything else.  In particular, there's a 3rd case:

 3. We get an old pointer but it's been recycled and points to a totally different dma_fence whose reference count is non-zero.  In this case, rcu_dereference() returns non-NULL and kref_get_unless_zero() succeeds, but we still managed to end up with the wrong fence.

To deal with 3, we do this:

	/* The atomic_inc_not_zero() inside dma_fence_get_rcu()
	 * provides a full memory barrier upon success (such as now).
	 * This is paired with the write barrier from assigning
	 * to the __rcu protected fence pointer so that if that
	 * pointer still matches the current fence, we know we
	 * have successfully acquired a reference to it.  If it no
	 * longer matches, we are holding a reference to some other
	 * reallocated pointer.  This is possible if the allocator
	 * is using a freelist like SLAB_TYPESAFE_BY_RCU where the
	 * fence remains valid for the RCU grace period, but it
	 * may be reallocated.  When using such allocators, we are
	 * responsible for ensuring the reference we get is to
	 * the right fence, as below.
	 */
	if (fence == rcu_access_pointer(*fencep))
		return rcu_pointer_handoff(fence);

	dma_fence_put(fence);

We dereference fencep one more time and check to ensure that the pointer we fetched at the start still matches.  There are some serious memory barrier tricks going on here.  In particular, we're depending on the fact that kref_get_unless_zero() does an atomic, which means there is a memory barrier between when the other thread we're racing with swapped out the pointer and when the atomic happened.  Assuming that the other thread swapped out the pointer BEFORE dropping the reference, we can detect the recycle race with this pointer check.

If this last check succeeds, we return the fence.  If it fails, then we ended up with the wrong dma_fence, so we drop the reference we acquired above and try again.

Again, the important issue here that causes problems is that there's no RCU grace period between the kref hitting zero and the dma_fence being recycled.  If a dma_fence is freed with kfree_rcu(), we have such a grace period and it's fine.  If we're recycling, we can end up in all sorts of weird corners if we're not careful to ensure that the fence we got is the fence we think we got.

Before I move on, there's one more important point:  This can happen without SLAB_TYPESAFE_BY_RCU.
Really, any dma_fence recycling scheme which doesn't ensure an RCU grace period between the kref hitting zero and the recycle will run afoul of this.  SLAB_TYPESAFE_BY_RCU just happens to be the way i915 gets into this mess.

> We do leak, and badly. Any __rcu protected fence pointer where a
> shared fence could show up is affected. And the point of dma_fence is
> that they're shareable, and we're inventing ever more ways to do so
> (sync_file, drm_syncobj, implicit fencing maybe soon with
> import/export ioctl on top, in/out fences in CS ioctl, atomic ioctl,
> ...).
>
> So without a full audit anything that uses the following pattern is
> probably busted:
>
> rcu_read_lock();
> fence = rcu_dereference();
> fence = dma_fence_get_rcu();
> rcu_read_unlock();
>
> /* use the fence now that we acquired a full reference */
>
> And I don't mean "you might wait a bit too much" busted, but "this can
> lead to loops in the dma_fence dependency chain, resulting in
> deadlocks" kind of busted.

Yup.

> What's worse, the standard rcu lockless
> access pattern is also busted completely:
>
> rcu_read_lock();
> fence = rcu_dereference();
> /* locklessly check the state of fence */
> rcu_read_unlock();

Yeah, this one's broken too.  It depends on what you're doing with that state just how busted it is and what that breakage costs you, but it's definitely busted.

> because once you have TYPESAFE_BY_RCU rcu_read_lock doesn't prevent a
> use-after-free anymore. The only thing it guarantees is that your
> fence pointer keeps pointing at either freed memory, or a fence, but
> nothing else. You have to wrap your rcu_dereference and code into a
> seqlock of some kind, either a real one like dma_resv, or an
> open-coded one like dma_fence_get_rcu_safe uses. And yes the latter is
> a specialized seqlock, except it fails to properly document in
> comments where all the required barriers are.
>
> tldr; all the code using dma_fence_get_rcu needs to be assumed to be broken.
> Heck this is fragile and tricky enough that i915 shot its own leg off
> routinely (there's a bugfix floating around just now), so not even
> internally we're very good at getting this right.
>
> > > If one has a stable pointer to a fence dma_fence_get_rcu is I think
> > > enough to deal with SLAB_TYPESAFE_BY_RCU used by i915_request (as dma
> > > fence is a base object there). Unless you found a bug in rq field
> > > recycling. But access to the dma fence is all tightly controlled so I
> > > don't get what leaks.
> > >
> > >> This patch series stops us using SLAB_TYPESAFE_BY_RCU for i915_request
> > >> and, instead, does an RCU-safe slab free via call_rcu().  This should
> > >> let us keep most of the perf benefits of slab allocation while avoiding
> > >> the bear traps inherent in SLAB_TYPESAFE_BY_RCU.  It then removes
> > >> support for SLAB_TYPESAFE_BY_RCU from dma_fence entirely.
> > >
> > > According to the rationale behind SLAB_TYPESAFE_BY_RCU traditional RCU
> > > freeing can be a lot more costly so I think we need a clear
> > > justification on why this change is being considered.
> >
> > The problem is that SLAB_TYPESAFE_BY_RCU requires that we use a sequence
> > counter to make sure that we don't grab the reference to a reallocated
> > dma_fence.
> >
> > Updating the sequence counter every time we add a fence now means two
> > additional writes and one additional barrier for an extremely hot path.
> > The extra overhead of RCU freeing is completely negligible compared to that.
> >
> > The good news is that I think if we are just a bit more clever about our
> > handling we can both avoid the sequence counter and keep
> > SLAB_TYPESAFE_BY_RCU around.

We're already trying to do handle cleverness as described above.  But, as Daniel said and I put in some commit message, we're probably only doing it in about 1/3 of the places we need to be.

> You still need a seqlock, or something else that's serving as your
> seqlock.
> dma_fence_list behind a single __rcu protected pointer, with
> all subsequent fence pointers _not_ being rcu protected (i.e. full
> reference, on every change we allocate) might work. Which is a very
> funny way of implementing something like a seqlock.
>
> And that only covers dma_resv, you _have_ to do this _everywhere_ in
> every driver. Except if you can prove that your __rcu fence pointer
> only ever points at your own driver's fences.
>
> So unless you're volunteering to audit all the drivers, and constantly
> re-audit them (because rcu only guaranteeing type-safety but not
> actually preventing use-after-free is very unusual in the kernel) just
> fixing dma_resv doesn't solve the problem here at all.
>
> > But this needs more code cleanup and abstracting the sequence counter
> > usage in a macro.
>
> The other thing is that this doesn't even make sense for i915 anymore.

I'm not sure I'd go that far.  Yes, we've got the ULLS hack, but i915_request is going to stay around for a while.  What's really overblown here is the bazillions of requests.  GL drivers submit tens or maybe 100ish batches per frame.  Media has to ping-pong a bit more but it should still be < 1000/second.  If we're really dma_fence_release-bound, we're in a microbenchmark.

--Jason

> The solution to the "userspace wants to submit bazillion requests"
> problem is direct userspace submit. Current hw doesn't have userspace
> ringbuffer, but we have a pretty clever trick in the works to make
> this possible with current hw, essentially by submitting a CS that
> loops on itself, and then inserting batches into this "ring" by
> latching a conditional branch in this CS. It's not pretty, but it gets
> the job done and outright removes the need for plaid mode throughput
> of i915_request dma fences.
> -Daniel
>
> > Regards,
> > Christian.
> >
> > > Regards,
> > >
> > > Tvrtko
> > >
> > >>
> > >> Note: The last patch is labeled DONOTMERGE.
> > >> This was at Daniel Vetter's
> > >> request as we may want to let this bake for a couple releases before we
> > >> rip out dma_fence_get_rcu_safe entirely.
> > >>
> > >> Signed-off-by: Jason Ekstrand <jason@xxxxxxxxxxxxxx>
> > >> Cc: Jon Bloomfield <jon.bloomfield@xxxxxxxxx>
> > >> Cc: Daniel Vetter <daniel.vetter@xxxxxxxx>
> > >> Cc: Christian König <christian.koenig@xxxxxxx>
> > >> Cc: Dave Airlie <airlied@xxxxxxxxxx>
> > >> Cc: Matthew Auld <matthew.auld@xxxxxxxxx>
> > >> Cc: Maarten Lankhorst <maarten.lankhorst@xxxxxxxxxxxxxxx>
> > >>
> > >> Jason Ekstrand (5):
> > >>   drm/i915: Move intel_engine_free_request_pool to i915_request.c
> > >>   drm/i915: Use a simpler scheme for caching i915_request
> > >>   drm/i915: Stop using SLAB_TYPESAFE_BY_RCU for i915_request
> > >>   dma-buf: Stop using SLAB_TYPESAFE_BY_RCU in selftests
> > >>   DONOTMERGE: dma-buf: Get rid of dma_fence_get_rcu_safe
> > >>
> > >>  drivers/dma-buf/dma-fence-chain.c         |   8 +-
> > >>  drivers/dma-buf/dma-resv.c                |   4 +-
> > >>  drivers/dma-buf/st-dma-fence-chain.c      |  24 +---
> > >>  drivers/dma-buf/st-dma-fence.c            |  27 +---
> > >>  drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c |   4 +-
> > >>  drivers/gpu/drm/i915/gt/intel_engine_cs.c |   8 --
> > >>  drivers/gpu/drm/i915/i915_active.h        |   4 +-
> > >>  drivers/gpu/drm/i915/i915_request.c       | 147 ++++++++++++----------
> > >>  drivers/gpu/drm/i915/i915_request.h       |   2 -
> > >>  drivers/gpu/drm/i915/i915_vma.c           |   4 +-
> > >>  include/drm/drm_syncobj.h                 |   4 +-
> > >>  include/linux/dma-fence.h                 |  50 --------
> > >>  include/linux/dma-resv.h                  |   4 +-
> > >>  13 files changed, 110 insertions(+), 180 deletions(-)
> > >>
> >
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch