On 01.11.2017 at 17:15, Michel Dänzer wrote:
> From: Michel Dänzer <michel.daenzer at amd.com>
>
> Fixes a use-after-free due to a race condition in
> ttm_bo_cleanup_refs_and_unlock, which allows one task to reserve a BO
> and destroy its ttm_resv while another task is waiting for it to signal
> in reservation_object_wait_timeout_rcu.
>
> Fixes: 0d2bd2ae045d "drm/ttm: fix memory leak while individualizing BOs"
> Signed-off-by: Michel Dänzer <michel.daenzer at amd.com>

Good idea, but one thing we should probably change.

> ---
>  drivers/gpu/drm/ttm/ttm_bo.c | 13 +++----------
>  1 file changed, 3 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
> index 379ec41d2c69..a19a0ebf32ac 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo.c
> @@ -150,8 +150,7 @@ static void ttm_bo_release_list(struct kref *list_kref)
>  	ttm_tt_destroy(bo->ttm);
>  	atomic_dec(&bo->glob->bo_count);
>  	dma_fence_put(bo->moving);
> -	if (bo->resv == &bo->ttm_resv)
> -		reservation_object_fini(&bo->ttm_resv);
> +	reservation_object_fini(&bo->ttm_resv);

When we always call reservation_object_fini() here, we should probably
also always call reservation_object_init() in ttm_bo_init_reserved(), to
make sure the object is always initialized.

This way we can also remove the call to reservation_object_init() in
ttm_bo_individualize_resv(). (A rough sketch of what I mean is at the
end of this mail.)

Regards,
Christian.

>  	mutex_destroy(&bo->wu_mutex);
>  	if (bo->destroy)
>  		bo->destroy(bo);
> @@ -406,10 +405,8 @@ static int ttm_bo_individualize_resv(struct ttm_buffer_object *bo)
>  	BUG_ON(!reservation_object_trylock(&bo->ttm_resv));
>
>  	r = reservation_object_copy_fences(&bo->ttm_resv, bo->resv);
> -	if (r) {
> +	if (r)
>  		reservation_object_unlock(&bo->ttm_resv);
> -		reservation_object_fini(&bo->ttm_resv);
> -	}
>
>  	return r;
>  }
> @@ -457,10 +454,8 @@ static void ttm_bo_cleanup_refs_or_queue(struct ttm_buffer_object *bo)
>  	if (reservation_object_test_signaled_rcu(&bo->ttm_resv, true)) {
>  		ttm_bo_del_from_lru(bo);
>  		spin_unlock(&glob->lru_lock);
> -		if (bo->resv != &bo->ttm_resv) {
> +		if (bo->resv != &bo->ttm_resv)
>  			reservation_object_unlock(&bo->ttm_resv);
> -			reservation_object_fini(&bo->ttm_resv);
> -		}
>
>  		ttm_bo_cleanup_memtype_use(bo);
>  		return;
> @@ -560,8 +555,6 @@ static int ttm_bo_cleanup_refs_and_unlock(struct ttm_buffer_object *bo,
>  	}
>
>  	ttm_bo_del_from_lru(bo);
> -	if (!list_empty(&bo->ddestroy) && (bo->resv != &bo->ttm_resv))
> -		reservation_object_fini(&bo->ttm_resv);
>  	list_del_init(&bo->ddestroy);
>  	kref_put(&bo->list_kref, ttm_bo_ref_bug);
>
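P.S.: For reference, here is roughly what I have in mind for
ttm_bo_init_reserved(). This is only a sketch; the surrounding context
is paraphrased from memory rather than copied from the current code:

	/* Always initialize the embedded reservation object, so that the
	 * now unconditional reservation_object_fini(&bo->ttm_resv) in
	 * ttm_bo_release_list() is always balanced by an init.
	 */
	reservation_object_init(&bo->ttm_resv);
	if (resv)
		bo->resv = resv;		/* caller-provided resv */
	else
		bo->resv = &bo->ttm_resv;	/* fall back to the embedded one */

With bo->ttm_resv always initialized here, ttm_bo_individualize_resv()
can drop its own reservation_object_init() call and just trylock and
copy the fences into the already valid embedded object.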