On 8/20/19 4:53 PM, Daniel Vetter wrote:
With nouveau fixed, all ttm-using drivers have the correct nesting of
mmap_sem vs dma_resv, and we can just lock the buffer.
Assuming I didn't screw up anything with my audit of course.
Signed-off-by: Daniel Vetter <daniel.vetter@xxxxxxxxx>
Cc: Christian Koenig <christian.koenig@xxxxxxx>
Cc: Huang Rui <ray.huang@xxxxxxx>
Cc: Gerd Hoffmann <kraxel@xxxxxxxxxx>
Cc: "VMware Graphics" <linux-graphics-maintainer@xxxxxxxxxx>
Cc: Thomas Hellstrom <thellstrom@xxxxxxxxxx>
---
drivers/gpu/drm/ttm/ttm_bo.c | 34 ---------------------------------
drivers/gpu/drm/ttm/ttm_bo_vm.c | 26 +------------------------
include/drm/ttm/ttm_bo_api.h | 1 -
3 files changed, 1 insertion(+), 60 deletions(-)
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 20ff56f27aa4..a952dd624b06 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -1954,37 +1954,3 @@ void ttm_bo_swapout_all(struct ttm_bo_device *bdev)
;
}
EXPORT_SYMBOL(ttm_bo_swapout_all);
-
-/**
- * ttm_bo_wait_unreserved - interruptible wait for a buffer object to become
- * unreserved
- *
- * @bo: Pointer to buffer
- */
-int ttm_bo_wait_unreserved(struct ttm_buffer_object *bo)
-{
- int ret;
-
- /*
- * In the absense of a wait_unlocked API,
- * Use the bo::wu_mutex to avoid triggering livelocks due to
- * concurrent use of this function. Note that this use of
- * bo::wu_mutex can go away if we change locking order to
- * mmap_sem -> bo::reserve.
- */
- ret = mutex_lock_interruptible(&bo->wu_mutex);
- if (unlikely(ret != 0))
- return -ERESTARTSYS;
- if (!dma_resv_is_locked(bo->base.resv))
- goto out_unlock;
- ret = dma_resv_lock_interruptible(bo->base.resv, NULL);
- if (ret == -EINTR)
- ret = -ERESTARTSYS;
- if (unlikely(ret != 0))
- goto out_unlock;
- dma_resv_unlock(bo->base.resv);
-
-out_unlock:
- mutex_unlock(&bo->wu_mutex);
- return ret;
-}
diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
index 76eedb963693..505e1787aeea 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
@@ -125,31 +125,7 @@ static vm_fault_t ttm_bo_vm_fault(struct vm_fault *vmf)
&bdev->man[bo->mem.mem_type];
struct vm_area_struct cvma;
- /*
- * Work around locking order reversal in fault / nopfn
- * between mmap_sem and bo_reserve: Perform a trylock operation
- * for reserve, and if it fails, retry the fault after waiting
- * for the buffer to become unreserved.
- */
- if (unlikely(!dma_resv_trylock(bo->base.resv))) {
- if (vmf->flags & FAULT_FLAG_ALLOW_RETRY) {
- if (!(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) {
- ttm_bo_get(bo);
- up_read(&vmf->vma->vm_mm->mmap_sem);
- (void) ttm_bo_wait_unreserved(bo);
- ttm_bo_put(bo);
- }
-
- return VM_FAULT_RETRY;
- }
-
- /*
- * If we'd want to change locking order to
- * mmap_sem -> bo::reserve, we'd use a blocking reserve here
- * instead of retrying the fault...
- */
I think you should justify why the above code is removed, since the
removed comment actually outlines what to do if we change the locking
order.

The code that's removed above is not there to adjust the locking order,
but to decrease mm latency by releasing the mmap_sem while waiting for
the bo reservation, which in turn might be waiting for the GPU. At a
minimum we should have a separate patch with that justification.
Note that the caller here ensures locking progress by adjusting the
RETRY flags after a retry.
/Thomas
- return VM_FAULT_NOPAGE;
- }
+ dma_resv_lock(bo->base.resv, NULL);
/*
* Refuse to fault imported pages. This should be handled
diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
index 43c4929a2171..6b50e624e3e2 100644
--- a/include/drm/ttm/ttm_bo_api.h
+++ b/include/drm/ttm/ttm_bo_api.h
@@ -765,7 +765,6 @@ ssize_t ttm_bo_io(struct ttm_bo_device *bdev, struct file *filp,
int ttm_bo_swapout(struct ttm_bo_global *glob,
struct ttm_operation_ctx *ctx);
void ttm_bo_swapout_all(struct ttm_bo_device *bdev);
-int ttm_bo_wait_unreserved(struct ttm_buffer_object *bo);
/**
* ttm_bo_uses_embedded_gem_object - check if the given bo uses the