Re: [PATCH drm-misc-next v2 5/7] drm/gpuvm: add an abstraction for a VM / BO combination

On 9/7/23 10:16, Boris Brezillon wrote:
On Wed,  6 Sep 2023 23:47:13 +0200
Danilo Krummrich <dakr@xxxxxxxxxx> wrote:

@@ -812,15 +967,20 @@ EXPORT_SYMBOL_GPL(drm_gpuva_remove);
  /**
   * drm_gpuva_link() - link a &drm_gpuva
   * @va: the &drm_gpuva to link
+ * @vm_bo: the &drm_gpuvm_bo to add the &drm_gpuva to
   *
- * This adds the given &va to the GPU VA list of the &drm_gem_object it is
- * associated with.
+ * This adds the given &va to the GPU VA list of the &drm_gpuvm_bo and the
+ * &drm_gpuvm_bo to the &drm_gem_object it is associated with.
+ *
+ * For every &drm_gpuva entry added to the &drm_gpuvm_bo an additional
+ * reference of the latter is taken.
   *
   * This function expects the caller to protect the GEM's GPUVA list against
- * concurrent access using the GEMs dma_resv lock.
+ * concurrent access using either the GEMs dma_resv lock or a driver specific
+ * lock set through drm_gem_gpuva_set_lock().
   */
  void
-drm_gpuva_link(struct drm_gpuva *va)
+drm_gpuva_link(struct drm_gpuva *va, struct drm_gpuvm_bo *vm_bo)
  {
  	struct drm_gem_object *obj = va->gem.obj;
@@ -829,7 +989,10 @@ drm_gpuva_link(struct drm_gpuva *va)

  	drm_gem_gpuva_assert_lock_held(obj);

-	list_add_tail(&va->gem.entry, &obj->gpuva.list);
+	drm_gpuvm_bo_get(vm_bo);

Guess we should WARN if vm_bo->obj != obj, at least.
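
Something like this at the top of drm_gpuva_link() should do; just a sketch of
the check being suggested, mirroring the WARN() style already used in
drm_gpuva_unlink() below:

	if (WARN(vm_bo->obj != obj, "drm_gpuva and drm_gpuvm_bo GEM object mismatch.\n"))
		return;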

+	list_add_tail(&va->gem.entry, &vm_bo->list.gpuva);
+	if (list_empty(&vm_bo->list.entry.gem))
+		list_add_tail(&vm_bo->list.entry.gem, &obj->gpuva.list);
  }
  EXPORT_SYMBOL_GPL(drm_gpuva_link);
@@ -840,20 +1003,40 @@ EXPORT_SYMBOL_GPL(drm_gpuva_link);
   * This removes the given &va from the GPU VA list of the &drm_gem_object it is
   * associated with.
   *
+ * This removes the given &va from the GPU VA list of the &drm_gpuvm_bo and
+ * the &drm_gpuvm_bo from the &drm_gem_object it is associated with in case
+ * this call unlinks the last &drm_gpuva from the &drm_gpuvm_bo.
+ *
+ * For every &drm_gpuva entry removed from the &drm_gpuvm_bo a reference of
+ * the latter is dropped.
+ *
   * This function expects the caller to protect the GEM's GPUVA list against
- * concurrent access using the GEMs dma_resv lock.
+ * concurrent access using either the GEMs dma_resv lock or a driver specific
+ * lock set through drm_gem_gpuva_set_lock().
   */
  void
  drm_gpuva_unlink(struct drm_gpuva *va)
  {
  	struct drm_gem_object *obj = va->gem.obj;
+	struct drm_gpuvm_bo *vm_bo;

  	if (unlikely(!obj))
  		return;

  	drm_gem_gpuva_assert_lock_held(obj);

+	vm_bo = __drm_gpuvm_bo_find(va->vm, obj);

Could we add a drm_gpuva::vm_bo field so we don't have to search for the
vm_bo here, and maybe drop the drm_gpuva::vm and drm_gpuva::obj fields,
since drm_gpuvm_bo contains both the vm and the GEM object? I know that
means adding an extra indirection + allocation for drivers that don't
want to use drm_gpuva_[un]link(), but I wonder if it's not preferable
to having the information duplicated (with a potential mismatch).

I was considering that, and I think we can add a drm_gpuva::vm_bo field and
get rid of drm_gpuva::obj. However, I think we need to keep drm_gpuva::vm:
it is valid for ::obj to be NULL (NULL objects are used for sparse mappings /
userptr), hence ::vm_bo must be allowed to be NULL as well, and in that case
it can't provide the VM backpointer.
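
Roughly what I have in mind (just an untested sketch of the layout being
discussed):

	struct drm_gpuva {
		struct drm_gpuvm *vm;
		struct drm_gpuvm_bo *vm_bo; /* NULL for mappings without a GEM object */
		/* ... */
	};

drm_gpuva_link() would then set va->vm_bo, and drm_gpuva_unlink() could skip
the __drm_gpuvm_bo_find() lookup entirely:

	void
	drm_gpuva_unlink(struct drm_gpuva *va)
	{
		struct drm_gpuvm_bo *vm_bo = va->vm_bo;

		if (unlikely(!vm_bo))
			return;

		drm_gem_gpuva_assert_lock_held(vm_bo->obj);

		list_del_init(&va->gem.entry);
		if (list_empty(&vm_bo->list.gpuva))
			list_del_init(&vm_bo->list.entry.gem);

		va->vm_bo = NULL;
		drm_gpuvm_bo_put(vm_bo);
	}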


+	if (WARN(!vm_bo, "GPUVA doesn't seem to be linked.\n"))
+		return;
+
  	list_del_init(&va->gem.entry);
+
+	/* This is the last mapping being unlinked for this GEM object, hence
+	 * also remove the VM_BO from the GEM's gpuva list.
+	 */
+	if (list_empty(&vm_bo->list.gpuva))
+		list_del_init(&vm_bo->list.entry.gem);
+	drm_gpuvm_bo_put(vm_bo);
  }
  EXPORT_SYMBOL_GPL(drm_gpuva_unlink);
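
For completeness, the driver-side pairing would then look roughly like this
(sketch only; the function name is made up, and it assumes the
drm_gpuvm_bo_obtain() / drm_gpuvm_bo_put() helpers from this patch together
with the default dma_resv locking for the GEM's GPUVA list):

	/* Hypothetical driver helper, illustrating the link/unlink reference pairing. */
	static int driver_gpuva_map(struct drm_gpuvm *gpuvm, struct drm_gpuva *va,
				    struct drm_gem_object *obj)
	{
		struct drm_gpuvm_bo *vm_bo;

		/* Looks up or allocates the VM / BO combination. */
		vm_bo = drm_gpuvm_bo_obtain(gpuvm, obj);
		if (IS_ERR(vm_bo))
			return PTR_ERR(vm_bo);

		dma_resv_lock(obj->resv, NULL);
		drm_gpuva_link(va, vm_bo); /* takes its own vm_bo reference */
		dma_resv_unlock(obj->resv);

		/* Drop the reference obtain() returned; the one taken by
		 * drm_gpuva_link() keeps the vm_bo alive until unlink.
		 */
		drm_gpuvm_bo_put(vm_bo);

		return 0;
	}

On unmap the driver holds the same lock and just calls drm_gpuva_unlink(va),
which drops the reference taken at link time and removes the vm_bo from the
GEM's list once the last mapping is gone.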




