Re: [PATCH v10 4/6] drm: consider GEM object shared when it is exported

On 12.12.24 at 15:04, Li, Yunxiang (Teddy) wrote:

From: Koenig, Christian <Christian.Koenig@xxxxxxx>
Sent: Thursday, December 12, 2024 4:25
On 11.12.24 at 17:14, Li, Yunxiang (Teddy) wrote:

From: Koenig, Christian <Christian.Koenig@xxxxxxx>
Sent: Wednesday, December 11, 2024 10:03
On 11.12.24 at 15:02, Li, Yunxiang (Teddy) wrote:

From: Koenig, Christian <Christian.Koenig@xxxxxxx>
Sent: Wednesday, December 11, 2024 3:16
On 10.12.24 at 18:59, Yunxiang Li wrote:
Tracking the state of a GEM object for shared stats is quite
difficult since the handle_count is managed behind the driver's back.
So instead, consider a GEM object shared the moment it is exported
with the flink ioctl. This makes it work the same as the dma_buf
case. Add a callback for drivers to get notified when a GEM object is
being shared.
First of all, GEM flink is pretty much deprecated; we only have it
for compatibility reasons. So please don't change anything here.

Then flink is not the only way to create multiple handles for a GEM
object. So this here won't handle all cases.

And finally we already have the .open and .close callbacks, which
are called whenever a handle for a GEM object is created/destroyed.
So it shouldn't be necessary in the first place.
For the importing VM the shared stats are automatically kept correct
by open and close, but for the exporting VM we need to update the
shared stat when the buffer gets shared, since it is already counted
as private there. As far as I could find, the flink ioctl seems to be
the only place where the global name is assigned? The importing side
has multiple ways to get the global name, but the exporter always
needs to call flink first to allocate the number, right? So hooking
into flink and dma-buf should cover the bases?
It's irrelevant where the global name is assigned. The problem is
that there are more ways to create a new handle for a GEM object than
just flink and DMA-buf.
For example, you can just ask a framebuffer to give you a GEM handle
for the currently displayed buffer; see the call to
drm_gem_handle_create() in drm_mode_getfb2_ioctl().

When you make this change here, those GEM handles are no longer
considered shared even when they are, and sooner or later you run
into warnings on VM destruction.

I could probably make handle_count work somehow, but it looks like
it's read in a lot of places without locks, so I'm not sure whether
there would be race conditions.

The handle count is protected by the object_name_lock of the device.
The drm_gem_object_is_shared_for_memory_stats() function is pretty
much the only case where we read the value without holding the lock,
since that is used only opportunistically.

What you could do is to hook into amdgpu_gem_object_open() and
amdgpu_gem_object_close(), call
drm_gem_object_is_shared_for_memory_stats() and go over all the VMs
the BO belongs to (see how amdgpu_vm_bo_find() and amdgpu_vm_bo_add()
are used). Then have an additional flag inside amdgpu_bo_va that
tells you whether a BO was previously considered shared or private,
and update the stats accordingly when that status changes.
But the open and close functions are called outside the
object_name_lock, right? So do I re-grab the lock in the amdgpu_*
functions, or should I move the callback inside the lock?

You don't need the object_name_lock for this; the update is just
opportunistic.

E.g. you go over all the VMs a BO belongs to and grab the VM spinlock to update
the status in the amdgpu_bo_va structure.

In theory a concurrent process can modify handle_count at the same
time you update the VM status, but that doesn't matter, since that
modification will trigger the update once again.
Wouldn't there be an ordering concern? Say the handle count goes to 2 in one thread and another thread drops it back down to 1 right after, so the two update loops run concurrently. Couldn't some VM be updated by the second thread first and then by the first thread, and be left in the "shared" state?

No, because the update always updates to the current state. E.g. it's irrelevant whether open or close triggers the update; the key point is that we grab the current state and update to that.

What can happen in theory is that the current state isn't up to date because of missing CPU barriers, but since we are protecting the update with a spinlock, that should be irrelevant.

Christian.


Teddy

Regards,
Christian.


Signed-off-by: Yunxiang Li <Yunxiang.Li@xxxxxxx>

CC: dri-devel@xxxxxxxxxxxxxxxxxxxxx
---
 drivers/gpu/drm/drm_gem.c   |  3 +++
 drivers/gpu/drm/drm_prime.c |  3 +++
 include/drm/drm_gem.h       | 12 +++++++++++-
 3 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index d4bbc5d109c8b..1ead11de31f6b 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -854,6 +854,9 @@ drm_gem_flink_ioctl(struct drm_device *dev, void *data,
 			goto err;

 		obj->name = ret;
+
+		if (obj->funcs->shared)
+			obj->funcs->shared(obj);
 	}

 	args->name = (uint64_t) obj->name;
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 0e3f8adf162f6..336d982d69807 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -406,6 +406,9 @@ static struct dma_buf *export_and_register_object(struct drm_device *dev,
 	obj->dma_buf = dmabuf;
 	get_dma_buf(obj->dma_buf);

+	if (obj->funcs->shared)
+		obj->funcs->shared(obj);
+
 	return dmabuf;
 }

diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index da11c16e212aa..8c5ffcd485752 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -122,6 +122,16 @@ struct drm_gem_object_funcs {
 	 */
 	struct dma_buf *(*export)(struct drm_gem_object *obj, int flags);

+	/**
+	 * @shared:
+	 *
+	 * Callback when GEM object becomes shared, see also
+	 * drm_gem_object_is_shared_for_memory_stats
+	 *
+	 * This callback is optional.
+	 */
+	void (*shared)(struct drm_gem_object *obj);
+
 	/**
 	 * @pin:
 	 *
@@ -568,7 +578,7 @@ int drm_gem_evict(struct drm_gem_object *obj);
  */
 static inline bool drm_gem_object_is_shared_for_memory_stats(struct drm_gem_object *obj)
 {
-	return (obj->handle_count > 1) || obj->dma_buf;
+	return obj->name || obj->dma_buf;
 }

 #ifdef CONFIG_LOCKDEP



