Re: [PATCH v5 1/2] drm/panthor: Expose size of driver internal BO's over fdinfo

On 20/12/2024 11:08, Steven Price wrote:
On 19/12/2024 16:30, Mihail Atanassov wrote:


On 18/12/2024 18:18, Adrián Martínez Larumbe wrote:
From: Adrián Larumbe <adrian.larumbe@xxxxxxxxxxxxx>

This will display the sizes of kernel BOs bound to an open file, which are
otherwise not exposed to UM through a handle.

The sizes recorded are as follows:
   - Per group: suspend buffer, protm-suspend buffer, syncobjs
   - Per queue: ringbuffer, profiling slots, firmware interface
   - For all heaps in all heap pools across all VMs bound to an open file,
     record the size of all heap chunks, and for each pool the gpu_context
     BO too.
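
For illustration, these sizes feed the common struct drm_memory_stats, which
fdinfo renders with the standard drm-*-memory keys. Assuming the generic
drm_print_memory_stats() helper is used, the output could look roughly like
this (illustrative values, not taken from this patch):

	$ cat /proc/<pid>/fdinfo/<fd>
	...
	drm-total-memory:	12288 KiB
	drm-resident-memory:	12288 KiB
	drm-active-memory:	4096 KiB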

This does not record the size of FW regions, as these aren't bound to a
specific open file and remain active for the whole lifetime of the driver.

Signed-off-by: Adrián Larumbe <adrian.larumbe@xxxxxxxxxxxxx>
Reviewed-by: Liviu Dudau <liviu.dudau@xxxxxxx>
---

[...]

diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
index c39e3eb1c15d..51f6e66df3f5 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.c
+++ b/drivers/gpu/drm/panthor/panthor_mmu.c
@@ -1941,6 +1941,41 @@ struct panthor_heap_pool *panthor_vm_get_heap_pool(struct panthor_vm *vm, bool c
 	return pool;
 }
 
+/**
+ * panthor_vm_heaps_sizes() - Calculate the size of all heap chunks across
+ * all heaps over all the heap pools in a VM
+ * @pfile: File.
+ * @status: Memory status to be updated.
+ *
+ * Calculate all heap chunk sizes in all heap pools bound to a VM. If the VM
+ * is active, record the size as active as well.
+ */
+void panthor_vm_heaps_sizes(struct panthor_file *pfile, struct drm_memory_stats *status)
+{
+    struct panthor_vm *vm;
+    unsigned long i;
+
+    if (!pfile->vms)
+        return;
+
+    xa_for_each(&pfile->vms->xa, i, vm) {
+        size_t size;
+
+        mutex_lock(&vm->heaps.lock);

Use `scoped_guard` instead?

#include <linux/cleanup.h>

/* ... */

     xa_for_each(...) {
         size_t size;

         scoped_guard(mutex, &vm->heaps.lock) {
             if (!vm->heaps.pool)
                 continue;

             size = panthor_heap_pool_size(vm->heaps.pool);
         }
         /* ... */

I don't believe this actually works. The implementation of scoped_guard()
uses a for() loop, so the "continue" will be applied to this (hidden)
internal loop rather than to the intended xa_for_each() loop.
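
For reference, the guard macros in <linux/cleanup.h> build the scope out of a
single-iteration for() loop, roughly like this (a simplified paraphrase of
the kernel macro, not its exact definition):

	/*
	 * Sketch of scoped_guard(): the lock class is instantiated as a
	 * for() loop variable so its destructor (the unlock) runs when the
	 * scope is exited by any path.
	 */
	#define scoped_guard(_name, args...)				\
		for (CLASS(_name, scope)(args),				\
		     *done = NULL; !done; done = (void *)1)

	/*
	 * A "continue" inside the guarded block jumps to this loop's
	 * increment, the loop terminates, and execution resumes *after*
	 * the guard; in the snippet above, "size" would then be used
	 * uninitialized instead of skipping to the next VM.
	 */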

Yikes, good call-out! I ought to have checked... I'll make a mental note of that limitation.


An alternative would be:

	xa_for_each(&pfile->vms->xa, i, vm) {
		size_t size = 0;

		mutex_lock(&vm->heaps.lock);
		if (vm->heaps.pool)
			size = panthor_heap_pool_size(vm->heaps.pool);
		mutex_unlock(&vm->heaps.lock);

Well then you can do a:

		scoped_guard(mutex, &vm->heaps.lock) {
			if (vm->heaps.pool)
				size = panthor_heap_pool_size(vm->heaps.pool);
		}

		/* ;) */


		status->resident += size;
		status->private += size;
		if (vm->as.id >= 0)
			status->active += size;
	}

(relying on size=0 being a no-op for the additions). Although I was
personally also happy with the original - but perhaps that's just
because I'm old and still feel anxious when I see scoped_guard() ;)

Steve
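
Putting the two suggestions together, the loop can avoid both the
uninitialized "size" and the continue-inside-guard pitfall (an untested
sketch assembled from the snippets above):

	xa_for_each(&pfile->vms->xa, i, vm) {
		size_t size = 0;

		/* The guard scope only covers the pool lookup, so no
		 * "continue" is needed inside it. */
		scoped_guard(mutex, &vm->heaps.lock) {
			if (vm->heaps.pool)
				size = panthor_heap_pool_size(vm->heaps.pool);
		}

		status->resident += size;
		status->private += size;
		if (vm->as.id >= 0)
			status->active += size;
	}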


--
Mihail Atanassov <mihail.atanassov@xxxxxxx>



