Re: [RFC 01/12] drm/i915: Expose list of clients in sysfs


 




On 10/03/2020 00:13, Chris Wilson wrote:
Quoting Tvrtko Ursulin (2020-03-09 23:26:34)

On 09/03/2020 21:34, Chris Wilson wrote:
Quoting Tvrtko Ursulin (2020-03-09 18:31:18)
+struct i915_drm_client *
+i915_drm_client_add(struct i915_drm_clients *clients, struct task_struct *task)
+{
+       struct i915_drm_client *client;
+       int ret;
+
+       client = kzalloc(sizeof(*client), GFP_KERNEL);
+       if (!client)
+               return ERR_PTR(-ENOMEM);
+
+       kref_init(&client->kref);
+       client->clients = clients;
+
+       ret = mutex_lock_interruptible(&clients->lock);
+       if (ret)
+               goto err_id;
+       ret = xa_alloc_cyclic(&clients->xarray, &client->id, client,
+                             xa_limit_32b, &clients->next_id, GFP_KERNEL);

So what's next_id used for that explains having the over-arching mutex?

It's to give out client id's "cyclically" - before I apparently
misunderstood what xa_alloc_cyclic is supposed to do - I thought after
giving out id 1 it would give out 2 next, even if 1 was returned to the
pool in the meantime. But it doesn't, I need to track the start point
for the next search with "next".

Ok. A requirement of the API for the external counter.
I want this to make intel_gpu_top's life easier, so it doesn't have to
deal with id recycling for all practical purposes.

Fair enough. I only worry about the radix nodes and sparse ids :)

I only found in the docs that it should be efficient when the data is "densely clustered", and given that it does appear to be based on a tree-like structure I thought a few clusters of ids should be okay. But maybe in practice we would have more than a few clusters. I guess that could indeed be the case.. hm..

Maybe I could use a list and keep a pointer to the last entry, and when the u32 next wraps, reset to the list head. The downside is that any search for the next free id potentially has to walk over one used-up cluster. Maybe passable, apart from IGT-type stress tests.
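Very roughly what I have in mind - completely untested, and the list/link members (clients->list, client->link) are made up just for the sketch:

static int hypothetical_alloc_id(struct i915_drm_clients *clients,
				 struct i915_drm_client *client)
{
	struct i915_drm_client *last, *pos;
	u32 id;

	lockdep_assert_held(&clients->lock);

	last = list_last_entry_or_null(&clients->list,
				       struct i915_drm_client, link);
	if (!last || last->id < U32_MAX) {
		/* Common case: ids keep increasing, no search needed. */
		id = last ? last->id + 1 : 0;
		list_add_tail(&client->link, &clients->list);
	} else {
		/* Wrapped: walk from the head and take the first gap. */
		struct list_head *where = &clients->list;

		id = 0;
		list_for_each_entry(pos, &clients->list, link) {
			if (pos->id != id) {
				where = &pos->link; /* insert before the gap */
				break;
			}
			if (id++ == U32_MAX)
				return -ENOSPC; /* every id in use */
		}
		list_add_tail(&client->link, where);
	}

	client->id = id;
	return 0;
}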

And a peek into the xa implementation told me the internal lock is not
protecting "next".

See xa_alloc_cyclic(), seems to cover __xa_alloc_cyclic() (where *next is
manipulated) under the xa_lock.

Ha, true, not sure how I went past the top-level and forgot what's in there. :)
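For the archives, the top-level wrapper is roughly this (paraphrasing include/linux/xarray.h from memory, so approximate):

static inline int xa_alloc_cyclic(struct xarray *xa, u32 *id, void *entry,
				  struct xa_limit limit, u32 *next, gfp_t gfp)
{
	int err;

	xa_lock(xa);
	/* *next is read and advanced in __xa_alloc_cyclic(), under xa_lock. */
	err = __xa_alloc_cyclic(xa, id, entry, limit, next, gfp);
	xa_unlock(xa);

	return err;
}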

Regards,

Tvrtko


