On 12/03/2024 10:23, Christian König wrote:
On 12.03.24 at 10:30, Tvrtko Ursulin wrote:
On 12/03/2024 08:59, Christian König wrote:
On 12.03.24 at 09:51, Tvrtko Ursulin wrote:
Hi Maira,
On 11/03/2024 10:05, Maíra Canal wrote:
For some applications, such as using huge pages, we might want to
have a different mountpoint, for which we pass in mount flags that
better match our use case.

Therefore, add a new parameter to drm_gem_object_init() that allows
us to define the tmpfs mountpoint where the GEM object will be
created. If this parameter is NULL, then we fall back to
shmem_file_setup().
One strategy for reducing churn, and so the number of drivers this
patch touches, could be to add a lower-level drm_gem_object_init()
which takes a vfsmount (call it __drm_gem_object_init() or
drm_gem_object_init_mnt()), and make drm_gem_object_init() call that
one with a NULL argument.
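For illustration only, a minimal sketch of that split, assuming
shmem_file_setup_with_mnt() is used when a mount is supplied (this is
not from the actual patch, just how I picture it):

int __drm_gem_object_init(struct drm_device *dev, struct drm_gem_object *obj,
			  size_t size, struct vfsmount *gemfs)
{
	struct file *filp;

	drm_gem_private_object_init(dev, obj, size);

	/* Use the driver-provided tmpfs mount when there is one. */
	if (gemfs)
		filp = shmem_file_setup_with_mnt(gemfs, "drm mm object",
						 size, VM_NORESERVE);
	else
		filp = shmem_file_setup("drm mm object", size, VM_NORESERVE);

	if (IS_ERR(filp))
		return PTR_ERR(filp);

	obj->filp = filp;
	return 0;
}

int drm_gem_object_init(struct drm_device *dev, struct drm_gem_object *obj,
			size_t size)
{
	/* Existing callers keep the default shmem mount. */
	return __drm_gem_object_init(dev, obj, size, NULL);
}

That way existing drivers would not need to change at all.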
I would even go a step further in the other direction. The shmem-backed
GEM object is just some special handling as far as I can see.
So I would rather suggest renaming all drm_gem_* functions which only
deal with shmem-backed GEM objects to drm_gem_shmem_*.
That makes sense, although it would be very churny. I at least would be
on the fence regarding the cost vs. benefit.
Yeah, it should clearly not be part of this patch here.
Also, the explanation of why a different mount point helps isn't very
satisfying.
Not satisfying as in you think it is not detailed enough to say the
driver wants to use huge pages for performance? Or not satisfying as in
you question why huge pages would help?
That huge pages are beneficial is clear to me, but I'm missing the
connection: why does a different mount point help with using huge pages?
Ah right, same as in i915: one needs to mount a tmpfs instance passing
the huge=within_size or huge=always option. The default is 'never'; see
man 5 tmpfs.
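From memory, a rough sketch of the kind of kernel-internal mount i915
does for its gemfs (the helper name here is made up and details may
differ from the actual i915_gemfs_init() code):

#include <linux/err.h>
#include <linux/fs.h>
#include <linux/mount.h>

static struct vfsmount *example_gemfs_mount(void)
{
	char opts[] = "huge=within_size";	/* or "huge=always" */
	struct file_system_type *type;

	type = get_fs_type("tmpfs");
	if (!type)
		return ERR_PTR(-ENODEV);

	/* Private, kernel-internal tmpfs instance with THP enabled. */
	return vfs_kern_mount(type, SB_KERNMOUNT, type->name, opts);
}

GEM objects backed by such a mount (e.g. via
shmem_file_setup_with_mnt()) can then be allocated with transparent
huge pages; the mount is released again with kern_unmount().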
Regards,
Tvrtko