Hi Christian,
On 3/12/24 10:48, Christian König wrote:
On 12.03.24 14:09, Tvrtko Ursulin wrote:
On 12/03/2024 10:37, Christian König wrote:
On 12.03.24 11:31, Tvrtko Ursulin wrote:
On 12/03/2024 10:23, Christian König wrote:
On 12.03.24 10:30, Tvrtko Ursulin wrote:
On 12/03/2024 08:59, Christian König wrote:
On 12.03.24 09:51, Tvrtko Ursulin wrote:
Hi Maira,
On 11/03/2024 10:05, Maíra Canal wrote:
For some applications, such as using huge pages, we might want to have a
different mountpoint, for which we pass in mount flags that better match
our use case.

Therefore, add a new parameter to drm_gem_object_init() that allows us to
define the tmpfs mountpoint where the GEM object will be created. If this
parameter is NULL, then we fall back to shmem_file_setup().
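For reference, a minimal sketch of what the proposed path could look like,
assuming the new parameter is a struct vfsmount pointer; the parameter name
is illustrative, and shmem_file_setup_with_mnt() is the existing shmem
helper for allocating from a specific mount:

int drm_gem_object_init(struct drm_device *dev, struct drm_gem_object *obj,
			size_t size, struct vfsmount *gemfs)
{
	struct file *filp;

	drm_gem_private_object_init(dev, obj, size);

	/* Allocate from the given tmpfs mount, or the default one. */
	if (gemfs)
		filp = shmem_file_setup_with_mnt(gemfs, "drm mm object",
						 size, VM_NORESERVE);
	else
		filp = shmem_file_setup("drm mm object", size, VM_NORESERVE);

	if (IS_ERR(filp))
		return PTR_ERR(filp);

	obj->filp = filp;
	return 0;
}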
One strategy for reducing churn, and so the number of drivers this patch
touches, could be to add a lower-level drm_gem_object_init() which takes a
vfsmount (call it __drm_gem_object_init() or drm_gem_object_init_mnt()),
and make drm_gem_object_init() call that one with a NULL argument.
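Something along these lines, as a sketch (the names are only suggestions;
the mount-aware variant would carry the actual implementation, and existing
callers keep the old signature):

int __drm_gem_object_init(struct drm_device *dev, struct drm_gem_object *obj,
			  size_t size, struct vfsmount *gemfs);

int drm_gem_object_init(struct drm_device *dev, struct drm_gem_object *obj,
			size_t size)
{
	/* NULL means the default shmem mount, i.e. shmem_file_setup(). */
	return __drm_gem_object_init(dev, obj, size, NULL);
}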
I would even go a step further in the other direction. The shmem-backed
GEM object is just some special handling as far as I can see.

So I would rather suggest renaming all drm_gem_* functions which only deal
with shmem-backed GEM objects to drm_gem_shmem_*.
That makes sense, although it would be very churny. I at least would be on
the fence regarding the cost versus benefit.
Yeah, it should clearly not be part of this patch here.
Also, the explanation of why a different mount point helps with anything
isn't very satisfying.
Not satisfying as in you think it is not detailed enough to say the driver
wants to use huge pages for performance? Or not satisfying as in you
question why huge pages would help?
That huge pages are beneficial is clear to me, but I'm missing the
connection: why does a different mount point help with using huge pages?
Ah right, same as in i915: one needs to mount a tmpfs instance passing the
huge=within_size or huge=always option. The default is 'never'; see
man 5 tmpfs.
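In i915 that is done at driver init, roughly like this (a trimmed sketch of
the i915_gemfs_init() approach; the function name here is just for
illustration, and error handling plus the has_transparent_hugepage() check
are omitted):

static struct vfsmount *gemfs_mount_thp(void)
{
	char huge_opt[] = "huge=within_size"; /* option parser needs r/w */
	struct file_system_type *type;

	type = get_fs_type("tmpfs");
	if (!type)
		return ERR_PTR(-ENODEV);

	/* A private tmpfs instance with THP enabled; default is "never". */
	return vfs_kern_mount(type, SB_KERNMOUNT, type->name, huge_opt);
}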
Thanks for the explanation, I wasn't aware of that.
Mhm, shouldn't we always use huge pages? Is there a reason for a DRM
device to not use huge pages with the shmem backend?
AFAIU, according to b901bb89324a ("drm/i915/gemfs: enable THP"), the
understanding back then was that within_size may overallocate, meaning
there would be some space wastage, until memory pressure makes the THP
code split the trailing huge page. I haven't checked whether that still
applies.

Other than that, I don't know whether some drivers/platforms could have
problems if they have limitations or hardcoded assumptions when they
iterate the sg list.
Yeah, that was the whole point behind my question. As far as I can see
this isn't driver-specific, but platform-specific.

I might be wrong here, but I think we should then probably not have that
handling in each individual driver, but rather centralize it in the DRM
code.
I don't see a point in enabling THP for all shmem drivers. A huge page is
only useful if the driver is going to use it. On V3D, for example, I only
need huge pages because I need the memory to be contiguously allocated to
implement Super Pages. Without Super Pages support implemented in the
driver, I would just be creating memory pressure without any performance
gain.
Best Regards,
- Maíra
Regards,
Christian.
The Cc list is plenty large so perhaps someone else will have additional
information. :)
Regards,
Tvrtko
I mean, it would make this patch here even smaller.
Regards,
Christian.
Regards,
Tvrtko