Quoting Thomas Hellström (2023-08-22 19:21:32)
> This series adds a flag at VM_BIND time to pin the memory backing a VMA.
> Initially this is needed for long-running workloads on hardware that
> supports neither mid-thread preemption nor pagefaults, since without it
> the userptr MMU notifier will wait for preemption until preemption times
> out.

From a terminology perspective, a lot of folks among userspace and kernel
developers have come to understand pinned memory as memory that is locked
in place only while a dependent context is active on the hardware, and
that understanding has been tied to the lack of page-fault support.

As the plan here is to go a step further and never move that memory,
would it be worth calling such memory LOCKED instead? That would also
align with the CPU side (mlock(2); see the small sketch at the end of
this mail).

Per my understanding, the aspiration is to keep supporting locking memory
in place (within sysadmin-configured limits) even if page faults become
the de facto usage.

So, in short, should we do s/pinned/locked/ to avoid terminology
confusion between new and old drivers that userspace may have to deal
with from the same codebase?

Regards, Joonas

>
> Moving forward, this could also be supported for bo-backed VMAs, given
> that proper accounting takes place. A sysadmin could then optionally
> configure a system to be optimized for dealing with a single GPU
> application at a time.
>
> The series will be followed up with an igt series to exercise the uAPI.
>
> v2:
> - Address review comments by Matthew Brost.
>
> Thomas Hellström (4):
>   drm/xe/vm: Use onion unwind for xe_vma_userptr_pin_pages()
>   drm/xe/vm: Implement userptr page pinning
>   drm/xe/vm: Perform accounting of userptr pinned pages
>   drm/xe/uapi: Support pinning of userptr vmas
>
>  drivers/gpu/drm/xe/xe_vm.c       | 194 ++++++++++++++++++++++++-------
>  drivers/gpu/drm/xe/xe_vm.h       |   9 ++
>  drivers/gpu/drm/xe/xe_vm_types.h |  14 +++
>  include/uapi/drm/xe_drm.h        |  18 +++
>  4 files changed, 190 insertions(+), 45 deletions(-)
>
> --
> 2.41.0
>
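
For reference, the CPU-side "locked" semantics alluded to above are those
of mlock(2). A minimal sketch (plain POSIX, nothing xe-specific, and not
implying anything about the eventual uAPI flag naming):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 4096;
	void *buf = malloc(len);

	if (!buf)
		return 1;

	/*
	 * Lock the pages backing buf: the kernel keeps them resident and
	 * will not page them out to swap until munlock() or process exit.
	 */
	if (mlock(buf, len)) {
		perror("mlock");
		free(buf);
		return 1;
	}

	memset(buf, 0, len);	/* pages are guaranteed resident here */

	munlock(buf, len);
	free(buf);
	return 0;
}

Unprivileged processes are bounded by RLIMIT_MEMLOCK here, which is the
same kind of sysadmin-configured limit mentioned above.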