Re: [PATCH 14/26] drm/xe/eudebug: implement userptr_vma access

On Thu, Dec 12, 2024 at 11:12:39AM +0100, Simona Vetter wrote:
> On Thu, Dec 12, 2024 at 09:49:24AM +0100, Thomas Hellström wrote:
> > On Mon, 2024-12-09 at 16:31 +0100, Simona Vetter wrote:
> > > On Mon, Dec 09, 2024 at 03:03:04PM +0100, Christian König wrote:
> > > > Am 09.12.24 um 14:33 schrieb Mika Kuoppala:
> > > > > From: Andrzej Hajda <andrzej.hajda@xxxxxxxxx>
> > > > > 
> > > > > Debugger needs to read/write program's vmas including userptr_vma.
> > > > > Since hmm_range_fault is used to pin userptr vmas, it is possible
> > > > > to map those vmas from debugger context.
> > > > 
> > > > Oh, this implementation is extremely questionable as well. Adding
> > > > the LKML and the MM list as well.
> > > > 
> > > > First of all hmm_range_fault() does *not* pin anything!
> > > > 
> > > > In other words you don't have a page reference when the function
> > > > returns, but rather just a sequence number you can check for
> > > > modifications.
> > > 
> > > I think it's all there, holds the invalidation lock during the
> > > critical access/section, drops it when reacquiring pages, retries
> > > until it works.
> > > 
> > > I think the issue is more that everyone hand-rolls userptr.
> > > Probably time we standardize that and put it into gpuvm as an
> > > optional part, with consistent locking, naming (like not calling it
> > > _pin_pages when it's unpinned userptr), kerneldoc and all the nice
> > > things so that we stop consistently getting confused by other
> > > drivers' userptr code.
> > > 
> > > I think that was on the plan originally as an eventual step, I
> > > guess time to pump that up. Matt/Thomas, thoughts?
> > 
> > It looks like we have this planned and ongoing but there are some
> > complications and thoughts.
> > 
> > 1) A drm_gpuvm implementation would be based on vma userptrs, and would
> > be pretty straightforward based on xe's current implementation and, as
> > you say, renaming.
> > 

My thoughts...

Standardize how gpuvm represents userptr gpuvmas a bit. In Xe we
basically set the BO to NULL in the gpuvma and then have some helpers
in Xe to determine if a gpuvma is a userptr. I think some of this code
could be moved into gpuvm so drivers do this in a standard way.

I think NULL bindings also set the BO to NULL, so perhaps we
standardize that in gpuvm too.
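
As a rough sketch, a common helper could look something like this
(DRM_GPUVA_USERPTR and a userptr address field are assumptions about
what gpuvm would grow, not existing API):

	/* Hypothetical helper, would live in drm_gpuvm.h once
	 * standardized. A userptr gpuvma carries no GEM object, only
	 * a CPU address, so a NULL BO plus an explicit flag
	 * distinguishes it from a NULL binding. */
	static inline bool drm_gpuva_is_userptr(struct drm_gpuva *va)
	{
		return !va->gem.obj && (va->flags & DRM_GPUVA_USERPTR);
	}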

> > 2) Current Intel work to land this on the drm level is based on
> > drm_gpusvm (minus migration to VRAM). I'm not fully sure yet how this
> > will integrate with drm_gpuvm.
> > 

Implement the userptr locking / page collection (i.e. the
hmm_range_fault call) on top of gpusvm. Perhaps decouple the current
page collection from drm_gpusvm_range into an embedded struct like
drm_gpusvm_devmem. The plan was to more or less land gpusvm, which is
on the list addressing Thomas's feedback, before doing the userptr
rework on top.
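
For the decoupled page collection state I have something like the
following in mind, mirroring how drm_gpusvm_devmem is embedded today
(all names here are assumptions about the rework, not existing code):

	/* Hypothetical: page collection state split out of
	 * drm_gpusvm_range so userptr code can embed it without
	 * carrying a full SVM range. */
	struct drm_gpusvm_pages {
		struct mmu_interval_notifier notifier;
		unsigned long *pfns;	/* hmm_range_fault() results */
		unsigned long npages;
		unsigned long notifier_seq; /* for read_retry checks */
	};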

As of now, a different engineer will own this rework, with Thomas and
myself providing guidance and welcoming community input. Xe will
likely be the first user, so if we have to tweak things as more
drivers start to use this, that is fine and we will be open to any
changes.

> > 3) Christian mentioned a plan to have a common userptr implementation
> > based off drm_exec. I figure that would be bo-based like the amdgpu
> > implementation still is. Possibly i915 would be interested in this but
> > I think any VM_BIND based driver would want to use drm_gpuvm /
> > drm_gpusvm implementation, which is also typically O(1), since userptrs
> > are considered vm-local.

I don't think any new driver would want a userptr implementation based
on drm_exec because it requires using BOs, which isn't necessary if
drm_gpuvm / drm_gpusvm is used, and I suspect all new drivers will use
those. Sure, it could be useful for amdgpu / i915, but for Xe we
certainly wouldn't want this, nor would any VM-bind-only driver.

> > 
> > Ideas / suggestions welcome
> 
> So just discussed this a bit with Joonas, and if we use access_remote_vm
> for the userptr access instead of hand-rolling then we really only need
> bare-bones data structure changes in gpuvm, and nothing more. So
> 
> - add the mm pointer to struct drm_gpuvm
> - add a flag indicating that it's a userptr + userspace address to struct
>   drm_gpuva
> - since we already have userptr in drivers I guess there shouldn't be
>   any need to adjust the actual drm_gpuvm code to cope with these
> 
> Then with this you can write the access helper using access_remote_vm
> since that does the entire remote mm walking internally, and so there's
> no need to also have all the mmu notifier and locking lifted to gpuvm. But
> it does already give us some great places to put relevant kerneldocs (not
> just for debugging architecture, but userptr stuff in general), which is
> already a solid step forward.
> 
> Plus I think it would also be a solid first step that we need no matter
> what for figuring out the questions/options you have above.
> 
> Thoughts?

This seems like it could work with everything I've written above.
Maybe this lives in gpusvm though, so we have a clear divide where
gpuvm is GPU address space and gpusvm is CPU address space. Kind of a
bikeshed, but I agree in general that if we need to access / modify
userptrs this should live in common code.
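
To make that concrete, a minimal sketch of such a helper could look
like this (vm->mm and va->userptr.addr stand in for the hypothetical
gpuvm additions listed above, they are not existing fields):

	int drm_gpuva_userptr_access(struct drm_gpuva *va, u64 offset,
				     void *buf, int len, bool write)
	{
		struct mm_struct *mm = va->vm->mm;
		unsigned long addr = va->userptr.addr + offset;
		int ret;

		/* Pin the mm so it cannot go away under us;
		 * access_remote_vm() does the page table walk and
		 * faulting internally, like ptrace does. */
		if (!mmget_not_zero(mm))
			return -EFAULT;

		ret = access_remote_vm(mm, addr, buf, len,
				       write ? FOLL_WRITE : 0);
		mmput(mm);

		return ret;	/* bytes copied, may be short */
	}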

Do we view this userptr rework as a blocker for EuDebug? My thinking
is we don't, as we (Intel) have fully committed to a common userptr
implementation.

FWIW, I really don't like the implementation in this patch, and I have
stated this many times, but that feedback seems to have been ignored
yet again. I'd prefer an open-coded hmm_range_fault loop for now
rather than a new xe_res_cursor concept that will get thrown away.
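
For reference, by open-coded loop I mean the standard
hmm_range_fault() retry pattern from Documentation/mm/hmm.rst,
roughly (names illustrative, setup and error handling trimmed):

	do {
		range.notifier_seq =
			mmu_interval_read_begin(range.notifier);
		mmap_read_lock(mm);
		ret = hmm_range_fault(&range);
		mmap_read_unlock(mm);
		if (ret) {
			if (ret == -EBUSY)
				continue;
			return ret;
		}
		/* The collected pfns stay valid only while the
		 * notifier seqno is unchanged; the caller must take
		 * its notifier lock before using them and re-check
		 * here. */
	} while (mmu_interval_read_retry(range.notifier,
					 range.notifier_seq));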

Matt

> -Sima
> 
> > 
> > > -Sima
> > > 
> > > > 
> > > > > v2: pin pages vs notifier, move to vm.c (Matthew)
> > > > > v3: - iterate over system pages instead of DMA, fixes iommu enabled
> > > > >      - s/xe_uvma_access/xe_vm_uvma_access/ (Matt)
> > > > > 
> > > > > Signed-off-by: Andrzej Hajda <andrzej.hajda@xxxxxxxxx>
> > > > > Signed-off-by: Maciej Patelczyk <maciej.patelczyk@xxxxxxxxx>
> > > > > Signed-off-by: Mika Kuoppala <mika.kuoppala@xxxxxxxxxxxxxxx>
> > > > > Reviewed-by: Jonathan Cavitt <jonathan.cavitt@xxxxxxxxx> #v1
> > > > > ---
> > > > >   drivers/gpu/drm/xe/xe_eudebug.c |  3 ++-
> > > > >   drivers/gpu/drm/xe/xe_vm.c      | 47 +++++++++++++++++++++++++++++++++
> > > > >   drivers/gpu/drm/xe/xe_vm.h      |  3 +++
> > > > >   3 files changed, 52 insertions(+), 1 deletion(-)
> > > > > 
> > > > > diff --git a/drivers/gpu/drm/xe/xe_eudebug.c b/drivers/gpu/drm/xe/xe_eudebug.c
> > > > > index 9d87df75348b..e5949e4dcad8 100644
> > > > > --- a/drivers/gpu/drm/xe/xe_eudebug.c
> > > > > +++ b/drivers/gpu/drm/xe/xe_eudebug.c
> > > > > @@ -3076,7 +3076,8 @@ static int xe_eudebug_vma_access(struct xe_vma *vma, u64 offset_in_vma,
> > > > >   		return ret;
> > > > >   	}
> > > > > -	return -EINVAL;
> > > > > +	return xe_vm_userptr_access(to_userptr_vma(vma), offset_in_vma,
> > > > > +				    buf, bytes, write);
> > > > >   }
> > > > >   static int xe_eudebug_vm_access(struct xe_vm *vm, u64 offset,
> > > > > diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> > > > > index 0f17bc8b627b..224ff9e16941 100644
> > > > > --- a/drivers/gpu/drm/xe/xe_vm.c
> > > > > +++ b/drivers/gpu/drm/xe/xe_vm.c
> > > > > @@ -3414,3 +3414,50 @@ void xe_vm_snapshot_free(struct xe_vm_snapshot *snap)
> > > > >   	}
> > > > >   	kvfree(snap);
> > > > >   }
> > > > > +
> > > > > +int xe_vm_userptr_access(struct xe_userptr_vma *uvma, u64 offset,
> > > > > +			 void *buf, u64 len, bool write)
> > > > > +{
> > > > > +	struct xe_vm *vm = xe_vma_vm(&uvma->vma);
> > > > > +	struct xe_userptr *up = &uvma->userptr;
> > > > > +	struct xe_res_cursor cur = {};
> > > > > +	int cur_len, ret = 0;
> > > > > +
> > > > > +	while (true) {
> > > > > +		down_read(&vm->userptr.notifier_lock);
> > > > > +		if (!xe_vma_userptr_check_repin(uvma))
> > > > > +			break;
> > > > > +
> > > > > +		spin_lock(&vm->userptr.invalidated_lock);
> > > > > +		list_del_init(&uvma->userptr.invalidate_link);
> > > > > +		spin_unlock(&vm->userptr.invalidated_lock);
> > > > > +
> > > > > +		up_read(&vm->userptr.notifier_lock);
> > > > > +		ret = xe_vma_userptr_pin_pages(uvma);
> > > > > +		if (ret)
> > > > > +			return ret;
> > > > > +	}
> > > > > +
> > > > > +	if (!up->sg) {
> > > > > +		ret = -EINVAL;
> > > > > +		goto out_unlock_notifier;
> > > > > +	}
> > > > > +
> > > > > +	for (xe_res_first_sg_system(up->sg, offset, len, &cur); cur.remaining;
> > > > > +	     xe_res_next(&cur, cur_len)) {
> > > > > +		void *ptr = kmap_local_page(sg_page(cur.sgl)) + cur.start;
> > > > 
> > > > The interface basically creates a side channel to access userptrs
> > > > the way a userspace application would, without actually going
> > > > through userspace.
> > > > 
> > > > That is generally not something a device driver should ever do as
> > > > far as I can see.
> > > > 
> > > > > +
> > > > > +		cur_len = min(cur.size, cur.remaining);
> > > > > +		if (write)
> > > > > +			memcpy(ptr, buf, cur_len);
> > > > > +		else
> > > > > +			memcpy(buf, ptr, cur_len);
> > > > > +		kunmap_local(ptr);
> > > > > +		buf += cur_len;
> > > > > +	}
> > > > > +	ret = len;
> > > > > +
> > > > > +out_unlock_notifier:
> > > > > +	up_read(&vm->userptr.notifier_lock);
> > > > 
> > > > I just strongly hope that this will prevent the mapping from
> > > > changing.
> > > > 
> > > > Regards,
> > > > Christian.
> > > > 
> > > > > +	return ret;
> > > > > +}
> > > > > diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
> > > > > index 23adb7442881..372ad40ad67f 100644
> > > > > --- a/drivers/gpu/drm/xe/xe_vm.h
> > > > > +++ b/drivers/gpu/drm/xe/xe_vm.h
> > > > > @@ -280,3 +280,6 @@ struct xe_vm_snapshot *xe_vm_snapshot_capture(struct xe_vm *vm);
> > > > >   void xe_vm_snapshot_capture_delayed(struct xe_vm_snapshot *snap);
> > > > >   void xe_vm_snapshot_print(struct xe_vm_snapshot *snap, struct drm_printer *p);
> > > > >   void xe_vm_snapshot_free(struct xe_vm_snapshot *snap);
> > > > > +
> > > > > +int xe_vm_userptr_access(struct xe_userptr_vma *uvma, u64 offset,
> > > > > +			 void *buf, u64 len, bool write);
> > > > 
> > > 
> > 
> 
> -- 
> Simona Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch


