Re: [PATCH 13/26] RFC drm/xe/eudebug: userptr vm pread/pwrite

Quoting Christian König (2025-01-29 12:33:52)
> Am 29.01.25 um 09:03 schrieb Joonas Lahtinen:
> > Quoting Christian König (2024-12-20 14:56:14)
> >> Am 20.12.24 um 12:31 schrieb Mika Kuoppala:
> >>> Implement debugger vm access for userptrs.
> >>>
> >>> When bind is done, take ref to current task so that
> >>> we know from which vm the address was bound. Then during
> >>> debugger pread/pwrite we use this target task as
> >>> parameter to access the debuggee vm with access_process_vm().
> >>>
> >>> This is based on suggestions from Thomas, Joonas and Simona.
> >> Yeah that looks much saner to me. I still have a couple of comments on
> >> the general approach, but I'm going to write that up after my vacation.
> > I see you've had some issues with mail servers, so just pinging here to
> > see if any replies have got lost.
> 
> No, I'm just overworked and have like 10 things I need to take care of 
> at the same time :(

Ack.

> > Would be great to reach a consensus on the high level details before
> > spinning off further series addressing the smaller items.
> 
> I would say that attaching debug metadata to the GPU VMA doesn't look 
> like the best design, but if you just do that inside XE it won't affect 
> any other part of the kernel.

It just grew out of implementation convenience on the VM_BIND side.

The other alternative would be to maintain a secondary load map in the
kernel, in a data structure separate from the GPU VMA.

I was actually going to suggest something like this as a common DRM
feature: a GPU VMA metadata interface, or a parallel "GPU loadmap"
interface. It would allow userspace tooling to more easily go from a GPU
EU IP to the module that was loaded at that address. Kind of a step 0
towards backtraces for the GPU.

Can you elaborate on what your concern is with the VMA metadata
attachment?
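
For context, the metadata the series attaches today is essentially the
bind-time task reference described in the quoted commit message, and the
debugger pread/pwrite then routes through access_process_vm(). Roughly
along these lines (a sketch only; struct and function names here are
illustrative, not what the patches actually use):

#include <linux/mm.h>
#include <linux/sched.h>

/* Illustrative per-VMA debug metadata: the task that created the
 * userptr binding, with a reference taken at VM_BIND time. */
struct xe_debug_vma_metadata {
	struct task_struct *task;
	u64 userptr_start;	/* CPU VA the binding points at */
};

/* Debugger pread/pwrite path for a userptr VMA (sketch). */
static int xe_eudebug_userptr_access(struct xe_debug_vma_metadata *md,
				     u64 offset, void *buf, size_t len,
				     bool write)
{
	unsigned int gup_flags = FOLL_FORCE | (write ? FOLL_WRITE : 0);

	if (!md->task)
		return -EINVAL;

	/* Access the debuggee's CPU address space; returns the number of
	 * bytes copied. */
	return access_process_vm(md->task, md->userptr_start + offset,
				 buf, len, gup_flags);
}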

> My other concern I have is using ptrace_may_access, I would still try to 
> avoid that.
> 
> What if you first grab the DRM render node file descriptor which 
> represents the GPU address space you want to debug with pidfd_getfd() 
> and then either create the eudebug file descriptor from an IOCTL or 
> implement the necessary IOCTLs on the DRM render node directly?
> 
> That would make it unnecessary to export ptrace_may_access.

We're prototyping this. So far we've recognized a few caveats (a rough
sketch of the userspace flow follows the list):

1. There is a limitation that you don't get a notification when your
target PID opens a DRM client; GDB or another application would have to
keep polling for the FDs. I'll have to check with the team how that would
fit on the GDB side.

2. Debugging multiple DRM clients (to the same GPU) under one PID now
requires separate debugger connections. This may break the way the debugger
locking is currently implemented for the discovery phase to prevent parallel
IOCTLs from running. We'll have to look into it once we have a working
prototype.

3. Last but not least, we'll have to compare which LSM security modules
and other conditions are checked on the pidfd_getfd() path for access
restrictions.
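
For reference, the userspace side of that flow would roughly look like the
sketch below (the debugger would still need to discover the target's DRM
fd number somehow, e.g. by scanning /proc/<pid>/fd, and turning the stolen
fd into a debug connection would need a new, currently hypothetical IOCTL):

#include <sys/syscall.h>
#include <unistd.h>

/* Raw syscall wrappers; glibc may not provide these. */
static int sys_pidfd_open(pid_t pid, unsigned int flags)
{
	return syscall(SYS_pidfd_open, pid, flags);
}

static int sys_pidfd_getfd(int pidfd, int targetfd, unsigned int flags)
{
	return syscall(SYS_pidfd_getfd, pidfd, targetfd, flags);
}

/* Grab a duplicate of the debuggee's DRM render node fd.  The kernel
 * applies a ptrace-attach-style access check on this path, which is what
 * we need to compare against ptrace_may_access() semantics. */
int grab_debuggee_drm_fd(pid_t debuggee, int remote_fd_nr)
{
	int pidfd = sys_pidfd_open(debuggee, 0);

	if (pidfd < 0)
		return -1;

	return sys_pidfd_getfd(pidfd, remote_fd_nr, 0);
}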

The reason for using ptrace_may_access() was to have a clear 1:1 mapping
between a user being allowed to ptrace() a PID to control its CPU threads
and being allowed to issue debugger IOCTLs to control its GPU threads. So
if a user can attach to a PID with GDB, they would certainly also be able
to debug the GPU portion.

If there is divergence, I don't see a big benefit in going to
pidfd_getfd(). We'd then be in the same position of not exporting
ptrace_may_access() and YOLO'ing the access check by comparing euid and
such, which is close to what ptrace does but not exactly the same (just
like the pidfd_getfd() access checks).

However, I believe this would not be a fundamental blocker for the
series? If this were the only remaining disagreement, I guess we could
just do a CAP_SYS_ADMIN check initially until we find something
agreeable.

Regards, Joonas

> 
> Regards,
> Christian.
> 
> >
> > Regards, Joonas
> >
> >> Regards,
> >> Christian.
>
