Re: [PATCH v10 2/4] drm/doc: Document device wedged event

On Tue, Jan 28, 2025 at 01:38:09PM +0200, Pekka Paalanen wrote:
> On Tue, 28 Jan 2025 11:37:53 +0200
> Raag Jadav <raag.jadav@xxxxxxxxx> wrote:
> 
> > On Mon, Jan 27, 2025 at 12:23:28PM +0200, Pekka Paalanen wrote:
> > > On Wed, 22 Jan 2025 07:22:25 +0200
> > > Raag Jadav <raag.jadav@xxxxxxxxx> wrote:
> > >   
> > > > On Tue, Jan 21, 2025 at 02:14:56AM +0100, Xaver Hugl wrote:  
> > > > > > +It is the responsibility of the consumer to make sure that the device or
> > > > > > +its resources are not in use by any process before attempting recovery.    
> > > > > I'm not convinced this is actually doable in practice, outside of
> > > > > killing all apps that aren't the one trying to recover the GPU.
> > > > > Is this just about not crashing those processes if they don't handle
> > > > > GPU hotunplugs well, about leaks, or something else?    
> > > > 
> > > > Correct, all of it. And since the compositor is in charge of device resources,
> > > > this way it at least has the opportunity to recover the device and recreate
> > > > its context without all the userspace violence.
> > > 
> > > Hi Raag,
> > > 
> > > sorry, I haven't followed this series, so I wonder, why should
> > > userspace be part of recovering the device? Why doesn't the kernel
> > > automatically load a new driver instance with a new DRM device node?  
> > 
> > There are things like bus-level reset (PCI SBR) and re-enumeration that are
> > not possible from driver context (or at least I'm not aware of a way), so a
> > new driver instance would be no more useful than the old one.
> 
> Ok, "not possible from driver context" is a key revelation. I wonder if
> starting an overview section with that in the documentation would help
> getting the right mindset.

Not "not possible" in a literal sense, but rather allowing something that
drastic and convoluted that is probably beyond the scope of the driver.
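
For instance, re-enumeration is something userspace can already drive
through sysfs. A rough sketch in C; the PCI address is purely illustrative
and error handling is omitted:

#include <stdio.h>

static void sysfs_write(const char *path, const char *val)
{
        FILE *f = fopen(path, "w");

        if (f) {
                fputs(val, f);
                fclose(f);
        }
}

int main(void)
{
        /* Illustrative BDF; in practice it comes from the wedged event. */
        sysfs_write("/sys/bus/pci/devices/0000:03:00.0/remove", "1");
        sysfs_write("/sys/bus/pci/rescan", "1");
        return 0;
}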

> Did I miss that somewhere?

The first two paragraphs are meant as an introduction. Let me know if
something there isn't coming across clearly.

> I thought bus level reset meant resetting the PCI device by some bus
> API. Clearly mistaken, I suppose you mean resetting the whole bus
> including everything on it?

I'm no PCI expert, but yes, that is at least my understanding.

...

> > > (More important for userspace would be to know if dmabuf fds remain
> > > pointing to valid memory retaining its contents, or if the contents are
> > > lost. Userspace cannot tell which device a dmabuf originates from,
> > > AFAIK, so this would need to be added in the generic dmabuf UAPI.)
> > 
> > Not sure if I understand, perhaps Christian can shed some light here.
> 
> A system might have multiple GPUs, and one GPU going down may leave all
> the rest working as usual. A Wayland compositor would want to tell the
> difference between still-good and possibly or definitely lost dmabufs
> it received from its clients.

Usually, the 'DEVNAME=' and 'DEVPATH=' values refer to the device that
generated the event.
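
For illustration, a minimal libudev sketch of how a consumer might pick
those out of the uevent, assuming the 'WEDGED=' key carrying the recovery
method as proposed in this series (error handling trimmed; build with
-ludev):

#include <poll.h>
#include <stdio.h>
#include <libudev.h>

int main(void)
{
        struct udev *udev = udev_new();
        struct udev_monitor *mon = udev_monitor_new_from_netlink(udev, "udev");
        struct pollfd pfd;

        /* Only DRM uevents are of interest here. */
        udev_monitor_filter_add_match_subsystem_devtype(mon, "drm", NULL);
        udev_monitor_enable_receiving(mon);

        pfd.fd = udev_monitor_get_fd(mon);
        pfd.events = POLLIN;

        while (poll(&pfd, 1, -1) > 0) {
                struct udev_device *dev = udev_monitor_receive_device(mon);
                const char *wedged;

                if (!dev)
                        continue;

                wedged = udev_device_get_property_value(dev, "WEDGED");
                if (wedged)
                        printf("%s (%s) wedged, recovery: %s\n",
                               udev_device_get_property_value(dev, "DEVNAME"),
                               udev_device_get_property_value(dev, "DEVPATH"),
                               wedged);

                udev_device_unref(dev);
        }
        return 0;
}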

> But this is off-topic in this thread I believe, nothing to the series
> at hand.

...

> > > > > > +be unmapped and file descriptors should be closed to prevent leaks.    
> > > > > Afaiu from a userspace POV, a rebind is just like a GPU hotunplug +
> > > > > hotplug with matching "remove" and "add" udev events. As long as the
> > > > > application cleans up resources related to the device when it receives
> > > > > the event, there should be no leaks with a normal hotunplug... Is this
> > > > > different enough that we can't have the same expectations?    
> > > > 
> > > > The thing about the "remove" event is that it is generated *after* we opt
> > > > for an unbind, and at that point it might already be too late: if userspace
> > > > doesn't get enough time to clean things up, the device is removed with a
> > > > live client, with unknown consequences.
> > > > 
> > > > The idea here is to clean things up *before* we opt for an unbind, leaving
> > > > no room for side effects.
> > > 
> > > Something here feels fragile. There should not be a deadline for
> > > userspace to finish cleaning up. What was described for KMS device nodes
> > > in this same document seems like a more reliable approach: keep the
> > > dead driver instance around until userspace has closed all references
> > > to it. The device node could be removed earlier.  
> > 
> > I'm not sure if I'm following here. The driver instance will exist as long
> > as the dead device exists, which the consumer can remove if/when it chooses
> > to trigger an unbind from userspace. There is no deadline for it.
> 
> I was going by your words: "it might be already too late if userspace
> doesn't get enough time to clean things up".

The idea here is to completely detach kernel and userspace *before* moving
forward with the recovery.
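
Roughly, assuming PCI and taking the unbind/reset/bind route (device
address and driver name are made up for illustration):

#include <stdio.h>

#define BDF    "0000:03:00.0"              /* illustrative */
#define DRIVER "/sys/bus/pci/drivers/xe/"  /* illustrative */

static void sysfs_write(const char *path, const char *val)
{
        FILE *f = fopen(path, "w");

        if (f) {
                fputs(val, f);
                fclose(f);
        }
}

int main(void)
{
        /* 1. Detach the driver; no fds or mappings may remain. */
        sysfs_write(DRIVER "unbind", BDF);

        /* 2. Recover the device, e.g. a function-level reset. */
        sysfs_write("/sys/bus/pci/devices/" BDF "/reset", "1");

        /* 3. Bind again for a fresh driver instance. */
        sysfs_write(DRIVER "bind", BDF);
        return 0;
}

Step 1 only works out cleanly if the consumer has already made sure no
process holds the device open, which is what the documentation asks for.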

> > The consumer can choose to rely on hotplug events if it wishes, but the point
> > here is that they don't guarantee a clean recovery in all cases.
> 
> Clearly I don't understand the whole picture here. No worries,
> nevermind.

Fewer moving parts means a better chance of success.

Raag


