Re: [PATCH v2 1/2] drm: Add GPU reset sysfs event

On Wed, Mar 16, 2022 at 4:48 AM Pekka Paalanen <ppaalanen@xxxxxxxxx> wrote:
>
> On Tue, 15 Mar 2022 10:54:38 -0400
> Alex Deucher <alexdeucher@xxxxxxxxx> wrote:
>
> > On Mon, Mar 14, 2022 at 11:26 AM Pekka Paalanen <ppaalanen@xxxxxxxxx> wrote:
> > >
> > > On Mon, 14 Mar 2022 10:23:27 -0400
> > > Alex Deucher <alexdeucher@xxxxxxxxx> wrote:
> > >
> > > > On Fri, Mar 11, 2022 at 3:30 AM Pekka Paalanen <ppaalanen@xxxxxxxxx> wrote:
> > > > >
> > > > > On Thu, 10 Mar 2022 11:56:41 -0800
> > > > > Rob Clark <robdclark@xxxxxxxxx> wrote:
> > > > >
> > > > > > For something like just notifying a compositor that a gpu crash
> > > > > > happened, perhaps drm_event is more suitable.  See
> > > > > > virtio_gpu_fence_event_create() for an example of adding new event
> > > > > > types.  Although maybe you want it to be an event which is not device
> > > > > > specific.  This isn't so much of a debugging use-case as simply
> > > > > > notification.
> > > > >
> > > > > Hi,
> > > > >
> > > > > for this particular use case, are we now talking about the display
> > > > > device (KMS) crashing or the rendering device (OpenGL/Vulkan) crashing?
> > > > >
> > > > > If the former, I wasn't aware that display device crashes are a thing.
> > > > > How should a userspace display server react to those?
> > > > >
> > > > > If the latter, don't we have EGL extensions or Vulkan API already to
> > > > > deliver that?
> > > > >
> > > > > The above would be about device crashes that directly affect the
> > > > > display server. Is that the use case in mind here, or is it instead
> > > > > about notifying the display server that some application has caused a
> > > > > driver/hardware crash? If the latter, how should a display server react
> > > > > to that? Disconnect the application?
> > > > >
> > > > > Shashank, what is the actual use case you are developing this for?
> > > > >
> > > > > I've read all the emails here so far, and I don't recall seeing it
> > > > > explained.
> > > > >
> > > >
> > > > The idea is that a support daemon or compositor would listen for GPU
> > > > reset notifications and do something useful with them (kill the guilty
> > > > app, restart the desktop environment, etc.).  Today when the GPU
> > > > resets, most applications just continue assuming nothing is wrong,
> > > > meanwhile the GPU has stopped accepting work until the apps re-init
> > > > their context so all of their command submissions just get rejected.
> > > >
> > > > > Btw. somewhat relatedly, there has been work aiming to allow
> > > > > graceful hot-unplug of DRM devices. There is a kernel doc outlining how
> > > > > the various APIs should react towards userspace when a DRM device
> > > > > suddenly disappears. That seems to have some overlap here IMO.
> > > > >
> > > > > See https://www.kernel.org/doc/html/latest/gpu/drm-uapi.html#device-hot-unplug
> > > > > which also has a couple pointers to EGL and Vulkan APIs.
> > > >
> > > > The problem is most applications don't use the GL or VK robustness
> > > > APIs.
> > >
> > > Hi,
> > >
> > > how would this new event help with that?
> >
> > This event would provide notification that a GPU reset occurred.
> >
> > >
> > > I mean, yeah, there could be a daemon that kills those GPU users, but
> > > then what? You still lose any unsaved work, and may need to manually
> > > restart them.
> > >
> > > Is the idea that it is better to have the app crash and disappear than
> > > to look like it froze while it otherwise still runs?
> >
> > Yes.
>
> Ok. That was just a wild guess.
>
> >  The daemon could also send the user some sort of notification
> > that a GPU reset occurred.
> >
> > >
> > > If some daemon or compositor goes killing apps that trigger GPU resets,
> > > then how do we stop that for an app that actually does use the
> > > appropriate EGL or Vulkan APIs to detect and remedy that situation
> > > itself?
> >
> > I guess the daemon could keep some sort of whitelist.  OTOH, very few
> > applications, if any (games especially), actually support these
> > extensions.
>
> I would think that a white-list is a non-starter due to the maintenance
> nightmare and the "wrong by default" design for well behaving programs.
>
> I am not a fan of optimising for broken software. I understand that
> with games this is routine, but we're talking about everything here,
> not just games. Games cannot be fixed, but the rest could if the
> problem was not swept under the rug. It's like the inverse of the
> platform problem.

Fair enough, but applications haven't adopted those APIs in the last 15 years or so.

>
> > > >  You could use something like that in the compositor, but those
> > > > APIs tend to be focused more on the application itself rather than the
> > > > GPU in general.  E.g., is my context lost?  Which is fine for
> > > > restarting your context, but doesn't really help if you want to try
> > > > and do something with another application (i.e., the likely guilty
> > > > app).  Also, on dGPU at least, when you reset the GPU, vram is usually
> > > > lost (either due to the memory controller being reset, or vram being
> > > > zeroed on init due to ECC support), so even if you are not the guilty
> > > > process, in that case you'd need to re-init your context anyway.
> > >
> > > Why should something like a compositor listen for this and kill apps
> > > that triggered GPU resets, instead of e.g. Mesa noticing that in the app
> > > and killing itself? Mesa in the app would know if robustness API is
> > > being used.
> >
> > That's another possibility, but it doesn't handle the case where the
> > compositor doesn't support any sort of robustness extension, so if the
> > GPU was reset, you'd lose your desktop anyway even if the app kept
> > running.
>
> Why does that matter?
>
> A GPU reset happens when it happens. If a compositor does not use
> robustness extensions, it's as good as dead anyway, right?

Right, and the same goes for an application that supports robustness.
If the app supports robustness but the compositor does not, the app is
dead anyway once the compositor dies.  So even if the odd application
today does support robustness, that support is largely moot: the app
may manage to recover, but nothing else it relies on will.

>
> Killing a compositor from inside in Mesa if it doesn't use robustness
> might be better than leaving the compositor running blind - assuming
> the compositor does not quit itself after seeing crucial EGL/Vulkan
> calls failing.

I'm not sure I follow the second part of that statement.  I guess
having Mesa kill applications that don't support robustness is fine,
but I don't really see it as much better than the status quo.  Today
most apps just keep trying to run after a reset, possibly dying
eventually due to undefined state along the way; if Mesa kills them
instead, they are simply dead sooner.  The end result is the same
either way: the user loses their desktop.
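
For reference, a robustness-aware client would do roughly what the
sketch below does: poll the reset status every so often and recreate
its context (and everything that lived in VRAM) when the status comes
back non-zero.  This is only an illustration of the GL_ARB_robustness
path; it assumes a context created with robust access and an entry
point resolved via glXGetProcAddress()/eglGetProcAddress(), and is not
taken from any particular application.

/* Sketch only: GL_ARB_robustness reset detection.  Assumes the context
 * was created with robust access (e.g. GLX/EGL create_context_robustness)
 * and that get_reset_status was resolved with glXGetProcAddress() or
 * eglGetProcAddress() at startup. */
#include <stdbool.h>
#include <GL/gl.h>
#include <GL/glext.h>

static PFNGLGETGRAPHICSRESETSTATUSARBPROC get_reset_status;

static bool context_was_reset(void)
{
        /* GL_NO_ERROR means no reset has occurred; otherwise one of
         * GL_GUILTY_CONTEXT_RESET_ARB, GL_INNOCENT_CONTEXT_RESET_ARB or
         * GL_UNKNOWN_CONTEXT_RESET_ARB is returned. */
        GLenum status = get_reset_status();

        if (status == GL_NO_ERROR)
                return false;

        /* The context is lost: the app has to tear it down, create a
         * new one and re-upload any resources that were in VRAM. */
        return true;
}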

>
> > >
> > > Would be really nice to have the answers to all these questions to be
> > > collected and reiterated in the next version of this proposal.
> >
> > The idea is to provide the notification of a GPU reset.  What the
> > various desktop environments or daemons do with it is up to them.  I
> > still think there is value in a notification even if you don't kill
> > apps or anything like that.  E.g., you can have a daemon running that
> > gets notified and logs the error, collects debug info, sends an email,
> > etc.
>
> With new UAPI comes the demand of userspace proof, not hand-waving. You
> would not be proposing this new interface if you didn't have use cases
> in mind, even just one. You have to document what you imagine the new
> thing to be used for, so that the appropriateness can be evaluated. If
> the use case is deemed inappropriate for the proposed UAPI, you need to
> find another use case to justify adding the new UAPI. If there is no
> use for the UAPI, it shouldn't be added, right? Adding UAPI and hoping
> someone finds use for it seems backwards to me.

We do have a use case; it's what I described originally.  There is a
user space daemon (could be a compositor, could be something else)
that runs and listens for GPU reset notifications.  When it receives
one, it takes action: it kills the guilty app, restarts the
compositor, and gathers any relevant data related to the GPU hang
(if possible).  We can revisit this discussion once we have the whole
implementation complete.  Other drivers already do similar things
today via different means (msm uses devcoredump, i915 seems to have
its own GPU reset notification mechanism, etc.).  It just seemed like
there was value in having a generic drm GPU reset notification, but
maybe not yet.
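
To make that concrete, the daemon side could be as simple as the
sketch below: sit on a udev monitor for the drm subsystem and react
when a reset uevent shows up.  The property name used here
("GPU_RESET_PID") is made up purely for illustration; the actual keys
would be whatever the final patch emits.

/* Illustrative sketch of the daemon described above, not the actual
 * implementation.  Listens for drm uevents via libudev and reacts to a
 * hypothetical GPU-reset notification.  Build with -ludev. */
#include <libudev.h>
#include <poll.h>
#include <stdio.h>

int main(void)
{
        struct udev *udev = udev_new();
        struct udev_monitor *mon = udev_monitor_new_from_netlink(udev, "udev");

        udev_monitor_filter_add_match_subsystem_devtype(mon, "drm", NULL);
        udev_monitor_enable_receiving(mon);

        struct pollfd pfd = {
                .fd = udev_monitor_get_fd(mon),
                .events = POLLIN,
        };

        for (;;) {
                if (poll(&pfd, 1, -1) <= 0)
                        continue;

                struct udev_device *dev = udev_monitor_receive_device(mon);
                if (!dev)
                        continue;

                /* "GPU_RESET_PID" is a placeholder key, not the real uAPI. */
                const char *pid =
                        udev_device_get_property_value(dev, "GPU_RESET_PID");
                if (pid) {
                        /* This is where the daemon would log the event,
                         * collect a devcoredump if one is available, notify
                         * the user, and/or kill and restart the guilty
                         * client or the compositor. */
                        fprintf(stderr, "GPU reset reported, guilty pid %s\n",
                                pid);
                }

                udev_device_unref(dev);
        }
}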

Alex

>
>
> Thanks,
> pq


