Re: [PATCH v2 1/2] drm: Add GPU reset sysfs event

On Thu, Mar 17, 2022 at 10:27 AM Daniel Vetter <daniel@xxxxxxxx> wrote:
>
> On Thu, Mar 17, 2022 at 08:40:51AM -0700, Rob Clark wrote:
> > On Thu, Mar 17, 2022 at 2:29 AM Daniel Vetter <daniel@xxxxxxxx> wrote:
> > >
> > > On Thu, Mar 17, 2022 at 08:03:27AM +0100, Christian König wrote:
> > > > Am 16.03.22 um 16:36 schrieb Rob Clark:
> > > > > [SNIP]
> > > > > just one point of clarification.. in the msm and i915 case it is
> > > > > purely for debugging and telemetry (ie. sending crash logs back to
> > > > > distro for analysis if user has crash reporting enabled).. it isn't
> > > > > used for triggering any action like killing app or compositor.
> > > >
> > > > By the way, how does msm do its memory management for the devcoredumps?
> > >
> > > GFP_NORECLAIM all the way. It's purely best effort.
> > >
> > > Note that the fancy new plan for i915 discrete gpu is to only support gpu
> > > crash dumps on non-recoverable gpu contexts, i.e. those that do not
> > > continue to the next batch when something bad happens. This is what vk
> > > wants and also what iris now uses (we do context recovery in userspace in
> > > all cases), and non-recoverable contexts greatly simplify the crash dump
> > > gathering: the only thing you need to gather is the register state from hw
> > > (before you reset it); all the batchbuffer bo and indirect state bo (in
> > > i915 you can mark which bo to capture in the CS ioctl) can be captured in
> > > a worker later on. Which for non-recoverable contexts is no issue, since
> > > subsequent batchbuffers won't trample over any of these things.
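
(A minimal sketch of that two-phase capture, for anyone unfamiliar with
devcoredump: everything named foo_* below is a placeholder, only
dev_coredumpv(), the gfp flags and the workqueue calls are real kernel
API.)

#include <linux/devcoredump.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct foo_crash_state {
        struct work_struct work;
        struct foo_gpu *gpu;    /* hypothetical driver struct */
        u32 *regs;              /* register snapshot, taken before reset */
        size_t nr_regs;
};

/* Worker: the context is non-recoverable, so nothing will overwrite the
 * batchbuffer / indirect-state BOs and we can copy them with GFP_KERNEL.
 */
static void foo_capture_bos_work(struct work_struct *work)
{
        struct foo_crash_state *state =
                container_of(work, struct foo_crash_state, work);
        void *dump;
        size_t len;

        /* hypothetical helper: vmalloc()s and fills the final dump */
        dump = foo_format_dump(state, &len);
        if (dump)
                /* dev_coredumpv() takes ownership and vfree()s the buffer */
                dev_coredumpv(state->gpu->dev, dump, len, GFP_KERNEL);

        kfree(state->regs);
        kfree(state);
}

/* Reset path: no blocking allocations allowed here, so GFP_NOWAIT and
 * best effort only -- if the allocation fails we simply skip the dump.
 */
static void foo_capture_registers(struct foo_gpu *gpu)
{
        struct foo_crash_state *state;

        state = kzalloc(sizeof(*state), GFP_NOWAIT | __GFP_NOWARN);
        if (!state)
                return;

        state->gpu = gpu;
        state->nr_regs = foo_snapshot_regs(gpu, &state->regs); /* hypothetical */

        INIT_WORK(&state->work, foo_capture_bos_work);
        queue_work(system_unbound_wq, &state->work);
}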
> >
> > fwiw, we snapshot everything (cmdstream and bo's marked with dump
> > flag, in addition to hw state) before resuming the GPU, so there is no
> > danger of things being trampled.  After state is captured and GPU
> > reset, we "replay" the submits that were written into the ringbuffer
> > after the faulting submit.  GPU crashes should be a thing you don't
> > need to try to optimize.
>
> Not sure why you think we optimize anything here?
>
> > (At some point, I'd like to use the scheduler for the replay, and actually
> > use drm_sched_stop()/etc.. but last time I looked there were still
> > some sched bugs in that area which prevented me from deleting a bunch
> > of code ;-))
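
(For reference, a drm_sched based recovery path of the kind mentioned
here roughly follows the shape below; the foo_* helpers and the locking
are driver specific, while the drm_sched_* calls are the scheduler API
as it exists around this time.)

#include <drm/gpu_scheduler.h>

static enum drm_gpu_sched_stat foo_timedout_job(struct drm_sched_job *sched_job)
{
        struct foo_gpu *gpu = to_foo_gpu(sched_job->sched);     /* hypothetical */

        /* Park the scheduler and detach fence callbacks from pending jobs */
        drm_sched_stop(&gpu->sched, sched_job);

        foo_crashstate_capture(gpu, sched_job); /* devcoredump etc. */
        foo_hw_reset(gpu);

        /* Push the already-queued jobs back to the hardware ... */
        drm_sched_resubmit_jobs(&gpu->sched);

        /* ... and let the scheduler run again */
        drm_sched_start(&gpu->sched, true);

        return DRM_GPU_SCHED_STAT_NOMINAL;
}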
>
> Not sure about your hw, but at least on intel replaying tends to just
> result in follow-on fun. And that holds even more so the more complex a
> workload is. This is why vk just dies immediately and does not try to
> replay anything, offloading it to the app. Same with ARB robustness.
> Afaik it's really only media and classic gl which insist that the driver
> stack somehow recover.

At least for us, each submit must be self-contained (ie. not rely on
previous GPU hw state), so in practice replay works out pretty well.
The worst case is that subsequent submits from the same process fail as
well (if they depended on something the crashing submit failed to write
back to memory), but in that case they just crash too and we move on to
the next one.  Recent gens (a5xx+ at least) are pretty good about
quickly detecting problems and giving us an error irq.
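
(Roughly, that replay amounts to something like the sketch below; this
is illustrative only, not msm's actual recover path, and all foo_*
names are made up.)

#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/workqueue.h>

/* Recovery worker, kicked by the error irq: snapshot state, reset the
 * hw, then re-write the submits that came after the faulting one.
 */
static void foo_recover_worker(struct work_struct *work)
{
        struct foo_gpu *gpu = container_of(work, struct foo_gpu, recover_work);
        struct foo_submit *submit, *faulting;

        mutex_lock(&gpu->lock);

        faulting = foo_find_faulting_submit(gpu);       /* hypothetical */

        /* Snapshot hw state and the BOs marked for dumping *before* reset */
        foo_crashstate_capture(gpu, faulting);

        foo_hw_reset(gpu);

        /* Each submit is self-contained, so everything queued after the
         * faulting one can simply be written into the ringbuffer again.
         */
        list_for_each_entry(submit, &gpu->active_submits, node) {
                if (faulting && submit->seqno > faulting->seqno)
                        foo_write_to_ringbuffer(gpu, submit);
        }

        mutex_unlock(&gpu->lock);
}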

BR,
-R

> And recovering from a mess in userspace is a lot simpler than trying to
> pull off the same magic in the kernel. Plus it also helps with a few of the
> dma_fence rules, which is a nice bonus.
> -Daniel
>
> >
> > BR,
> > -R
> >
> > >
> > > And that way you can record the crashdump (or at least the big pieces like
> > > all the indirect state stuff) with GFP_KERNEL.
> > >
> > > msm probably gets it wrong since embedded drivers have much less shrinker
> > > activity and generally no mmu notifiers going on :-)
> > >
> > > > I mean it is strictly forbidden to allocate any memory in the GPU reset
> > > > path.
> > > >
> > > > > I would however *strongly* recommend devcoredump support in other GPU
> > > > > drivers (i915's thing pre-dates devcoredump by a lot).. I've used it
> > > > > to debug and fix a couple of obscure issues that I was not able to
> > > > > reproduce myself.
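
(For drivers that want to pick up that recommendation: the devcoredump
hook itself is tiny. dev_coredumpv() below is the real API; the
surrounding helper is illustrative.)

#include <linux/device.h>
#include <linux/devcoredump.h>
#include <linux/string.h>
#include <linux/vmalloc.h>

/* Hand a finished snapshot to devcoredump.  The buffer must be
 * vmalloc()'d: dev_coredumpv() takes ownership and vfree()s it after
 * userspace reads /sys/class/devcoredump/devcdN/data (or the timeout
 * expires).
 */
static void foo_devcoredump(struct device *dev, const void *snapshot, size_t len)
{
        void *buf = vmalloc(len);

        if (!buf)
                return;

        memcpy(buf, snapshot, len);
        dev_coredumpv(dev, buf, len, GFP_KERNEL);
}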
> > > >
> > > > Yes, completely agree as well.
> > >
> > > +1
> > >
> > > Cheers, Daniel
> > > --
> > > Daniel Vetter
> > > Software Engineer, Intel Corporation
> > > http://blog.ffwll.ch
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch



