On Sat, May 19, 2012 at 2:56 PM, Daniel Vetter <daniel@xxxxxxxx> wrote:
> On Thu, May 17, 2012 at 03:41:30PM +0100, Dave Airlie wrote:
>> On Wed, May 16, 2012 at 10:22 PM, <j.glisse@xxxxxxxxx> wrote:
>> > From: Jerome Glisse <jglisse@xxxxxxxxxx>
>> >
>> > This tries to identify the faulty user command stream that caused the
>> > lockup. If it finds one, it creates a big blob that contains all the
>> > information needed to replay the faulty command stream.
>>
>> Can you state what exactly is going to end up in the dump?
>>
>> The ring? The IB? What about vertex buffers or index buffers? The
>> question is whether we should be concentrating on replay, or just on
>> dissecting the contents of the ring/IB for stupid things.
>
> We just dump the batchbuffer and ignore all indirect state objects.
> Additionally we dump the instruction rings, associated hw state and the
> current hw state of the crtcs (for pageflip/scanline-wait related hangs).
> It's mostly good for hangs where the kernel screwed things up royally.
> For repeatable hangs (even ones due to hw quirks) the mesa team has
> mostly switched to replaying apitraces, and also to replaying them
> through our internal hw simulator (hence the recent set of patches to
> add AUB trace dumping to libdrm-intel).
> -Daniel
> --
> Daniel Vetter
> Mail: daniel@xxxxxxxx
> Mobile: +41 (0)79 365 57 48

I have the feeling that for the lockups we get, we often can't tell which
app (GL, X or some X client) triggered the lockup. That's why I think
having a complete dump in the kernel is useful for us.

Cheers,
Jerome
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/dri-devel