On Wed, Jun 13, 2012 at 10:05:39PM +0100, Chris Wilson wrote:
> On Wed, 13 Jun 2012 20:45:19 +0200, Daniel Vetter <daniel.vetter at ffwll.ch> wrote:
> > This is just the minimal patch to disable all this code so that we can
> > do decent amounts of QA before we rip it all out.
> >
> > The complicating thing is that we need to flush the gpu caches after
> > the batchbuffer is emitted. Which is past the point of no return where
> > execbuffer can't fail any more (otherwise we risk submitting the same
> > batch multiple times).
> >
> > Hence we need to add a flag to track whether any caches associated
> > with that ring are dirty. And emit the flush in add_request if that's
> > the case.
> >
> > Note that this has quite a few behaviour changes:
> > - Caches get flushed/invalidated unconditionally.
> > - Invalidation now happens after potential inter-ring sync.
> >
> > I've bantered around a bit with Chris on irc about whether this fixes
> > anything, and it might or might not. The only thing clear is that with
> > these changes it's much easier to reason about correctness.
> >
> > Also rip out a lone get_next_request_seqno in the execbuffer
> > retire_commands function. I've dug around and I couldn't figure out
> > why that is still there; with the outstanding lazy request stuff it
> > shouldn't be necessary.
> >
> > v2: Chris Wilson complained that I also invalidate the read caches
> > when flushing after a batchbuffer. Now optimized.
> >
> > v3: Added some comments to explain the new flushing behaviour.
> >
> > Cc: Eric Anholt <eric at anholt.net>
> > Cc: Chris Wilson <chris at chris-wilson.co.uk>
> > Signed-off-by: Daniel Vetter <daniel.vetter at ffwll.ch>
>
> This seems to work fine for 2D workloads, so
> Reviewed-by: Chris Wilson <chris at chris-wilson.co.uk>

Ok, after testing this again on my snb with the context stuff applied
I've queued this up for -next. Let's see how well it fares ;-)

Thanks for your review.
-Daniel
--
Daniel Vetter
Mail: daniel at ffwll.ch
Mobile: +41 (0)79 365 57 48