With the current deferred-submission model, if a problem arises partway
through the insertion of instructions into the ringbuffer (e.g. because
one of the begin() calls finds there's not enough space), we avoid
sending the incomplete sequence to the hardware; but we currently have
no means of undoing the work done so far, which will lead to undefined
behaviour when the next batch is submitted (although TDR will probably
trigger a reset first and clean up the ring state).

A future idea is to move to an atomic-submission model, where all the
space required for a batch submission is reserved up front, so that in
the event of a failure partway through, the work can be abandoned
without side effects. This will be required for the forthcoming GPU
scheduler (specifically, for preemption).

To support this, we allow nested begin/advance pairs. Specifically, the
outermost pair defines the total space reservation; inner pairs can be
nested ad lib, but all inner reservations at any level must fit
entirely within the outermost one. Thus, this is permitted:

	begin(128)	- guarantees that up to 128 dwords can now be
			  emitted without waiting for more free space
	begin(6)
	advance
	begin(10)
	advance
	begin(8)
	advance
	... etc, as long as the total is no more than 128 dwords
	advance-and-submit

The execbuffer code will later be enhanced to use this approach. In the
meantime, the traditional single-level begin/advance mechanism remains
fully supported.

This commit changes only the begin/advance checking code, to permit
(but not require) nested begin/advance pairs.

Signed-off-by: Dave Gordon <david.s.gordon@xxxxxxxxx>
---
 drivers/gpu/drm/i915/intel_ringbuffer.h | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index a6660c1..68665c7 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -416,8 +416,17 @@ static inline void __intel_ringbuffer_begin(struct intel_ringbuffer *ringbuf,
 	WARN_ON(nbytes <= 0);
 
 	if (ringbuf->rsv_level++) {
-		/* begin() called twice or more without advance() */
-		WARN_ON(1);
+		/*
+		 * A nested reservation; check that it falls entirely
+		 * within the outer block. Don't adjust remaining space.
+		 */
+		WARN_ON(ringbuf->rsv_start < 0);
+		WARN_ON(ringbuf->rsv_start & 7);
+		WARN_ON(ringbuf->tail & 7);
+		WARN_ON(ringbuf->tail > ringbuf->effective_size);
+		WARN_ON(ringbuf->tail > ringbuf->rsv_start + ringbuf->rsv_size);
+		WARN_ON(ringbuf->tail + nbytes > ringbuf->effective_size);
+		WARN_ON(ringbuf->tail + nbytes > ringbuf->rsv_start + ringbuf->rsv_size);
 	} else {
 		/*
 		 * A new reservation; validate and record the start and
@@ -436,7 +445,7 @@ static inline void __intel_ringbuffer_begin(struct intel_ringbuffer *ringbuf,
 
 static inline void __intel_ringbuffer_check(struct intel_ringbuffer *ringbuf)
 {
-	WARN_ON(ringbuf->rsv_level-- != 1);
+	WARN_ON(ringbuf->rsv_level-- <= 0);
 	WARN_ON(ringbuf->rsv_start < 0 || ringbuf->rsv_size < 0);
 	WARN_ON(ringbuf->tail & 7);
 	WARN_ON(ringbuf->tail > ringbuf->rsv_start + ringbuf->rsv_size);
-- 
1.7.9.5
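For illustration, here is a minimal, self-contained userspace model of
the nested begin/advance bookkeeping described above. This is a sketch,
not i915 code: only the field names (rsv_level, rsv_start, rsv_size,
tail, effective_size) come from the patch; the struct name, the
model_begin()/model_advance() helpers, the use of assert() in place of
WARN_ON(), and passing the emitted byte count to model_advance() are
all assumptions made for the example.

#include <assert.h>
#include <stdio.h>

struct ring_model {		/* hypothetical stand-in for intel_ringbuffer */
	int rsv_level;		/* nesting depth of begin() calls */
	int rsv_start;		/* tail position at the outermost begin() */
	int rsv_size;		/* bytes reserved by the outermost begin() */
	int tail;		/* bytes emitted so far */
	int effective_size;	/* usable ring size in bytes */
};

static void model_begin(struct ring_model *r, int nbytes)
{
	assert(nbytes > 0);
	if (r->rsv_level++) {
		/* Nested: must fit entirely inside the outer reservation. */
		assert(r->tail + nbytes <= r->rsv_start + r->rsv_size);
		assert(r->tail + nbytes <= r->effective_size);
	} else {
		/* Outermost: record the reservation window. */
		r->rsv_start = r->tail;
		r->rsv_size = nbytes;
	}
}

static void model_advance(struct ring_model *r, int nbytes)
{
	r->tail += nbytes;		/* emitting commands moves tail */
	assert(!(r->tail & 7));		/* qword alignment, as checked above */
	assert(--r->rsv_level >= 0);	/* no advance() without a begin() */
	assert(r->tail <= r->rsv_start + r->rsv_size);
}

int main(void)
{
	struct ring_model r = { .effective_size = 4096 };

	model_begin(&r, 128 * 4);	/* outermost: reserve 128 dwords */
	model_begin(&r, 6 * 4);		/* nested: 6 dwords */
	model_advance(&r, 6 * 4);
	model_begin(&r, 10 * 4);	/* nested: 10 dwords */
	model_advance(&r, 10 * 4);
	model_advance(&r, 0);		/* close the outermost pair */
	printf("tail = %d bytes, level = %d\n", r.tail, r.rsv_level);
	return 0;
}

Running this prints "tail = 64 bytes, level = 0"; replacing the
begin(10 * 4) with, say, begin(200 * 4) trips the nested-reservation
assert, which is the same overflow condition the new WARN_ON()s in the
patch are there to flag.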