Re: [PATCH 2/2] drm/i915: Recover all available ringbuffer space following reset

Chris Wilson <chris@xxxxxxxxxxxxxxxxxx> writes:

> On Fri, Oct 23, 2015 at 02:07:35PM +0300, Mika Kuoppala wrote:
>> Chris Wilson <chris@xxxxxxxxxxxxxxxxxx> writes:
>> 
>> > Having flushed all requests from all queues, we know that all
>> > ringbuffers must now be empty. However, since we do not reclaim
>> > all space when retiring the request (to prevent HEADs colliding
>> > with rapid ringbuffer wraparound) the amount of available space
>> > on each ringbuffer upon reset is less than when we start. Do one
>> > more pass over all the ringbuffers to reset the available space.
>> >
>> > Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
>> > Cc: Arun Siluvery <arun.siluvery@xxxxxxxxxxxxxxx>
>> > Cc: Mika Kuoppala <mika.kuoppala@xxxxxxxxx>
>> > Cc: Dave Gordon <david.s.gordon@xxxxxxxxx>
>> > ---
>> >  drivers/gpu/drm/i915/i915_gem.c         | 14 ++++++++++++++
>> >  drivers/gpu/drm/i915/intel_lrc.c        |  1 +
>> >  drivers/gpu/drm/i915/intel_ringbuffer.c | 13 ++++++++++---
>> >  drivers/gpu/drm/i915/intel_ringbuffer.h |  2 ++
>> >  4 files changed, 27 insertions(+), 3 deletions(-)
>> >
>> > diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
>> > index 41263cd4170c..3a42c350fec9 100644
>> > --- a/drivers/gpu/drm/i915/i915_gem.c
>> > +++ b/drivers/gpu/drm/i915/i915_gem.c
>> > @@ -2738,6 +2738,8 @@ static void i915_gem_reset_ring_status(struct drm_i915_private *dev_priv,
>> >  static void i915_gem_reset_ring_cleanup(struct drm_i915_private *dev_priv,
>> >  					struct intel_engine_cs *ring)
>> >  {
>> > +	struct intel_ringbuffer *buffer;
>> > +
>> >  	while (!list_empty(&ring->active_list)) {
>> >  		struct drm_i915_gem_object *obj;
>> >  
>> > @@ -2783,6 +2785,18 @@ static void i915_gem_reset_ring_cleanup(struct drm_i915_private *dev_priv,
>> >  
>> >  		i915_gem_request_retire(request);
>> >  	}
>> > +
>> > +	/* Having flushed all requests from all queues, we know that all
>> > +	 * ringbuffers must now be empty. However, since we do not reclaim
>> > +	 * all space when retiring the request (to prevent HEADs colliding
>> > +	 * with rapid ringbuffer wraparound) the amount of available space
>> > +	 * upon reset is less than when we start. Do one more pass over
>> > +	 * all the ringbuffers to reset last_retired_head.
>> > +	 */
>> > +	list_for_each_entry(buffer, &ring->buffers, link) {
>> > +		buffer->last_retired_head = buffer->tail;
>> > +		intel_ring_update_space(buffer);
>> > +	}
>> 
>> This is all in vain, as i915_gem_context_reset() ->
>> intel_lr_context_reset() still sets head and tail to zero.
>> 
>> So your last_retired_head will still dangle in the pre-reset
>> world while the rest of the ringbuf fields are set to the
>> post-reset world.
>
> It's only set that way so that we compute the full ring space as
> available, and then we set last_retired_head back to -1. So what's
> dangling?
> -Chris

My understanding of the ringbuffer code was dangling. It is all
clear now: we set head = tail and thus reset the ring space to full.
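
To spell out what tripped me up, here is a minimal sketch of the
space accounting (a loose paraphrase of intel_ring_update_space()
and __intel_ring_space() from intel_ringbuffer.c; the struct and
function names below are simplified stand-ins, not the in-tree
ones):

	/* Simplified stand-in for the driver's struct intel_ringbuffer. */
	struct ringbuf {
		int head;		/* next offset the GPU reads from */
		int tail;		/* next offset the CPU writes to */
		int size;		/* total ring size in bytes */
		int space;		/* cached free space in bytes */
		int last_retired_head;	/* HEAD at last retirement, or -1 */
	};

	/*
	 * Free bytes between tail and head, wrapping around the ring.
	 * The in-tree version also subtracts a small reserve so that
	 * head and tail can never collide.
	 */
	static int ring_space(int head, int tail, int size)
	{
		int space = head - tail;

		if (space <= 0)
			space += size;
		return space;
	}

	static void ring_update_space(struct ringbuf *rb)
	{
		if (rb->last_retired_head != -1) {
			rb->head = rb->last_retired_head;
			rb->last_retired_head = -1;
		}
		rb->space = ring_space(rb->head, rb->tail, rb->size);
	}

So once the post-reset pass sets last_retired_head = tail, head is
pulled up to tail, the space computation wraps around to the full
ring size, and last_retired_head ends up back at -1, as you say
above.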

References: https://bugs.freedesktop.org/show_bug.cgi?id=91634

should be added, as this very likely fixes that one.

Reviewed-by: Mika Kuoppala <mika.kuoppala@xxxxxxxxx>

>
> -- 
> Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/intel-gfx