[PATCH 4/4] drm/i915: Make next_seqno debugfs entry to use i915_gem_seqno_init

Chris Wilson <chris at chris-wilson.co.uk> writes:

> On Tue, 18 Dec 2012 16:32:21 +0200, Mika Kuoppala <mika.kuoppala at linux.intel.com> wrote:
>> Chris Wilson <chris at chris-wilson.co.uk> writes:
>> 
>> > On Mon, 17 Dec 2012 18:44:26 +0200, Mika Kuoppala <mika.kuoppala at linux.intel.com> wrote:
>> >> The hardware status page needs to have a proper seqno set,
>> >> as our initial seqno can be arbitrary. If the initial seqno is close
>> >> to the wrap boundary on init and i915_seqno_passed() (31bit space)
>> >> refers to a hw status page which still contains zero, an erroneous
>> >> result will be returned.
>> >> 
>> >> References: https://bugs.freedesktop.org/show_bug.cgi?id=58230
>> >> Signed-off-by: Mika Kuoppala <mika.kuoppala at intel.com>
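
To spell out the failure mode described above: i915_seqno_passed() does a
wrap-safe signed comparison, roughly along these lines (a sketch of the
idea only, not the exact kernel source; the name seqno_passed is just for
illustration):

#include <linux/types.h>

/* seq1 counts as having passed seq2 as long as the two are within
 * 2^31 of each other -- the "31bit space" mentioned above. */
static inline bool seqno_passed(u32 seq1, u32 seq2)
{
	return (s32)(seq1 - seq2) >= 0;
}

So with an arbitrary initial seqno just below the wrap boundary, say
0xfffff000, and a status page that still reads 0, seqno_passed(0,
0xfffff001) is already true ((s32)(0 - 0xfffff001) == 0xfff >= 0), and a
wait on the first request completes immediately even though nothing has
executed.
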
>> >
>> > Looks good barring the last chunk.
>> >
>> >
>> >> @@ -1435,8 +1439,18 @@ int intel_ring_handle_seqno_wrap(struct intel_ring_buffer *ring)
>> >>  	 * post-wrap semaphore waits completing immediately. Clear them. */
>> >>  	update_mboxes(ring, ring->signal_mbox[0]);
>> >>  	update_mboxes(ring, ring->signal_mbox[1]);
>> >> +	intel_ring_emit(ring, MI_STORE_DWORD_INDEX);
>> >> +	intel_ring_emit(ring, I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
>> >> +	intel_ring_emit(ring, seqno);
>> >> +	intel_ring_emit(ring, MI_USER_INTERRUPT);
>> >>  	intel_ring_advance(ring);
>> >>  
>> >> +	/* Wait until mboxes have actually cleared before pushing
>> >> +	 * anything to the rings */
>> >> +	ret = ring_wait_for_space(ring, ring->size - 8);
>> >> +	if (ret)
>> >> +		return ret;
>> >
>> > I don't think this is well justified. Can you clearly explain the
>> > situation where it is required?
>> 
>> As ring_add_request can itself cause a seqno wrap via
>> intel_ring_begin, and as it will update the *other* rings'
>> mboxes, we need to wait until all the rings have proceeded with
>> clearing the mboxes.
>
> What? What ring_add_request? The entire GPU is idle at this point.

Yes, my explanation is bogus as intel_ring_begin can't wrap in
gen6_add_request. The olr is already set before going in there.
  
>> > How about if we assert that the ring is idle, and just blitz the
>> > registers and hws rather than go through the ring?
>> 
>> I have tried this but failed. I think the problem is ring_add_request.
>> It will still inject seqnos before the wrap boundary if intel_ring_begin
>> itself just wrapped. This is why we need to clear the mboxes and set
>> the hws page through the rings.
>
> There are no outstanding requests at this point. The purpose of the loop
> over all the rings calling idle in i915_gem_seqno_handle_wrap() is to
> make sure that not only is the GPU entirely idle, but all ring->olr are
> reset to 0 before we then reset the hw state on each ring.
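
Right, so the sequence you describe would look roughly like the following
(a sketch only; the wrapper name handle_seqno_wrap is illustrative, and it
assumes i915_gpu_idle(), i915_gem_retire_requests() and for_each_ring() as
used elsewhere in the driver, plus the intel_ring_handle_seqno_wrap() from
the hunk quoted above):

#include "i915_drv.h"

static int handle_seqno_wrap(struct drm_device *dev)
{
	struct drm_i915_private *dev_priv = dev->dev_private;
	struct intel_ring_buffer *ring;
	int ret, i;

	/* Idle the whole GPU so that no request is in flight and every
	 * ring's outstanding lazy request has been flushed and retired. */
	ret = i915_gpu_idle(dev);
	if (ret)
		return ret;
	i915_gem_retire_requests(dev);

	/* Only now is it safe to reset the per-ring hw state. */
	for_each_ring(ring, dev_priv, i) {
		ret = intel_ring_handle_seqno_wrap(ring);
		if (ret)
			return ret;
	}

	return 0;
}
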

My initial attempt to write the mboxes and hws directly had an error in
it: I wrote the seqno to the mboxes as well. The correct approach is to
set the mboxes to zero and set the hws page value to the seqno. I will
retry this approach to see whether the syncs still fail.
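
For clarity, the direct reset I have in mind would be along these lines
(just a sketch; it assumes the mbox register offsets stored in
ring->signal_mbox[] can be poked directly with I915_WRITE while the GPU is
idle, and it uses intel_write_status_page() to seed the status page):

#include "i915_drv.h"

static void reset_seqno_state(struct intel_ring_buffer *ring, u32 seqno)
{
	drm_i915_private_t *dev_priv = ring->dev->dev_private;

	/* Zero the semaphore mailboxes so that stale pre-wrap values
	 * cannot satisfy post-wrap semaphore waits. */
	I915_WRITE(ring->signal_mbox[0], 0);
	I915_WRITE(ring->signal_mbox[1], 0);

	/* Seed the hardware status page with the new seqno so that
	 * i915_seqno_passed() compares against a sane value instead of 0. */
	intel_write_status_page(ring, I915_GEM_HWS_INDEX, seqno);
}
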

>> I have a patch which allocates seqnos explicitly early in
>> i915_gem_do_execbuffer and gets rid of outstanding_lazy_request and the
>> related i915_gem_check_olr completely, which makes the wrap handling
>> much simpler, as we don't need to be careful in ring_sync nor in
>> ring_add_request since nothing can cross the wrap boundary any longer.
>> But I got the impression that you don't like this approach.
>
> No. Because requests need to be associated with anything that touches
> the ring, which may itself not be associated with an execbuffer. The
> desire is to use the rings for more pipelining of state changes, not
> less. The request is our most basic bookkeeping element of the ring:
> requests track how much of the ring we have used and provide a unified
> wait mechanism.

Thanks for explaining the reasoning.

> -Chris
>
> -- 
> Chris Wilson, Intel Open Source Technology Centre

