Re: [PATCH 1/4] drm/i915: Unify execlist and legacy request life-cycles

On Fri, Oct 09, 2015 at 06:23:50PM +0100, Chris Wilson wrote:
> On Fri, Oct 09, 2015 at 07:18:21PM +0200, Daniel Vetter wrote:
> > On Fri, Oct 09, 2015 at 10:45:35AM +0100, Chris Wilson wrote:
> > > On Fri, Oct 09, 2015 at 11:15:08AM +0200, Daniel Vetter wrote:
> > > > My idea was to create a new request for 3., which gets signalled by the
> > > > scheduler in intel_lrc_irq_handler. The plan was that we'd only create
> > > > these when a ctx switch might occur, to avoid the overhead, but I guess
> > > > just outright delaying all requests a notch if needed might work too. But
> > > > I'm really not sure about the implications of that (i.e. does the hardware
> > > > really unload the ctx if it's idle?), and whether that would still fly
> > > > with the scheduler.
> > > >
> > > > But figuring this one out seems to be the cornerstone of this reorg.
> > > > Without it we can't just throw contexts onto the active list.
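
A rough sketch of that idea, purely illustrative: a separate request that
is completed from the context-switch interrupt, so the context stays busy
until the hardware has actually saved it out. switch_request_create(),
switch_request_complete() and the ctx_switch_fence field are assumed names
for this sketch, not real i915 API:

	/* Queued alongside a submission that may trigger a ctx switch. */
	static void queue_ctx_switch_request(struct intel_engine_cs *engine,
					     struct intel_context *ctx)
	{
		engine->ctx_switch_fence = switch_request_create(engine, ctx);
	}

	/* From intel_lrc_irq_handler(), on a context-switched status event. */
	static void complete_ctx_switch_request(struct intel_engine_cs *engine)
	{
		if (engine->ctx_switch_fence) {
			switch_request_complete(engine->ctx_switch_fence);
			engine->ctx_switch_fence = NULL;
		}
	}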
> > > 
> > > (Let me see if I understand it correctly)
> > > 
> > > Basically the problem is that we can't trust the context object to be
> > > synchronized until after the status interrupt. The way we handled that
> > > for legacy is to track the currently bound context and keep the
> > > vma->pin_count asserted until the request containing the switch away is
> > > retired. Doing the same for execlists would trivially fix the issue and,
> > > if done smartly, allows us to share more code (been there, done that).
> > > 
> > > That satisfies me for keeping requests as a basic fence in the GPU
> > > timeline and should keep everyone happy that the context can't vanish
> > > until after it is complete. The only caveat is that we cannot evict the
> > > most recent context. For legacy, we do a switch back to the always
> > > pinned default context. For execlists we don't, but it still means we
> > > should only have one context which cannot be evicted (like legacy). But
> > > it does leave us with the issue that i915_gpu_idle() returns early and
> > > i915_gem_context_fini() must keep the explicit gpu reset to be
> > > absolutely sure that the pending context writes are completed before the
> > > final context is unbound.
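
A minimal sketch of the pin-until-switch-away scheme described above, as
it might look for execlists (illustrative only: intel_context_pin(),
intel_context_unpin() and the previous_context field are assumed names for
this sketch, not existing i915 API):

	static void engine_switch_context(struct intel_engine_cs *engine,
					  struct drm_i915_gem_request *req)
	{
		struct intel_context *from = engine->last_context;

		if (from == req->ctx)
			return;

		/* Pin the incoming context before the hardware can touch it. */
		intel_context_pin(req->ctx);

		/*
		 * The hardware may keep writing back 'from' until the context
		 * switch for this request has actually happened, so defer the
		 * unpin to the retirement of this request.
		 */
		req->previous_context = from;
		engine->last_context = req->ctx;
	}

	/* Called once 'req' is known to have completed on the GPU. */
	static void engine_retire_request(struct drm_i915_gem_request *req)
	{
		if (req->previous_context)
			intel_context_unpin(req->previous_context);
	}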
> > 
> > Yes, and that's what I originally had in mind. Meanwhile the scheduler
> > (will) happen, and that means we won't have FIFO ordering. So when we
> > switch contexts (as opposed to just adding more to the ringbuffer of
> > the current one) we won't have any idea which context will be the next
> > one, which also means we don't know which request to pick to retire the
> > old context. Hence why I think we need to do better.
> 
> But the scheduler does - it is also in charge of making sure the
> retirement queue is in order. The essence is that we only actually pin
> engine->last_context, which is chosen as we submit stuff to the hw.

Well, I'm not sure how much it will reorder, but I'd expect it wants to
reorder things pretty freely. And as soon as it reorders contexts (which
of course can't depend on one another) the legacy hw ctx tracking won't
work.
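
To illustrate the concern, building on the sketch above with the same
assumed previous_context field: under FIFO ordering the switch-away
bookkeeping can happen when a request is constructed, but with a scheduler
it has to move to the point where the submission order is finally fixed,
e.g. just before the ELSP write:

	/*
	 * Illustrative only: with a scheduler, record previous_context
	 * where the actual hw submission order is decided, not at
	 * request creation.
	 */
	static void execlists_submit_request(struct intel_engine_cs *engine,
					     struct drm_i915_gem_request *req)
	{
		if (engine->last_context != req->ctx) {
			req->previous_context = engine->last_context;
			engine->last_context = req->ctx;
		}

		/* ... go on to write the ELSP ports ... */
	}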

I think at least ...
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/intel-gfx



