Re: [RFC v2] drm/i915: Android native sync support

On Wed, Feb 25, 2015 at 12:46:31PM -0800, Jesse Barnes wrote:
> On 01/28/2015 02:07 AM, Chris Wilson wrote:
> > On Wed, Jan 28, 2015 at 10:50:18AM +0100, Daniel Vetter wrote:
> >> On Wed, Jan 28, 2015 at 09:23:46AM +0000, Chris Wilson wrote:
> >>> On Wed, Jan 28, 2015 at 10:22:15AM +0100, Daniel Vetter wrote:
> >>>> On Mon, Jan 26, 2015 at 09:08:03AM +0000, Chris Wilson wrote:
> >>>>> On Mon, Jan 26, 2015 at 08:52:39AM +0100, Daniel Vetter wrote:
> >>>>>> I think the problem will be platforms that want full explicit fence (like
> >>>>>> android) but allow delayed creation of the fence fd from a gl sync object
> >>>>>> (like the android egl extension allows).
> >>>>>>
> >>>>>> I'm not sure yet how best to expose that, since just creating a
> >>>>>> fence from the implicit request attached to the batch might upset the
> >>>>>> interface purists with the mix of implicit and explicit fencing ;-) Hence
> >>>>>> I think for now we should just do the eager fd creation at execbuf
> >>>>>> until ppl scream (well maybe not merge this patch until ppl scream ...).
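
[For reference, the android egl extension referred to here is
EGL_ANDROID_native_fence_sync. A rough sketch of the delayed fence-fd
creation it allows -- extension entry points are shown as direct calls
for brevity; real code resolves them via eglGetProcAddress(), and
callers typically glFlush() first so the fence is actually submitted:]

/* Illustration only: the sync object is created against the GL command
 * stream first, and a native fence fd is only materialised when the
 * client asks for one. Error handling trimmed for brevity. */
#define EGL_EGLEXT_PROTOTYPES 1
#include <EGL/egl.h>
#include <EGL/eglext.h>

static int gl_sync_to_fence_fd(EGLDisplay dpy)
{
        EGLSyncKHR sync;
        int fd;

        /* Fence goes into the command stream; no fd exists yet. */
        sync = eglCreateSyncKHR(dpy, EGL_SYNC_NATIVE_FENCE_ANDROID, NULL);
        if (sync == EGL_NO_SYNC_KHR)
                return -1;

        /* Delayed creation: only now is a native fence fd produced. */
        fd = eglDupNativeFenceFDANDROID(dpy, sync);

        eglDestroySyncKHR(dpy, sync);
        return fd;      /* EGL_NO_NATIVE_FENCE_FD_ANDROID (-1) on failure */
}
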
> >>>>>
> >>>>> Everything we do is buffer centric. Even in the future with random bits
> >>>>> of memory, we will still use buffers behind the scenes. From an
> >>>>> interface perspective, it is clearer to me if we say "give me a fence for
> >>>>> this buffer". Exactly the same way as we say "is this buffer busy" or
> >>>>> "wait on this buffer". The change is that we now hand back an fd to slot
> >>>>> into an event loop. That, to me, is a much smaller evolutionary step of
> >>>>> the existing API, and yet more versatile than just attaching one to the
> >>>>> execbuf.
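
[For concreteness, the existing buffer-centric queries being used as the
analogy here are DRM_IOCTL_I915_GEM_BUSY ("is this buffer busy") and
DRM_IOCTL_I915_GEM_WAIT ("wait on this buffer"). A minimal sketch of that
shape follows, built against libdrm; the fence-fd variant at the end is
shown only as a hypothetical comment, since no such ioctl exists in the
i915 uAPI:]

#include <stdint.h>
#include <string.h>
#include <xf86drm.h>    /* drmIoctl(), libdrm */
#include <i915_drm.h>   /* i915 uAPI structs, libdrm headers */

/* "is this buffer busy" -- existing uAPI */
static int bo_busy(int drm_fd, uint32_t handle)
{
        struct drm_i915_gem_busy busy;

        memset(&busy, 0, sizeof(busy));
        busy.handle = handle;
        if (drmIoctl(drm_fd, DRM_IOCTL_I915_GEM_BUSY, &busy))
                return -1;
        return busy.busy != 0;
}

/* "wait on this buffer" -- existing uAPI */
static int bo_wait(int drm_fd, uint32_t handle, int64_t timeout_ns)
{
        struct drm_i915_gem_wait wait;

        memset(&wait, 0, sizeof(wait));
        wait.bo_handle = handle;
        wait.timeout_ns = timeout_ns;
        return drmIoctl(drm_fd, DRM_IOCTL_I915_GEM_WAIT, &wait);
}

/* "give me a fence fd for this buffer" -- hypothetical, only to show the
 * shape of the proposal; no such ioctl or struct exists:
 *
 *   struct drm_i915_gem_bo_fence arg = { .handle = handle };
 *   drmIoctl(drm_fd, DRM_IOCTL_I915_GEM_BO_FENCE, &arg);
 *   ...then poll()/select() on arg.fd in the event loop...
 */
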
> >>>>
> >>>> The problem is that big parts of the world do not subscribe to that buffer
> >>>> centric view of gfx. Imo since those parts will be the primary users of
> >>>> this interface we should try to fit their ideas as much as feasible. Later
> >>>> on (if we need it) we can add some glue to tie in the buffer-centric
> >>>> implicit model with the explicit model.
> >>>
> >>> They won't be using execbuffer either. So what's your point?
> >>
> >> Android very much will use execbuffer. And even the in-flight buffered
> >> svm stuff will keep on using execbuf (just without any relocs).
> > 
> > So still buffer-centric, and it would benefit from the more general and
> > more explicit fencing rather than just execbuf. If you look at the
> > throttling in mesa, you can already see a place that would rather create
> > a fence on a buffer than rely on its very approximate execbuf tracking.
> >  
> >> And once we indeed can make the case (plus have the hw) for direct
> >> userspace submission I think the kernel shouldn't be in the business of
> >> creating fence objects: the ring will be fully under the control of
> >> userspace, and that's the only place we could insert a seqno. So
> >> again the same trust issues.
> > 
> > No buffers, no requests, nothing for the kernel to do. No impact on
> > designing an API that is useful today.
> 
> If Mesa really wants this, we should investigate intra-batch fences
> again, both with and without buffer tracking (because afaik Mesa wants
> bufferless SVM too).
> 
> But you said you think an fd is too heavyweight even?  What do you mean
> by that?  Or were you just preferring re-use of an existing object (i.e.
> the buffer) that we already track & pass around?

Mostly it is the burn from X using select(): we see fd handling very
high in the profiles when all X has to do is flip.

However, we can and do have up to several thousand batches in flight,
and many more pending retirement from userspace. That makes replacing
the signalling on the buffer with individual fences a scary prospect,
both because of the scaling issue and because of the risk of running
into resource limits (i.e. back to the reason why our bo are currently
fd-less).
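
[For a concrete picture of that scaling concern: with explicit per-batch
fences, each outstanding batch contributes its own fd to the event loop,
along the lines of the sketch below (a fence/sync fd becomes readable once
it signals), so several thousand batches in flight means several thousand
pollable fds plus the kernel objects behind them:]

/* Illustration only: waiting for any of N per-batch fence fds with
 * poll(2).  The pollfd array -- and the process fd table behind it --
 * grows with the number of batches still in flight. */
#include <poll.h>
#include <stdlib.h>

static int wait_any_fence(const int *fence_fds, int n, int timeout_ms)
{
        struct pollfd *pfds;
        int i, ret;

        pfds = calloc(n, sizeof(*pfds));
        if (!pfds)
                return -1;

        for (i = 0; i < n; i++) {
                pfds[i].fd = fence_fds[i];
                pfds[i].events = POLLIN;   /* readable == fence signalled */
        }

        ret = poll(pfds, n, timeout_ms);
        free(pfds);
        return ret;
}
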
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/intel-gfx




