On Mon, Jul 07, 2014 at 09:05:50PM +0200, Daniel Vetter wrote:
> On Thu, Jun 26, 2014 at 06:24:01PM +0100, John.C.Harrison@xxxxxxxxx wrote:
> > From: John Harrison <John.C.Harrison@xxxxxxxxx>
> >
> > A major point of the GPU scheduler is that it re-orders batch buffers
> > after they have been submitted to the driver. Rather than attempting to
> > re-assign seqno values, it is much simpler to have each batch buffer
> > keep its initially assigned number and modify the rest of the driver to
> > cope with seqnos being returned out of order. In practice, very little
> > code actually needs updating to cope.
> >
> > One such place is the retire request handler. Rather than stopping as
> > soon as an uncompleted seqno is found, it must now keep iterating
> > through the requests in case later seqnos have completed. There is also
> > a problem with freeing a request before the move to inactive. Thus the
> > requests are now moved to a temporary list first, then the objects are
> > de-activated, and finally the requests on the temporary list are freed.
>
> I still hold that we should track requests, not seqno+ring pairs. At
> least the plan with Maarten's fencing patches is to embed the generic
> struct fence into our i915_gem_request structure. And struct fence will
> also be the kernel-internal representation of an Android native sync
> fence.
>
> So splattering ring+seqno->request/fence lookups all over the place isn't
> a good way forward. It's ok for bring-up, but for merging we should do
> that kind of large-scale refactoring upfront to reduce rebase churn.
> Oscar knows how this works.

Aside: Maarten's driver core patches to add a struct fence have been merged
for 3.17, so it is now possible to go directly to the right solution:
embed struct fence into i915_gem_request and start refcounting pointers
properly. This avoids intermediate hacks such as adding a struct kref
directly to i915_gem_request as an interim step.
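For reference, the two-phase retire described in the quoted commit message
(keep iterating past uncompleted seqnos, unlink completed requests onto a
temporary list, and only free them after de-activation) can be sketched as
a small userspace C model. All names here are illustrative, not the actual
i915 structures or functions:

```c
/* Toy userspace model of the two-phase retire: walk the whole request
 * list (requests may complete out of order under the scheduler), move
 * completed ones to a temporary list, and free them only after the
 * "deactivate" step.  Illustrative only; not i915 code. */
#include <stdbool.h>
#include <stdlib.h>

struct request {
    unsigned int seqno;
    bool completed;
    struct request *next;
};

/* Phase 1: unlink every completed request onto a temp list, continuing
 * past incomplete ones instead of stopping at the first gap. */
static struct request *collect_completed(struct request **head)
{
    struct request *temp = NULL, **pp = head;

    while (*pp) {
        struct request *req = *pp;
        if (req->completed) {
            *pp = req->next;   /* unlink from the main list */
            req->next = temp;  /* push onto the temp list */
            temp = req;
        } else {
            pp = &req->next;   /* keep iterating past the gap */
        }
    }
    return temp;
}

/* Phase 2: de-activate objects, then free the requests; returns the
 * number of requests freed. */
static unsigned int retire_temp_list(struct request *temp)
{
    unsigned int freed = 0;

    while (temp) {
        struct request *req = temp;
        temp = req->next;
        /* object de-activation would happen here, before the free */
        free(req);
        freed++;
    }
    return freed;
}
```

The point of the temp list is ordering: nothing is freed until every
completed request has been unlinked and the de-activation step has run,
so the free never races ahead of the move to inactive.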
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/intel-gfx