On 10/12/14 17:15, Daniel Vetter wrote:
> On Wed, Dec 10, 2014 at 04:33:14PM +0000, Dave Gordon wrote:
>> On 10/12/14 10:58, Daniel Vetter wrote:
>>> On Tue, Dec 09, 2014 at 12:59:11PM +0000, John.C.Harrison@xxxxxxxxx wrote:
>>>> From: John Harrison <John.C.Harrison@xxxxxxxxx>
>>>>
>>>> The scheduler decouples the submission of batch buffers to the driver
>>>> from their submission to the hardware. This basically means splitting
>>>> the execbuffer() function in half. This change rearranges some code
>>>> ready for the split to occur.
>>>
>>> Now there's the curious question: where will the split into top/bottom
>>> halves be? Without that I have no idea whether it makes sense and so
>>> can't review this patch. At least if the goal is to really prep for the
>>> scheduler and not just move a few lines around ;-)
>>> -Daniel
>>>
>> [snip]
>>>>
>>>> +	i915_gem_execbuffer_move_to_active(vmas, ring);
>>>> +
>>>> +	/* To be split into two functions here... */
>>>> +
>>>> +	/* Unconditionally invalidate gpu caches and ensure that we do flush
>>>> +	 * any residual writes from the previous batch.
>>>> +	 */
>>>> +	ret = logical_ring_invalidate_all_caches(ringbuf);
>>
>> It'll be where the marker comment is above. Ahead of that point is stuff
>> to do with setting up software state; after that we're talking to h/w.
>> When the scheduler code goes in, it decouples the two by interposing at
>> this point. Then batches go into it with s/w state set up, but don't get
>> to talk to the h/w until they're selected for execution, possibly in a
>> different order.
>
> Oops, I guess I need to see a doctor to check my eyesight ;-)
>
> Two comments about code that's still in the bottom half:
> - There's lots of input validation code still below that cutoff point
>   which needs to be moved above to still be able to return -EINVAL. Test
>   coverage is the critical part here, but I think we've closed most of
>   our gaps. I guess a future patch will address this?
Yes, all validation will end up before the batch gets pushed into the
scheduler queue. It had already been shuffled there in an earlier version,
but some of it then got relocated into less convenient places during the
legacy-vs-execlist split. So for now the execbuffer path runs some common
code (including early validation), then splits by submission mechanism
(with additional validation that may someday be reconverged into the
common section), then returns to common code, and /then/ the batches get
passed to the scheduler, after which there are no more -EINVAL cases
(though there are other error cases, such as -ENOMEM). Then, as and when
batches are selected for execution, they're passed to the backend
submission code for the specific mechanism, which of course won't contain
any further validation code either.

> - retire_commands and so add_request are also in the cutoff. For shrinker
>   interactions to work seamlessly we need to have both the objects on the
>   active list and the request in the lists. So I expect more untangling
>   there to separate the request emission to rings from the bookkeeping
>   parts in add_request.
>
> Also move_to_active will fall apart once we start to reorder requests I
> think. Need to think about this case more, but we definitely need some
> testcases with dependencies and reordering by the scheduler.
> -Daniel

I'll leave the rest to John to comment on, as he's still rebasing this
section of the scheduler on top of all the other changes that have been
going on ...

.Dave.

_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/intel-gfx