Quoting Antonio Argenziano (2018-08-16 00:59:30)
>
> On 15/08/18 10:24, Chris Wilson wrote:
> > Quoting Antonio Argenziano (2018-08-15 18:20:10)
> >>
> >> On 15/08/18 03:26, Chris Wilson wrote:
> >>> Quoting Antonio Argenziano (2018-08-15 00:50:43)
> >>>>
> >>>> On 10/08/18 04:01, Chris Wilson wrote:
> >>>>> This exercises a special case that may be of interest, waiting for a
> >>>>> context that may be preempted in order to reduce the wait.
> >>>>>
> >>>>> Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> >>>>> ---
> >>>>> +	cycles = 0;
> >>>>> +	elapsed = 0;
> >>>>> +	start = gettime();
> >>>>> +	do {
> >>>>> +		do {
> >>>>> +			double this;
> >>>>> +
> >>>>> +			gem_execbuf(fd, &contexts[0].execbuf);
> >>>>> +			gem_execbuf(fd, &contexts[1].execbuf);
> >>>>
> >>>> I'm not sure where the preemption, mentioned in the commit message,
> >>>> comes in.
> >>>
> >>> Internally. I've suggested that we reorder equivalent contexts in order
> >>> to satisfy client waits earlier. So having created two independent
> >>> request queues, userspace should be oblivious to the execution order.
> >>
> >> But there isn't an assert because you don't want that to be part of the
> >> contract between the driver and userspace, is that correct?
> >
> > Correct. Userspace hasn't specified an order between the two contexts, so
> > it can't actually assert that they execute in a particular order. We are
> > then free to do whatever we like, but that also means no assertion. The
> > figures just look pretty, and of course we have to check that nothing
> > actually breaks.
>
> The last question I have is about the batches: why not choose a spin
> batch, to make sure that context[0] (and [1]) hasn't completed by the
> time it starts waiting?

It would be exercising fewer possibilities, not that it would be any less
valid. (If I can't issue a pair of trivial execbufs faster than the gpu can
execute a no-op from idle, shoot me. Each execbuf will take ~500ns, while
the gpu will take 20-50us [bdw-kbl] to execute the first batch from idle.)
-Chris
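
For illustration, a minimal sketch of the spin-batch variant Antonio
suggests, built on the igt_dummyload helpers (igt_spin_batch_new/_end/_free)
and the __gem_wait wrapper; the exact helper signatures and the
wait_on_busy_contexts() function are assumptions for the sketch, not part of
the patch under review:

#include <errno.h>
#include "igt.h"

/* Keep both contexts busy with a spinner each, prove that the first
 * context is still executing when the wait begins, then release them. */
static void wait_on_busy_contexts(int fd, unsigned int engine)
{
	uint32_t ctx[2] = {
		gem_context_create(fd),
		gem_context_create(fd),
	};
	igt_spin_t *spin[2];
	int64_t zero = 0;

	spin[0] = igt_spin_batch_new(fd, ctx[0], engine, 0);
	spin[1] = igt_spin_batch_new(fd, ctx[1], engine, 0);

	/* A zero-timeout wait on a busy batch must report -ETIME, so the
	 * real wait below provably starts against an incomplete context. */
	igt_assert_eq(__gem_wait(fd, spin[0]->handle, &zero), -ETIME);

	/* Terminate the spinners and wait for completion, as the test
	 * would, now certain the wait began on a busy context. */
	igt_spin_batch_end(spin[0]);
	gem_sync(fd, spin[0]->handle);
	igt_spin_batch_end(spin[1]);
	gem_sync(fd, spin[1]->handle);

	igt_spin_batch_free(fd, spin[0]);
	igt_spin_batch_free(fd, spin[1]);
	gem_context_destroy(fd, ctx[0]);
	gem_context_destroy(fd, ctx[1]);
}

With the spinners in place the wait provably begins against incomplete
contexts, but as Chris notes above, the execbuf/no-op timing already
guarantees that in practice, so the spinner only narrows the cases exercised.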