On Sat, 2016-11-05 at 00:32 +0200, Imre Deak wrote:
> On Fri, 2016-11-04 at 21:01 +0000, Chris Wilson wrote:
> > On Fri, Nov 04, 2016 at 10:33:24PM +0200, Imre Deak wrote:
> > > On Thu, 2016-11-03 at 21:14 +0000, Chris Wilson wrote:
> > > > Where is that guaranteed? I thought we only serialised with the
> > > > pm interrupts. Remember this happens before rpm suspend, since
> > > > gem_idle_work_handler is responsible for dropping the GPU
> > > > wakelock.
> > >
> > > I meant that the 100msec after the last request signals
> > > completion and this handler is scheduled is normally enough for
> > > the context complete interrupt to get delivered. But yeah, it's
> > > not a guarantee.
> >
> > If only it was that deterministic! The idle_worker was scheduled
> > 100ms after some retire_worker, just not necessarily the most
> > recent. So it could be running exactly as active_requests -> 0 and
> > so before the context-complete interrupt.
>
> Right, but we don't poll in that case, so there is no overhead.

Ok, there is a small window in the idle_worker, after the unlocked
poll and before taking the lock, where a new request could be
submitted and retired. In that case active_requests could be 0 after
taking the lock and we'd have the poll overhead there. We could detect
this by checking whether a new idle_worker is already pending and bail
out if so; we shouldn't idle the GPU in that case anyway. A rough
sketch follows at the end of this mail.

> > Anyway, it was a good find!
> > -Chris
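
Here is a minimal, untested sketch of that check, modelled on the
current i915_gem_idle_work_handler() shape; the hangcheck rearming,
the requeue on trylock failure and the actual idling work are elided,
and the exact field names are whatever the driver uses today:

static void
i915_gem_idle_work_handler(struct work_struct *work)
{
	struct drm_i915_private *dev_priv =
		container_of(work, typeof(*dev_priv), gt.idle_work.work);
	struct drm_device *dev = &dev_priv->drm;

	/* Unlocked poll: cheap early-out while requests are in flight. */
	if (READ_ONCE(dev_priv->gt.active_requests))
		return;

	if (!mutex_trylock(&dev->struct_mutex))
		return;

	/*
	 * A new request may have been submitted and retired between the
	 * unlocked poll above and taking struct_mutex; its retirement
	 * queued a fresh idle_worker. Bail out and let that instance
	 * idle the GPU, avoiding the poll overhead in this one.
	 */
	if (work_pending(work))
		goto out_unlock;

	if (dev_priv->gt.active_requests)
		goto out_unlock;

	/*
	 * ... wait for the context complete events, park the engines
	 * and drop the GPU wakelock ...
	 */

out_unlock:
	mutex_unlock(&dev->struct_mutex);
}

Since work_pending() is set again the moment the retirement requeues
the (delayed) work, checking it under the lock closes exactly the
window described above without adding any cost to the common path.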