On ti, 2016-09-13 at 08:38 +0100, Chris Wilson wrote:
> On Mon, Sep 12, 2016 at 11:57:54PM +0300, Imre Deak wrote:
> > On Mon, 2016-09-12 at 21:04 +0100, Chris Wilson wrote:
> > > On Mon, Sep 12, 2016 at 05:47:57PM +0300, Imre Deak wrote:
> > > > Even in an otherwise quiescent system there may be user/kernel
> > > > threads independent of the test that add enough latency to make
> > > > timing sensitive subtests fail. Boost the priority of such
> > > > subtests to avoid these failures.
> > > >
> > > > This got rid of sporadic failures in basic-cursor-vs-flip-legacy
> > > > and basic-cursor-vs-flip-varying-size with a 'missed 1 frame'
> > > > error message on APL and BSW.
> > > >
> > > > v2:
> > > > - Boost the priority in flip_vs_cursor_crc() too.
> > > >
> > > > CC: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> > > > CC: Maarten Lankhorst <maarten.lankhorst@xxxxxxxxxxxxxxx>
> > > > Signed-off-by: Imre Deak <imre.deak@xxxxxxxxx>
> > >
> > > But we shouldn't need to. The basic test is:
> > >
> > > align to vblank
> > > request non-blocking flip
> > > update cursor
> >
> > In these subtests we run these cursor updates in a loop.
>
> Oh, those. Ok, for the purpose of bat we want:
>
> align to vblank
> update cursor
> request non-blocking flip
> check vblank == vblank
> check flip-completion == vblank + 1

That's basic_flip_vs_cursor; the subtests that are failing are the
cursor_vs_flip_* ones, which run the cursor update in a separate
thread. So are you suggesting just removing these from bat, or doing
only a single cursor update (target=1)? The latter would reduce the
chance of failure, but wouldn't eliminate it.

> > > check vblank hasn't advanced
> > >
> > > We are not doing any busy loops here and there should be nothing
> > > else running on the system. So what caused the context switch?
> > > Who are we fighting against?
> >
> > The cursor thread is one source of the delay; other than that it
> > could be anything running in the background. In my traces it looked
> > like something related to CI remote logging caused a >16ms delay
> > for both the user flip thread and the subsequent MMIO work. Imo
> > there is no guarantee that such delays won't happen between threads
> > running at the same priority, hence the need for a higher priority
> > for the timing sensitive stuff. Note that we see this problem on
> > BSW with 2 CPUs.
> >
> > > If the only thing that is causing the issue is the kernel thread
> > > used for the mmioflip (which won't be scheduled for another 16ms
> > > until the next vblank), we have another bug to track down.
> >
> > The MMIO flip work is scheduled right after we request the flip
> > (since we do the request after the previous flip completed) and I
> > saw it being delayed >16ms for the above reasons. Besides this I
> > also saw the user space flip thread being delayed the same way.
> >
> > > Imo, this patch is just papering over an issue that as it stands
> > > will be present in real userspace (i.e. causing jerkiness in X,
> > > weston, cros etc).
> >
> > I can't see any other way than adjusting priorities to guarantee
> > the timely completion of some work. Otherwise you'll only get best
> > effort scheduling, and that doesn't seem to be enough in these
> > subtests.
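
To be concrete about what adjusting priorities means here: it amounts
to a plain real-time scheduling request at the start of the timing
sensitive subtests, along the lines of the sketch below.
boost_priority() and the exact policy/priority values are only
illustrative, not necessarily what the patch ends up using:

#include <sched.h>
#include <stdio.h>
#include <string.h>

/*
 * Illustrative sketch only: move the calling thread to a real-time
 * class so that unrelated SCHED_OTHER threads (CI logging and the
 * like) can't delay it for a whole frame. Needs CAP_SYS_NICE/root.
 */
static void boost_priority(void)
{
        struct sched_param param;

        memset(&param, 0, sizeof(param));
        param.sched_priority = 1;

        /* On Linux, pid 0 applies this to the calling thread. */
        if (sched_setscheduler(0, SCHED_FIFO, &param))
                perror("sched_setscheduler");
}

The idea would be to do this in both the flip thread and the cursor
update thread, so that only threads unrelated to the test can get
delayed.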
> Our worker has multiple phases and waits, of which only a small
> portion is timing crucial. We don't want to boost the priority of
> everything it does, only the reprogramming of the registers within
> the next vblank. The inputs to that crucial phase are irq driven (be
> they render completion on any dmabuf device, or a delay until after
> vblank) and we could move the mmio into that irq context, and that
> would avoid scheduling issues on all but the RT systems that want
> threaded irqs.

There are differences in how time critical the different phases are
(for example preparing the flip vs. the register updates done under
vblank evasion), but the whole work of queuing the flip is time
critical: there is only one frame's worth of time to complete it,
given that only a single flip can be queued at a time.

--Imre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx