Quoting Tvrtko Ursulin (2020-07-16 16:29:37)
> On 16/07/2020 12:33, Chris Wilson wrote:
> > Now that we have serialised the request retirement's decoupling of the
> > breadcrumb from the engine with the request signaling, we know that
> > there should never be a stale breadcrumb attached to an unbound virtual
> > engine. We can now remove the fixup code that had to migrate the
> > breadcrumbs along with the virtual engine, from one sibling to the next.
>
> What do you mean by "unbound virtual engine"?

I think of ve->context.inflight == NULL as being unbound.

> Previous siblings[0]? We do know that has been completed, at the point
> the next one is getting dequeued, and by virtue of breadcrumbs doing the
> signaling it will have been removed from the list. But that was true
> before. Which leaves me confused as to why the transfer was needed...
> Was it just because explicit wait used to be a potential signaler and
> that's no longer the case?

Evidently we did get requests finding their way onto ve->engine[0].breadcrumbs
after the unsubmit. I thought I had a good explanation with a window between
ACTIVE and SIGNALED, but going back to tip, those transitions are all
underneath the rq->lock.

However, if we submit a completed request, it is put onto
rq->engine->breadcrumbs, but we do not schedule-in the context. That would
leave us with breadcrumbs on ve->engine[0] while ve->context.inflight was
NULL, and the next virtual request submission could then switch to a new
engine, leaving stale requests behind.

Ok, the issue of stale breadcrumbs is not completely solved yet. But this
time, this time for sure, I think I know the cause of the stale breadcrumbs!
-Chris
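
To make that window concrete, here is a minimal, self-contained toy model of
the scenario (standalone C, not i915 code; toy_ve, toy_engine and toy_request
are made-up names standing in loosely for the virtual engine, its siblings and
their breadcrumb lists):

/*
 * Toy model of the window described above -- this is NOT i915 code.
 * toy_request/toy_engine/toy_ve are made-up names standing in for the
 * request, the physical siblings and the virtual engine.
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_request {
	bool completed;
	struct toy_request *next;	/* link in an engine's breadcrumb list */
};

struct toy_engine {
	const char *name;
	struct toy_request *breadcrumbs;	/* signalers attached to this engine */
};

struct toy_ve {
	struct toy_engine *inflight;	/* NULL => virtual engine is unbound */
	struct toy_engine siblings[2];
};

/* Hook the request onto the breadcrumb list of the engine it was submitted on. */
static void add_breadcrumb(struct toy_engine *engine, struct toy_request *rq)
{
	rq->next = engine->breadcrumbs;
	engine->breadcrumbs = rq;
}

/*
 * Submitting an already-completed request: the breadcrumb still gets
 * attached to siblings[0], but there is nothing to run, so the context
 * is never scheduled-in and ve->inflight stays NULL.
 */
static void submit_completed(struct toy_ve *ve, struct toy_request *rq)
{
	rq->completed = true;
	add_breadcrumb(&ve->siblings[0], rq);
	/* ve->inflight deliberately left NULL */
}

/* The next real submission is free to pick any sibling, e.g. siblings[1]. */
static void submit_next(struct toy_ve *ve, struct toy_request *rq,
			struct toy_engine *chosen)
{
	add_breadcrumb(chosen, rq);
	ve->inflight = chosen;
}

int main(void)
{
	struct toy_ve ve = {
		.siblings = { { .name = "sibling0" }, { .name = "sibling1" } },
	};
	struct toy_request done = { 0 }, fresh = { 0 };

	submit_completed(&ve, &done);
	submit_next(&ve, &fresh, &ve.siblings[1]);

	/* siblings[0] now carries a stale breadcrumb for a completed request. */
	printf("%s breadcrumbs stale? %s\n", ve.siblings[0].name,
	       ve.siblings[0].breadcrumbs ? "yes" : "no");
	return 0;
}

Compiled and run, this reports a leftover breadcrumb on siblings[0] while the
fresh submission bound the virtual engine to siblings[1], which is the
stale-breadcrumb state described in the mail above.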