Quoting Mika Kuoppala (2020-10-07 10:40:59)
> Chris Wilson <chris@xxxxxxxxxxxxxxxxxx> writes:
>
> > The initial breadcrumb marks the transition from context wait and setup
> > into the request payload. We use the marker to determine if the request
> > is merely waiting to begin, or is inside the payload and hung. Forgetting
> > to include a breadcrumb before the user payload would mean we do not
> > reset the guilty user request, and conversely if the initial breadcrumb
> > is too early we blame the user for a problem elsewhere.
>
> Agreed.
>
> >
> > Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> > ---
> >  drivers/gpu/drm/i915/gem/i915_gem_client_blt.c | 18 +++++++++---------
> >  1 file changed, 9 insertions(+), 9 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_client_blt.c b/drivers/gpu/drm/i915/gem/i915_gem_client_blt.c
> > index 272cf3ea68d5..44821d94544f 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_client_blt.c
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_client_blt.c
> > @@ -202,12 +202,6 @@ static void clear_pages_worker(struct work_struct *work)
> >  	if (unlikely(err))
> >  		goto out_request;
> >
> > -	if (w->ce->engine->emit_init_breadcrumb) {
> > -		err = w->ce->engine->emit_init_breadcrumb(rq);
> > -		if (unlikely(err))
> > -			goto out_request;
> > -	}
> > -
> >  	/*
> >  	 * w->dma is already exported via (vma|obj)->resv we need only
> >  	 * keep track of the GPU activity within this vma/request, and
> > @@ -217,9 +211,15 @@ static void clear_pages_worker(struct work_struct *work)
> >  	if (err)
> >  		goto out_request;
> >
> > -	err = w->ce->engine->emit_bb_start(rq,
> > -					   batch->node.start, batch->node.size,
> > -					   0);
>
> We don't have to do any extra steps to counter
>
> 	__i915_vma_move_to_active(vma, rq);
>
> in error path?

No. We always submit the request, and the request is always serialised
with the steps that have been set up before.
So if we fail in adding another serialisation step, we can safely abort
and complete all the steps prior to this point (by submitting the empty
request), and anything that subsequently wants to wait on those resources
we have already claimed will wait on this request (which in turn waits on
the previous resource holders, and so forth; the lock chain is never
broken).
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx