Re: [PATCH v2 2/2] drm/i915/scheduler: add gvt notification for guc

On Mon, Apr 10, 2017 at 02:40:24AM +0000, Dong, Chuanxiao wrote:
> > -----Original Message-----
> > From: intel-gvt-dev [mailto:intel-gvt-dev-bounces@xxxxxxxxxxxxxxxxxxxxx] On
> > Behalf Of Dong, Chuanxiao
> > Sent: Thursday, April 6, 2017 11:19 PM
> > To: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> > Cc: Tian, Kevin <kevin.tian@xxxxxxxxx>; intel-gvt-dev@xxxxxxxxxxxxxxxxxxxxx;
> > intel-gfx@xxxxxxxxxxxxxxxxxxxxx; joonas.lahtinen@xxxxxxxxxxxxxxx; Zheng,
> > Xiao <xiao.zheng@xxxxxxxxx>; Wang, Zhi A <zhi.a.wang@xxxxxxxxx>
> > Subject: RE: [PATCH v2 2/2] drm/i915/scheduler: add gvt notification for guc
> > 
> > 
> > 
> > > -----Original Message-----
> > > From: Chris Wilson [mailto:chris@xxxxxxxxxxxxxxxxxx]
> > > Sent: Thursday, April 6, 2017 11:07 PM
> > > To: Dong, Chuanxiao
> > > Cc: intel-gfx@xxxxxxxxxxxxxxxxxxxxx;
> > > intel-gvt-dev@xxxxxxxxxxxxxxxxxxxxx;
> > > Zheng, Xiao; Tian, Kevin; joonas.lahtinen@xxxxxxxxxxxxxxx; Wang, Zhi A
> > > Subject: Re: [PATCH v2 2/2] drm/i915/scheduler: add gvt notification
> > > for guc
> > >
> > > On Thu, Apr 06, 2017 at 02:49:54PM +0000, Dong, Chuanxiao wrote:
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Chris Wilson [mailto:chris@xxxxxxxxxxxxxxxxxx]
> > > > > Sent: Thursday, April 6, 2017 10:19 PM
> > > > > To: Dong, Chuanxiao
> > > > > Cc: intel-gfx@xxxxxxxxxxxxxxxxxxxxx;
> > > > > intel-gvt-dev@xxxxxxxxxxxxxxxxxxxxx;
> > > > > Zheng, Xiao; Tian, Kevin; joonas.lahtinen@xxxxxxxxxxxxxxx
> > > > > Subject: Re: [PATCH v2 2/2] drm/i915/scheduler: add gvt
> > > > > notification for guc
> > > > >
> > > > > On Thu, Apr 06, 2017 at 02:05:15PM +0000, Dong, Chuanxiao wrote:
> > > > > >
> > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: Chris Wilson [mailto:chris@xxxxxxxxxxxxxxxxxx]
> > > > > > > Sent: Thursday, April 6, 2017 9:32 PM
> > > > > > > To: Dong, Chuanxiao
> > > > > > > Cc: intel-gfx@xxxxxxxxxxxxxxxxxxxxx;
> > > > > > > intel-gvt-dev@xxxxxxxxxxxxxxxxxxxxx;
> > > > > > > Zheng, Xiao; Tian, Kevin; joonas.lahtinen@xxxxxxxxxxxxxxx
> > > > > > > Subject: Re: [PATCH v2 2/2] drm/i915/scheduler: add gvt
> > > > > > > notification for guc
> > > > > > >
> > > > > > > On Tue, Mar 28, 2017 at 05:38:41PM +0800, Chuanxiao Dong wrote:
> > > > > > > > A GVT request needs a manual mmio load/restore. Before GuC
> > > > > > > > submits a request, send a notification to gvt for mmio loading.
> > > > > > > > And after GuC has finished this GVT request, notify gvt again
> > > > > > > > for mmio restore. This follows the usage under execlists
> > > > > > > > submission.
> > > > > > > >
> > > > > > > > Cc: xiao.zheng@xxxxxxxxx
> > > > > > > > Cc: kevin.tian@xxxxxxxxx
> > > > > > > > Cc: joonas.lahtinen@xxxxxxxxxxxxxxx
> > > > > > > > Cc: chris@xxxxxxxxxxxxxxxxxx
> > > > > > > > Signed-off-by: Chuanxiao Dong <chuanxiao.dong@xxxxxxxxx>
> > > > > > > > ---
> > > > > > > >  drivers/gpu/drm/i915/i915_guc_submission.c |  4 ++++
> > > > > > > >  drivers/gpu/drm/i915/intel_gvt.h           | 12 ++++++++++++
> > > > > > > >  drivers/gpu/drm/i915/intel_lrc.c           | 21 +++------------------
> > > > > > > >  3 files changed, 19 insertions(+), 18 deletions(-)
> > > > > > > >
> > > > > > > > diff --git a/drivers/gpu/drm/i915/i915_guc_submission.c b/drivers/gpu/drm/i915/i915_guc_submission.c
> > > > > > > > index 58087630..d8a5942 100644
> > > > > > > > --- a/drivers/gpu/drm/i915/i915_guc_submission.c
> > > > > > > > +++ b/drivers/gpu/drm/i915/i915_guc_submission.c
> > > > > > > > @@ -606,6 +606,8 @@ static void __i915_guc_submit(struct drm_i915_gem_request *rq)
> > > > > > > >  	unsigned long flags;
> > > > > > > >  	int b_ret;
> > > > > > > >
> > > > > > > > +	intel_gvt_notify_context_status(rq, INTEL_CONTEXT_SCHEDULE_IN);
> > > > > > > > +
> > > > > > > >  	/* WA to flush out the pending GMADR writes to ring buffer. */
> > > > > > > >  	if (i915_vma_is_map_and_fenceable(rq->ring->vma))
> > > > > > > >  		POSTING_READ_FW(GUC_STATUS);
> > > > > > > > @@ -725,6 +727,8 @@ static void i915_guc_irq_handler(unsigned long data)
> > > > > > > >  		rq = port[0].request;
> > > > > > > >  		while (rq && i915_gem_request_completed(rq)) {
> > > > > > > >  			trace_i915_gem_request_out(rq);
> > > > > > > > +			intel_gvt_notify_context_status(rq,
> > > > > > > > +					INTEL_CONTEXT_SCHEDULE_OUT);
> > > > > > >
> > > > > > > This is incorrect though. This is no better than just waiting
> > > > > > > for the request, which is not enough since the idea is that
> > > > > > > you need to wait for the context image to be completely
> > > > > > > written to memory before you read it.
> > > > > > > -Chris
> > > > > >
> > > > > > The wait for the context image to be completely written will be
> > > > > > done in the notification handling in GVT, by checking the CSB. If
> > > > > > the wait is put here, it will make each i915 request wait, which
> > > > > > seems unnecessary.
> > > > >
> > > > > Urm, no. I hope you mean the wait will be on some other thread
> > > > > than inside this interrupt handler.
> > > >
> > > > The SCHEDULE_OUT means stopping GuC from submitting another request
> > > > before the current one is completed by GVT, so that GVT can manually
> > > > restore the MMIO. So this irq handler should wait until SCHEDULE_OUT
> > > > is completed. How is it possible to make this irq handler wait for
> > > > another thread? In the current software architecture there is no
> > > > other thread....
> > >
> > > No. It is not acceptable to have any blocking here. Rather you
> > > delegate the polling of CSB to a thread/worker that you kick off
> > > from this notify.
> > > -Chris
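
(As an aside, a minimal sketch of the worker-based approach being suggested
here: the SCHEDULE_OUT notification only kicks a work item, and the CSB
polling happens later in process context. All of the gvt_* names below are
hypothetical placeholders, not the real i915/GVT symbols.)

/* Hypothetical sketch: defer the CSB poll to a workqueue item instead of
 * blocking in the GuC interrupt handler.
 */
#include <linux/delay.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct drm_i915_gem_request;

/* Placeholders for whatever GVT uses to check the CSB and restore mmio. */
bool gvt_csb_complete(struct drm_i915_gem_request *rq);
void gvt_restore_mmio(struct drm_i915_gem_request *rq);

struct gvt_sched_out {
	struct work_struct work;
	struct drm_i915_gem_request *rq;
};

static void gvt_sched_out_fn(struct work_struct *w)
{
	struct gvt_sched_out *so = container_of(w, struct gvt_sched_out, work);

	/* Process context: polling/sleeping is fine here. */
	while (!gvt_csb_complete(so->rq))
		usleep_range(10, 20);

	gvt_restore_mmio(so->rq);	/* context image is now in memory */
	kfree(so);
}

/* Called from the SCHEDULE_OUT notification in the irq handler. */
void gvt_notify_schedule_out(struct drm_i915_gem_request *rq)
{
	struct gvt_sched_out *so = kzalloc(sizeof(*so), GFP_ATOMIC);

	if (!so)
		return;

	so->rq = rq;
	INIT_WORK(&so->work, gvt_sched_out_fn);
	schedule_work(&so->work);
}

(GFP_ATOMIC is used because the allocation happens in interrupt context;
only the worker is allowed to sleep.)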
> > 
> > The major issue is that we should wait for the context image to be
> > completely written to memory before GVT reads it. I will double check
> > whether we are really reading from the context image in this SCHEDULE_OUT
> > event and get back to you later.
> 
> Hi Chris,
> 
> I had a discussion with Zhi: the context image is actually accessed from
> the workload_thread to update the guest context, not directly from the
> SCHEDULE_OUT event. So my previous comment was wrong, and the CSB wait
> should be in the workload_thread instead of this IRQ handler.
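
(To illustrate the split described above -- the SCHEDULE_OUT notification only
signals the workload, and the GVT workload_thread does the actual wait before
touching the context image -- a rough, hypothetical sketch; none of these
names come from the real GVT implementation.)

#include <linux/completion.h>

struct gvt_workload {
	struct completion shadow_ctx_done;	/* signalled on SCHEDULE_OUT */
	/* ... request, shadow context, guest context mappings ... */
};

/* Placeholder for copying the HW context image back into the guest context. */
void gvt_update_guest_context(struct gvt_workload *wl);

/* irq/notification side: no blocking, just signal the workload. */
void gvt_on_schedule_out(struct gvt_workload *wl)
{
	complete(&wl->shadow_ctx_done);
}

/* GVT workload_thread (process context): safe to wait here. */
void gvt_complete_workload(struct gvt_workload *wl)
{
	/*
	 * Wait until the context switch has been confirmed (e.g. via the CSB)
	 * and the context image is fully written back to memory, then update
	 * the guest context from it.
	 */
	wait_for_completion(&wl->shadow_ctx_done);
	gvt_update_guest_context(wl);
}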

That's better, though I'm still a little worried about having to remember to
review these hooks later.

The other concern I expressed at the start of this was:

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 9e327049b735..5a86abd715df 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -430,16 +430,6 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
                        if (port != engine->execlist_port)
                                break;
 
-                       /* If GVT overrides us we only ever submit port[0],
-                        * leaving port[1] empty. Note that we also have
-                        * to be careful that we don't queue the same
-                        * context (even though a different request) to
-                        * the second port.
-                        */
-                       if (ctx_single_port_submission(last->ctx) ||
-                           ctx_single_port_submission(cursor->ctx))
-                               break;
-


Afaict that requirement is completely bogus (since your workloads are only
active on gvt requests and there will only ever be one scheduled-in at any
time) and burdensome on non-gvt.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre



