Re: [PATCH 13/27] drm/i915/execlists: Pack the count into the low bits of the port.request

On Fri, Apr 28, 2017 at 01:02:25PM +0100, Tvrtko Ursulin wrote:
> 
> On 27/04/2017 15:37, Chris Wilson wrote:
> >On Thu, Apr 20, 2017 at 03:58:19PM +0100, Tvrtko Ursulin wrote:
> >>>static void record_context(struct drm_i915_error_context *e,
> >>>diff --git a/drivers/gpu/drm/i915/i915_guc_submission.c b/drivers/gpu/drm/i915/i915_guc_submission.c
> >>>index 1642fff9cf13..370373c97b81 100644
> >>>--- a/drivers/gpu/drm/i915/i915_guc_submission.c
> >>>+++ b/drivers/gpu/drm/i915/i915_guc_submission.c
> >>>@@ -658,7 +658,7 @@ static void nested_enable_signaling(struct drm_i915_gem_request *rq)
> >>>static bool i915_guc_dequeue(struct intel_engine_cs *engine)
> >>>{
> >>>	struct execlist_port *port = engine->execlist_port;
> >>>-	struct drm_i915_gem_request *last = port[0].request;
> >>>+	struct drm_i915_gem_request *last = port[0].request_count;
> >>
> >>It's confusing that in this new scheme sometimes we have direct
> >>access to the request and sometimes we have to go through the
> >>port_request macro.
> >>
> >>So maybe we should always use the port_request macro. Hm, could we
> >>invent a new type to help enforce that? Like:
> >>
> >>struct drm_i915_gem_port_request_slot {
> >>	struct drm_i915_gem_request *req_count;
> >>};
> >>
> >>And then execlist port would contain these and helpers would need to
> >>be functions?
> >>
> >>I've also noticed some GVT/GuC patches which sounded like they are
> >>adding the same single submission constraints so maybe now is the
> >>time to unify the dequeue? (Haven't looked at those patches deeper
> >>than the subject line so might be wrong.)
> >>
> >>Not sure 100% of all the above, would need to sketch it. What are
> >>your thoughts?
> >
> >I foresee a use for the count in guc as well, so conversion is ok with
> >me.
> 
> Conversion to a wrapper structure as I proposed or keeping it as you
> have it?
> 
> >>>diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
> >>>index d25b88467e5e..39b733e5cfd3 100644
> >>>--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
> >>>+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
> >>>@@ -377,8 +377,12 @@ struct intel_engine_cs {
> >>>	/* Execlists */
> >>>	struct tasklet_struct irq_tasklet;
> >>>	struct execlist_port {
> >>>-		struct drm_i915_gem_request *request;
> >>>-		unsigned int count;
> >>>+		struct drm_i915_gem_request *request_count;
> >>
> >>Would req(uest)_slot maybe be better?
> >
> >It's definitely a count (of how many times this request has been
> >submitted), and I like long verbose names when I don't want them to be
> >used directly. So expect guc to be tidied.
> 
> It is a pointer and a count. My point was that request_count sounds
> too much like a count of how many times something has been done to,
> or has happened to, the request.
> 
> Request_slot was my attempt to make it obvious in the name itself
> that there is more to it. And the wrapper struct was another step
> further; the idea was that it would force you to always go through
> the accessor for this field, since I think going sometimes directly
> and sometimes via the wrapper is too fragile.

I read slot as port[slot], whereas I am using it as a count of how many
times I have done something with this request/context.

> Anyway, my big issue is I am not sure if we are in agreement or not.
> Do you agree going with the wrapper structure makes sense or not?

I'm using port_request() in guc, see the version in #prescheduler.

What I haven't come up with is a good plan for assignment; it still
uses port->request_count = port_pack(), but now that is limited to
just the port_assign() functions.
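
[For readers following along: the packing scheme under discussion can be
sketched as below. The port_pack()/port_unpack()/port_request() names
follow the patch, but the struct request stand-in, the bit-width
constants, and the exact signatures here are illustrative, not the real
i915 definitions.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-in for struct drm_i915_gem_request; any type
 * with alignment > 1 leaves low pointer bits free for tagging. */
struct request { int id; };

#define COUNT_BITS 2
#define COUNT_MASK (((uintptr_t)1 << COUNT_BITS) - 1)

/* Pack a submission count into the low (alignment) bits of the
 * request pointer, so one word carries both. */
static inline uintptr_t port_pack(struct request *rq, unsigned int count)
{
	assert(((uintptr_t)rq & COUNT_MASK) == 0); /* relies on alignment */
	assert(count <= COUNT_MASK);
	return (uintptr_t)rq | count;
}

/* Split the packed word back into pointer and count. */
static inline struct request *port_unpack(uintptr_t packed,
					  unsigned int *count)
{
	if (count)
		*count = packed & COUNT_MASK;
	return (struct request *)(packed & ~COUNT_MASK);
}

/* Accessor for callers that only want the request pointer. */
static inline struct request *port_request(uintptr_t packed)
{
	return port_unpack(packed, NULL);
}
```

The count records how many times the request/context has been
submitted, and the pointer's natural alignment guarantees the low bits
are otherwise zero, which is why assignment must stay funnelled through
port_pack() in the port_assign() helpers.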
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx



