On Wed, 24 Aug 2022 22:03:19 -0700, Dixit, Ashutosh wrote:
>
> On Thu, 04 Aug 2022 16:21:25 -0700, Umesh Nerlige Ramappa wrote:
> >
>
> Hi Umesh,
>
> Still reviewing but I have a question below.

Please ignore this mail for now, mostly a result of my misunderstanding
the code. I will ask again if I have any questions.

Thanks.

>
> > diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c
> > index 654a092ed3d6..e2d70a9fdac0 100644
> > --- a/drivers/gpu/drm/i915/gt/intel_context.c
> > +++ b/drivers/gpu/drm/i915/gt/intel_context.c
> > @@ -576,16 +576,24 @@ void intel_context_bind_parent_child(struct intel_context *parent,
> >  	child->parallel.parent = parent;
> >  }
> >
> > -u64 intel_context_get_total_runtime_ns(const struct intel_context *ce)
> > +u64 intel_context_get_total_runtime_ns(struct intel_context *ce)
> >  {
> >  	u64 total, active;
> >
> > +	if (ce->ops->update_stats)
> > +		ce->ops->update_stats(ce);
> > +
>
> /snip/
>
> > @@ -1396,6 +1399,10 @@ static void guc_timestamp_ping(struct work_struct *wrk)
> >  	with_intel_runtime_pm(&gt->i915->runtime_pm, wakeref)
> >  		__update_guc_busyness_stats(guc);
> >
> > +	/* adjust context stats for overflow */
> > +	xa_for_each(&guc->context_lookup, index, ce)
> > +		__guc_context_update_clks(ce);
> > +
>
> The question is why do we have 2 functions: __guc_context_update_clks()
> (which we call periodically from guc_timestamp_ping()) and
> guc_context_update_stats() (which we call non-periodically from
> intel_context_get_total_runtime_ns())? Why don't we have just one function
> which is called from both places? Or rather, why don't we call
> guc_context_update_stats() from both places?
>
> If we don't call guc_context_update_stats() periodically from
> guc_timestamp_ping(), how e.g. does ce->stats.runtime.start_gt_clk get reset
> to 0? If it gets reset to 0 in __guc_context_update_clks(), then why do we
> need to reset it in guc_context_update_stats()?
>
> Also, IMO guc->timestamp.lock should be taken by this single function
> (otherwise guc_context_update_stats() is modifying
> ce->stats.runtime.start_gt_clk without taking the lock).
>
> Thanks.
> --
> Ashutosh
>
> > +static void __guc_context_update_clks(struct intel_context *ce)
> > +{
> > +	struct intel_guc *guc = ce_to_guc(ce);
> > +	struct intel_gt *gt = ce->engine->gt;
> > +	u32 *pphwsp, last_switch, engine_id;
> > +	u64 start_gt_clk, active;
> > +	unsigned long flags;
> > +	ktime_t unused;
> > +
> > +	spin_lock_irqsave(&guc->timestamp.lock, flags);
> > +
> > +	/*
> > +	 * GPU updates ce->lrc_reg_state[CTX_TIMESTAMP] when context is switched
> > +	 * out, however GuC updates PPHWSP offsets below. Hence KMD (CPU)
> > +	 * relies on GuC and GPU for busyness calculations. Due to this, a
> > +	 * potential race was highlighted in an earlier review that can lead to
> > +	 * double accounting of busyness. While the solution to this is a WIP,
> > +	 * busyness is still usable for platforms running GuC submission.
> > +	 */
> > +	pphwsp = ((void *)ce->lrc_reg_state) - LRC_STATE_OFFSET;
> > +	last_switch = READ_ONCE(pphwsp[PPHWSP_GUC_CONTEXT_USAGE_STAMP_LO]);
> > +	engine_id = READ_ONCE(pphwsp[PPHWSP_GUC_CONTEXT_USAGE_ENGINE_ID]);
> > +
> > +	guc_update_pm_timestamp(guc, &unused);
> > +
> > +	if (engine_id != 0xffffffff && last_switch) {
> > +		start_gt_clk = READ_ONCE(ce->stats.runtime.start_gt_clk);
> > +		__extend_last_switch(guc, &start_gt_clk, last_switch);
> > +		active = intel_gt_clock_interval_to_ns(gt, guc->timestamp.gt_stamp - start_gt_clk);
> > +		WRITE_ONCE(ce->stats.runtime.start_gt_clk, start_gt_clk);
> > +		WRITE_ONCE(ce->stats.active, active);
> > +	} else {
> > +		lrc_update_runtime(ce);
> > +	}
> > +
> > +	spin_unlock_irqrestore(&guc->timestamp.lock, flags);
> > +}
> > +
> > +static void guc_context_update_stats(struct intel_context *ce)
> > +{
> > +	if (!intel_context_pin_if_active(ce)) {
> > +		WRITE_ONCE(ce->stats.runtime.start_gt_clk, 0);
> > +		WRITE_ONCE(ce->stats.active, 0);
> > +		return;
> > +	}
> > +
> > +	__guc_context_update_clks(ce);
> > +	intel_context_unpin(ce);
> > +}
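
For the record, below is a rough, untested sketch of the single-function
approach the question above was getting at: guc_context_update_stats() doing
the unpinned reset under guc->timestamp.lock and serving as the one entry
point for both guc_timestamp_ping() and intel_context_get_total_runtime_ns().
It is based only on the patch quoted above, so treat it as an illustration of
the suggestion, not a proposed fix:

static void guc_context_update_stats(struct intel_context *ce)
{
	struct intel_guc *guc = ce_to_guc(ce);
	unsigned long flags;

	if (!intel_context_pin_if_active(ce)) {
		/*
		 * Not pinned, so there is no PPHWSP to sample: just clear
		 * the cached values. Do it under guc->timestamp.lock so the
		 * reset cannot race with __guc_context_update_clks() writing
		 * start_gt_clk/active for the same context.
		 */
		spin_lock_irqsave(&guc->timestamp.lock, flags);
		WRITE_ONCE(ce->stats.runtime.start_gt_clk, 0);
		WRITE_ONCE(ce->stats.active, 0);
		spin_unlock_irqrestore(&guc->timestamp.lock, flags);
		return;
	}

	/* __guc_context_update_clks() already takes guc->timestamp.lock */
	__guc_context_update_clks(ce);
	intel_context_unpin(ce);
}

The xa_for_each() loop in guc_timestamp_ping() would then call
guc_context_update_stats(ce) instead of __guc_context_update_clks(ce), so
every writer of ce->stats.runtime.start_gt_clk holds the same lock.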