I think we are also bottoming out on the opens for this patch too:

On Thu, 2023-04-20 at 13:21 -0700, Ceraolo Spurio, Daniele wrote:
> On 4/20/2023 11:49 AM, Teres Alexis, Alan Previn wrote:
> > On Wed, 2023-03-29 at 09:56 -0700, Ceraolo Spurio, Daniele wrote:
> > > The GSC notifies us of a proxy request via the HECI2 interrupt. The
> > alan:snip
> >
> > > @@ -256,6 +262,7 @@ void gen11_gt_irq_postinstall(struct intel_gt *gt)
> > >   	u32 irqs = GT_RENDER_USER_INTERRUPT;
> > >   	u32 guc_mask = intel_uc_wants_guc(&gt->uc) ? GUC_INTR_GUC2HOST : 0;
> > >   	u32 gsc_mask = 0;
> > > +	u32 heci_mask = 0;
> > alan: nit: should we call this heci2_mask - uniquely identifiable from legacy?
>
> The mask is theoretically not just for HECI2, the bit we set in it is.
> If future platforms add back the HECI1 interrupt then it'd go in the
> same mask.
alan: yeah - I'm good with that - no change needed then - it was a nit anyway.
>
> > alan:snip
> >
> > > -	else if (HAS_HECI_GSC(gt->i915))
> > > +		heci_mask = GSC_IRQ_INTF(1); /* HECI2 IRQ for SW Proxy*/
> > alan: does this GSC_IRQ_INTF macro still make sense for these newer platforms with GSC-CS / HECI2?
> > I don't think I see other bit definitions for this mask register, so we might as well just define it as BIT(14)?
>
> GSC_IRQ_INTF(1) is the HECI2 interrupt on DG2 and the bit has remained
> the same exactly to allow SW to re-use the same code/defines, so IMO we
> should do so.
alan: okay sure - sounds good.
>
> > > diff --git a/drivers/gpu/drm/i915/gt/intel_gt_regs.h b/drivers/gpu/drm/i915/gt/intel_gt_regs.h
> > > index 4aecb5a7b631..da11ce5ca99e 100644
> > > --- a/drivers/gpu/drm/i915/gt/intel_gt_regs.h
> > > +++ b/drivers/gpu/drm/i915/gt/intel_gt_regs.h
> > > @@ -1577,6 +1577,7 @@
> > >
> > >   #define GEN11_GT_INTR_DW(x)		_MMIO(0x190018 + ((x) * 4))
> > >   #define GEN11_CSME			(31)
> > > +#define GEN12_HECI_2			(30)
> > alan: I don't see this being used anywhere - we should remove this.
>
> A few of the defines for this register here are not used.
> I've added this one in as a way of documenting where the bit was, but
> I can remove it if you think it's unneeded.
alan: Oh, I actually didn't notice that earlier - in that case, please keep it. It would be nice to add a comment for all such bits (consider this a nit).
>
> > > +void intel_gsc_proxy_irq_handler(struct intel_gsc_uc *gsc, u32 iir)
> > > +{
> > > +	struct intel_gt *gt = gsc_uc_to_gt(gsc);
> > > +
> > > +	if (unlikely(!iir))
> > > +		return;
> > > +
> > > +	lockdep_assert_held(gt->irq_lock);
> > > +
> > > +	if (!gsc->proxy.component) {
> > > +		gt_err(gt, "GSC proxy irq received without the component being bound!\n");
> > alan: So although proxy init is only a one-time thing (even surviving suspend-resume), we
> > know that proxy runtime IRQs could happen anytime (depending on other gpu-security-related
> > system interactions). However, would the component driver be bound/unbound through a
> > suspend-resume cycle? If yes, would this check fail if an IRQ for a proxy request
> > came too early after a resume cycle? If so, should we not do something here to ensure that
> > when the component gets bound later, we know there is something waiting to be processed?
> > (in bind, if proxy-init was done before, but we have outstanding IRQs, then we can trigger
> > the worker from there, i.e. the bind func?)
>
> A proxy request can only be triggered in response to a userspace ask,
> which in turn can only happen once we've completed the resume flow and
> the component is re-bound. Therefore, we should never have a situation
> where we get a proxy interrupt without the component being bound.
alan: thanks - my bad.
> > > --- a/drivers/gpu/drm/i915/gt/uc/intel_gsc_uc.h
> > > +++ b/drivers/gpu/drm/i915/gt/uc/intel_gsc_uc.h
> > > @@ -23,6 +23,9 @@ struct intel_gsc_uc {
> > >   	/* for delayed load and proxy handling */
> > >   	struct workqueue_struct *wq;
> > >   	struct work_struct work;
> > > +	u32 gsc_work_actions; /* protected by gt->irq_lock */
> > > +#define GSC_ACTION_FW_LOAD BIT(0)
> > > +#define GSC_ACTION_SW_PROXY BIT(1)
> >
> > alan: perhaps it's time to have a substruct for "proxy_worker" and include
> > 'wq' and 'work' in addition to 'actions'?
>
> The worker is not just for proxy, we use it for GSC and HuC loading as
> well. It's the main way we handle the gsc_uc, so IMO it's better off
> staying in the main struct. However, if you still think a substructure
> will make things cleaner I can add it in, but please suggest a name
> because I have no idea what to call it (something like handler?).
alan: I was thinking "task_handler" - but since it's not specific to proxy, let's not change it.