On Tue, Feb 20, 2024 at 03:35:25PM +0100, Andi Shyti wrote:
> The hardware should not dynamically balance the load between CCS
> engines. Wa_14019159160 recommends disabling it across all
> platforms.
>
> Fixes: d2eae8e98d59 ("drm/i915/dg2: Drop force_probe requirement")
> Signed-off-by: Andi Shyti <andi.shyti@xxxxxxxxxxxxxxx>
> Cc: Chris Wilson <chris.p.wilson@xxxxxxxxxxxxxxx>
> Cc: Joonas Lahtinen <joonas.lahtinen@xxxxxxxxxxxxxxx>
> Cc: Matt Roper <matthew.d.roper@xxxxxxxxx>
> Cc: <stable@xxxxxxxxxxxxxxx> # v6.2+
> ---
>  drivers/gpu/drm/i915/gt/intel_gt_regs.h     | 1 +
>  drivers/gpu/drm/i915/gt/intel_workarounds.c | 6 ++++++
>  2 files changed, 7 insertions(+)
>
> diff --git a/drivers/gpu/drm/i915/gt/intel_gt_regs.h b/drivers/gpu/drm/i915/gt/intel_gt_regs.h
> index 50962cfd1353..cf709f6c05ae 100644
> --- a/drivers/gpu/drm/i915/gt/intel_gt_regs.h
> +++ b/drivers/gpu/drm/i915/gt/intel_gt_regs.h
> @@ -1478,6 +1478,7 @@
>
>  #define GEN12_RCU_MODE				_MMIO(0x14800)
>  #define   GEN12_RCU_MODE_CCS_ENABLE		REG_BIT(0)
> +#define   XEHP_RCU_MODE_FIXED_SLICE_CCS_MODE	REG_BIT(1)
>
>  #define CHV_FUSE_GT				_MMIO(VLV_GUNIT_BASE + 0x2168)
>  #define   CHV_FGT_DISABLE_SS0			(1 << 10)
> diff --git a/drivers/gpu/drm/i915/gt/intel_workarounds.c b/drivers/gpu/drm/i915/gt/intel_workarounds.c
> index d67d44611c28..9126b37186fc 100644
> --- a/drivers/gpu/drm/i915/gt/intel_workarounds.c
> +++ b/drivers/gpu/drm/i915/gt/intel_workarounds.c
> @@ -2988,6 +2988,12 @@ general_render_compute_wa_init(struct intel_engine_cs *engine, struct i915_wa_li
>  		wa_mcr_masked_en(wal, GEN8_HALF_SLICE_CHICKEN1,
>  				 GEN7_PSD_SINGLE_PORT_DISPATCH_ENABLE);
>  	}
> +
> +	/*
> +	 * Wa_14019159160: disable the CCS load balancing
> +	 * indiscriminately for all the platforms

The database's description of this workaround is a bit confusing since
it's been modified a few times, but if I'm reading it correctly it
doesn't sound like this is what it's asking us to do.  What I see says
that load balancing shouldn't be allowed specifically while the RCS is
active.  If the RCS is sitting idle, I believe you're free to use as
many CCS engines as you like, with load balancing still active.

We already have other workarounds that prevent different address spaces
from executing on the RCS/CCS engines at the same time, so the part
about "same address space" in the description should already be
satisfied.  It sounds like the issues now are if 2+ CCS's are in use
and something new shows up to run on the previously-idle RCS, or if
something's already running on the RCS and 1 CCS, and something new
shows up to run on an additional CCS.  The workaround details make it
sound like it's supposed to be the GuC's responsibility to prevent the
new work from getting scheduled onto the additional engine while we're
already in one of those two situations, so I don't see anything asking
us to change the hardware-level load balance enable/disable (actually
the spec specifically tells us *not* to do this).  Aren't we supposed
to be just setting a GuC workaround flag for this?


Matt

> +	 */
> +	wa_masked_en(wal, GEN12_RCU_MODE, XEHP_RCU_MODE_FIXED_SLICE_CCS_MODE);
>  }
>
>  static void
> --
> 2.43.0
>

-- 
Matt Roper
Graphics Software Engineer
Linux GPU Platform Enablement
Intel Corporation
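
For context, a minimal sketch of the "GuC workaround flag" alternative
referred to above, assuming the existing guc_ctl_wa_flags() mechanism in
drivers/gpu/drm/i915/gt/uc/intel_guc.c were the vehicle.  The
GUC_WA_RCS_CCS_SERIALIZE bit name and the DG2 platform check are
placeholders for illustration only; they are not part of the real GuC
interface or of the patch in this thread.

/*
 * Hypothetical sketch: advertise Wa_14019159160 to the GuC as a
 * scheduling restriction instead of disabling CCS load balancing in
 * RCU_MODE.  GUC_WA_RCS_CCS_SERIALIZE is a placeholder name, not a
 * real bit in the GuC ABI.
 */
static u32 guc_ctl_wa_flags(struct intel_guc *guc)
{
	struct intel_gt *gt = guc_to_gt(guc);
	u32 flags = 0;

	/* ... existing GUC_WA_* flags elided ... */

	/*
	 * Wa_14019159160: the GuC, not the host, keeps new work off an
	 * additional CCS (or the RCS) whenever the RCS and multiple CCS
	 * engines would otherwise be active at the same time.
	 */
	if (IS_DG2(gt->i915))
		flags |= GUC_WA_RCS_CCS_SERIALIZE;

	return flags;
}

Under that approach the XEHP_RCU_MODE_FIXED_SLICE_CCS_MODE write would be
dropped entirely, leaving hardware load balancing enabled whenever the RCS
is idle, which matches the reading of the workaround given above.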