Re: [PATCH 1/2] drm/i915/xehp: Add compute engine ABI

On 25/04/2022 19:40, Yang, Fei wrote:
--- a/drivers/gpu/drm/i915/gt/intel_gt.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt.c
@@ -1175,6 +1175,7 @@ void intel_gt_invalidate_tlbs(struct intel_gt *gt)
   		[VIDEO_DECODE_CLASS]		= GEN12_VD_TLB_INV_CR,
   		[VIDEO_ENHANCEMENT_CLASS]	= GEN12_VE_TLB_INV_CR,
   		[COPY_ENGINE_CLASS]		= GEN12_BLT_TLB_INV_CR,
+		[COMPUTE_CLASS]			= GEN12_GFX_TLB_INV_CR,

Do you know what 0xcf04 is?

Looks like that is the TLB invalidation register for each compute context.

What does compute "context" stand for here, as used in bspec? Not compute command streamer? Suspiciously, individual bits (eight of them) are reserved per context, just like, for example, in GEN12_VD_TLB_INV_CR.

Or, if GEN12_GFX_TLB_INV_CR is correct, then I think get_reg_and_bit()
might need adjusting to always select bit 0 for any compute engine
instance. Not sure how the hardware would behave if a value other
than '1' were written into 0xced8.

I think Prathap and Fei have more familiarity with the MMIO TLB invalidation; adding them for their thoughts.

I believe GEN12_GFX_TLB_INV_CR is the right one to use because we are invalidating the TLB for each engine.

I don't understand this argument, I guess because I still don't understand 0xcf04.

I'm not sure if we could narrow it down to exactly which compute context the TLB needs to be invalidated for, though. If that's possible it might be a bit more efficient.

Or whether it would even be correct, if 0xcf04 is for compute command streamers? That's my concern.
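
To illustrate the get_reg_and_bit() point, a rough sketch only (the rb/regs[] names follow the existing gen12 path in intel_gt.c, and the single-bit behaviour of 0xced8 for compute is exactly the assumption I am not sure about): if GEN12_GFX_TLB_INV_CR stays as the COMPUTE_CLASS entry, the per-instance bit selection would need special-casing along these lines:

	rb.reg = regs[class];

	if (class == COMPUTE_CLASS)
		/* Assumption: 0xced8 only honours a single bit for compute. */
		rb.bit = BIT(0);
	else
		rb.bit = BIT(engine->instance);

Whereas if 0xcf04 really is a per compute command streamer register, then it would be the table entry that needs changing, and the per-instance bit selection could stay as it is.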

Regards,

Tvrtko


