Re: [PATCH 1/2] drm/i915/xehp: Add compute engine ABI

>> > --- a/drivers/gpu/drm/i915/gt/intel_gt.c
>> > +++ b/drivers/gpu/drm/i915/gt/intel_gt.c
>> > @@ -1175,6 +1175,7 @@ void intel_gt_invalidate_tlbs(struct intel_gt *gt)
>> >   		[VIDEO_DECODE_CLASS]		= GEN12_VD_TLB_INV_CR,
>> >   		[VIDEO_ENHANCEMENT_CLASS]	= GEN12_VE_TLB_INV_CR,
>> >   		[COPY_ENGINE_CLASS]		= GEN12_BLT_TLB_INV_CR,
>> > +		[COMPUTE_CLASS]			= GEN12_GFX_TLB_INV_CR,
>> 
>> Do you know what 0xcf04 is?

Looks like that is the TLB invalidation register for each compute context.
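
As a rough illustration of how that could slot in (just a sketch; the XEHP_COMPCTX_TLB_INV_CR name is made up here, and 0xcf04 being a per-compute-context TLB invalidation register is only the guess above, not confirmed):

	/* Hypothetical define for 0xcf04; not an existing i915 register name.
	 * It would sit next to the other GEN12_*_TLB_INV_CR defines and use
	 * the usual _MMIO() helper.
	 */
	#define XEHP_COMPCTX_TLB_INV_CR		_MMIO(0xcf04)

	static const i915_reg_t gen12_regs[] = {
		[RENDER_CLASS]			= GEN12_GFX_TLB_INV_CR,
		[VIDEO_DECODE_CLASS]		= GEN12_VD_TLB_INV_CR,
		[VIDEO_ENHANCEMENT_CLASS]	= GEN12_VE_TLB_INV_CR,
		[COPY_ENGINE_CLASS]		= GEN12_BLT_TLB_INV_CR,
		[COMPUTE_CLASS]			= XEHP_COMPCTX_TLB_INV_CR,
	};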

>> 
>> Or if GEN12_GFX_TLB_INV_CR is correct then I think get_reg_and_bit()
>> might need adjusting to always select bit 0 for any compute engine
>> instance. Not sure how the hardware would behave if a value other than
>> '1' were written into 0xced8.
> 
> I think Prathap and Fei have more familiarity with the MMIO TLB invalidation; adding them for their thoughts.

I believe GEN12_GFX_TLB_INV_CR is the right one to use here, since we are invalidating the TLB for each engine.
I'm not sure whether we could narrow it down to exactly which compute context needs its TLB invalidated, though. If that's possible, it might be a bit more efficient.
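
To make the bit-selection question concrete, here is a sketch of the adjustment Tvrtko mentions, assuming get_reg_and_bit() keeps its current shape (rb is the struct reg_and_bit local in that helper) and that compute engines keep sharing GEN12_GFX_TLB_INV_CR (0xced8) with render; forcing bit 0 for COMPUTE_CLASS is an assumption about the hardware, not confirmed behaviour:

	rb.reg = regs[class];
	if (gen8 && class == VIDEO_DECODE_CLASS)
		rb.reg.reg += 4 * engine->instance; /* GEN8_M2TCR */
	else if (class != COMPUTE_CLASS)
		rb.bit = engine->instance;
	/* else: leave rb.bit at 0 so every compute engine instance writes
	 * only bit 0, since it is unclear how 0xced8 treats other bits.
	 */

	rb.bit = BIT(rb.bit);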

> Matt

>> 
>> Regards,
>> 
>> Tvrtko



