Re: [PATCH] drm/i915: Use correct huge page manager for MTL

-----Original Message-----
From: Auld, Matthew <matthew.auld@xxxxxxxxx> 
Sent: Monday, February 27, 2023 3:20 AM
To: Cavitt, Jonathan <jonathan.cavitt@xxxxxxxxx>; intel-gfx@xxxxxxxxxxxxxxxxxxxxx
Cc: Dutt, Sudeep <sudeep.dutt@xxxxxxxxx>; thomas.hellstrom@xxxxxxxxxxxxxxx; maarten.lankhorst@xxxxxxxxxxxxxxx; Vetter, Daniel <daniel.vetter@xxxxxxxxx>; De Marchi, Lucas <lucas.demarchi@xxxxxxxxx>; chris.p.wilson@xxxxxxxxxxxxxxx
Subject: Re: [PATCH] drm/i915: Use correct huge page manager for MTL
> 
> On 24/02/2023 17:40, Jonathan Cavitt wrote:
> > MTL currently uses gen8_ppgtt_insert_huge when managing huge pages.  This is because
> > MTL reports as not supporting 64K pages; more precisely, the check that reports
> > whether a platform has 64K pages returns false for MTL.  That is only half correct,
> > as the check only tracks 64K page support for LMEM, which MTL doesn't have.
> > 
> > MTL should be using xehpsdv_ppgtt_insert_huge.  However, simply switching to that
> > manager doesn't resolve the issue, because MTL expects the page table's virtual
> > address space to be flushed after initialization, so we must also add a flush
> > statement there.
> > 
> > The changes made to the huge page manager selection indirectly impacted some of the
> > mock hugepage selftests.  Due to the use of pte level hints, rather than pde level
> > hints, we now expect 64K page sizes to be properly reported by the GTT, so we should
> > correct the expected test results to reflect this change.
> 
> For future submissions, please add the version number for each new 
> submission of the same patch, and also please include the changelog.

I thought we weren't supposed to include that information?  Or is that just a "from internal to upstream" thing?
-Jonathan Cavitt
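
(For context on the check being replaced: below is a condensed sketch, not part
of the patch.  The HAS_64K_PAGES() definition shown is my reading of i915_drv.h
and may differ in the tree this patch targets; the dispatch mirrors the final
hunk of the diff further down.)

/*
 * HAS_64K_PAGES() is assumed to just read a per-platform device-info flag.
 * Per the commit message, that flag only means "LMEM requires 64K pages",
 * so it is false on MTL even though MTL's GTT supports PTE-level 64K hints.
 */
#define HAS_64K_PAGES(i915) (INTEL_INFO(i915)->has_64k_pages)

/*
 * After the patch, huge-page insertion keys off the graphics IP version
 * instead, so any 12.50+ GTT (including MTL) takes the xehpsdv path:
 */
if (vma_res->bi.page_sizes.sg > I915_GTT_PAGE_SIZE) {
	if (GRAPHICS_VER_FULL(vm->i915) >= IP_VER(12, 50))
		xehpsdv_ppgtt_insert_huge(vm, vma_res, &iter, cache_level, flags);
	else
		gen8_ppgtt_insert_huge(vm, vma_res, &iter, cache_level, flags);
}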

> 
> > 
> > Signed-off-by: Jonathan Cavitt <jonathan.cavitt@xxxxxxxxx>
> > ---
> >   drivers/gpu/drm/i915/gem/selftests/huge_pages.c | 11 ++++-------
> >   drivers/gpu/drm/i915/gt/gen8_ppgtt.c            |  3 ++-
> >   2 files changed, 6 insertions(+), 8 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
> > index defece0bcb81..06554717495f 100644
> > --- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
> > +++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
> > @@ -784,9 +784,6 @@ static int igt_mock_ppgtt_huge_fill(void *arg)
> >   		GEM_BUG_ON(!expected_gtt);
> >   		GEM_BUG_ON(size);
> >   
> > -		if (expected_gtt & I915_GTT_PAGE_SIZE_4K)
> > -			expected_gtt &= ~I915_GTT_PAGE_SIZE_64K;
> > -
> >   		i915_vma_unpin(vma);
> >   
> >   		if (vma->page_sizes.sg & I915_GTT_PAGE_SIZE_64K) {
> > @@ -849,7 +846,7 @@ static int igt_mock_ppgtt_64K(void *arg)
> >   		},
> >   		{
> >   			.size = SZ_64K + SZ_4K,
> > -			.gtt = I915_GTT_PAGE_SIZE_4K,
> > +			.gtt = I915_GTT_PAGE_SIZE_64K | I915_GTT_PAGE_SIZE_4K,
> >   			.offset = 0,
> >   		},
> >   		{
> > @@ -864,7 +861,7 @@ static int igt_mock_ppgtt_64K(void *arg)
> >   		},
> >   		{
> >   			.size = SZ_2M - SZ_4K,
> > -			.gtt = I915_GTT_PAGE_SIZE_4K,
> > +			.gtt = I915_GTT_PAGE_SIZE_64K | I915_GTT_PAGE_SIZE_4K,
> >   			.offset = 0,
> >   		},
> >   		{
> > @@ -886,12 +883,12 @@ static int igt_mock_ppgtt_64K(void *arg)
> >   		{
> >   			.size = SZ_64K,
> >   			.offset = SZ_2M,
> > -			.gtt = I915_GTT_PAGE_SIZE_4K,
> > +			.gtt = I915_GTT_PAGE_SIZE_64K,
> >   		},
> >   		{
> >   			.size = SZ_128K,
> >   			.offset = SZ_2M - SZ_64K,
> > -			.gtt = I915_GTT_PAGE_SIZE_4K,
> > +			.gtt = I915_GTT_PAGE_SIZE_64K,
> >   		},
> 
> Did you consider the suggestion of possibly making this a live test 
> instead?
> 
> The first comment in igt_mock_ppgtt_64K() describing the test is:
> 
> /*
>   * Sanity check some of the trickiness with 64K pages -- either we can
>   * safely mark the whole page-table(2M block) as 64K, or we have to
>   * always fallback to 4K.
>   */
> 
> That doesn't really apply to the new 64K GTT model it seems (which is 
> why it now fails), so trying to adjust the test just because the mock 
> device underneath is now using the newer model doesn't seem correct to 
> me. If we instead make it a live test and only run it on devices with 
> the old 64K GTT model, then we still retain the same test coverage. 
> Alternatively, we could potentially run on both HW models with slightly 
> different test expectations. IMO the test is too HW specific for a mock 
> test.
> 
> >   	};
> >   	struct i915_vma *vma;
> > diff --git a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
> > index 4daaa6f55668..9c571185395f 100644
> > --- a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
> > +++ b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
> > @@ -570,6 +570,7 @@ xehpsdv_ppgtt_insert_huge(struct i915_address_space *vm,
> >   			}
> >   		} while (rem >= page_size && index < max);
> >   
> > +		drm_clflush_virt_range(vaddr, PAGE_SIZE);
> >   		vma_res->page_sizes_gtt |= page_size;
> >   	} while (iter->sg && sg_dma_len(iter->sg));
> >   }
> > @@ -707,7 +708,7 @@ static void gen8_ppgtt_insert(struct i915_address_space *vm,
> >   	struct sgt_dma iter = sgt_dma(vma_res);
> >   
> >   	if (vma_res->bi.page_sizes.sg > I915_GTT_PAGE_SIZE) {
> > -		if (HAS_64K_PAGES(vm->i915))
> > +		if (GRAPHICS_VER_FULL(vm->i915) >= IP_VER(12, 50))
> >   			xehpsdv_ppgtt_insert_huge(vm, vma_res, &iter, cache_level, flags);
> >   		else
> >   			gen8_ppgtt_insert_huge(vm, vma_res, &iter, cache_level, flags);
> 
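
(On the live-test suggestion above, a minimal sketch of what gating on the old
64K GTT model could look like; the function name and the exact skip condition
are illustrative guesses, not something proposed in this thread.  A real test
would likely need additional capability checks.)

static int igt_ppgtt_64K_live(void *arg)
{
	struct drm_i915_private *i915 = arg;

	/*
	 * The "whole 2M page-table as 64K, or fall back to 4K" expectation
	 * only holds for the older GTT model; skip on the newer PTE-level
	 * 64K-hint model (graphics IP 12.50+), where mixing page sizes
	 * within a page table is fine.
	 */
	if (GRAPHICS_VER_FULL(i915) >= IP_VER(12, 50))
		return 0;

	/* ... body of the existing igt_mock_ppgtt_64K(), unchanged ... */
	return 0;
}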



