On 17/02/2023 19:18, Jonathan Cavitt wrote:
MTL currently uses gen8_ppgtt_insert_huge when managing huge pages. This is because
MTL reports as not supporting 64K pages or, more accurately, the check that reports
whether a platform has 64K pages (HAS_64K_PAGES) returns false for MTL. That is only
half correct: the 64K page support check only cares about 64K page support for LMEM,
which MTL doesn't have.
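To make the distinction concrete, here is a minimal sketch (the helpers are made up
for illustration and are not the real i915 definitions; only IP_VER() and the insert
function names come from the actual code) of the two different questions being
conflated:

/* Illustrative sketch only -- these helpers do not exist in i915. */

/* "Does LMEM need 64K allocation granularity?"  This is what the
 * HAS_64K_PAGES()-style capability answers, and it is false on MTL
 * simply because MTL has no LMEM at all. */
static bool example_needs_64k_lmem_pages(bool has_lmem, bool lmem_needs_64k)
{
	return has_lmem && lmem_needs_64k;
}

/* "Does this GT use the Xe_HP page-table layout?"  This is what actually
 * selects xehpsdv_ppgtt_insert_huge() over gen8_ppgtt_insert_huge(), and
 * it is true on MTL (graphics IP version 12.70 >= 12.50). */
static bool example_uses_xehp_pt_layout(u32 ver_full)
{
	return ver_full >= IP_VER(12, 50);
}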
MTL should be using xehpsdv_ppgtt_insert_huge. However, simply switching over to that
insert function doesn't resolve the issue, because MTL expects the virtual address
range of the page table to be flushed after it is initialized, so we must also add a
flush there.
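As a rough illustration of what the flush is for (a simplified sketch; the helper
below is hypothetical, while vaddr, PAGE_SIZE and drm_clflush_virt_range() are the
names used in the hunk further down): the PTEs are written through a CPU mapping of
the page-table page, so the dirtied cache lines must be flushed before the GPU walks
the page table.

/* Hypothetical helper showing the write-then-flush pattern this patch
 * adds to xehpsdv_ppgtt_insert_huge(). */
static void example_write_ptes_and_flush(u64 *vaddr, u64 pte,
					 unsigned int index,
					 unsigned int count)
{
	unsigned int i;

	/* PTE stores land in a cacheable CPU mapping of the PT page. */
	for (i = 0; i < count; i++)
		vaddr[index + i] = pte;

	/* Flush those cache lines so the GPU's page-table walk sees the
	 * freshly written entries instead of stale data (drm_cache.h). */
	drm_clflush_virt_range(vaddr, PAGE_SIZE);
}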
Signed-off-by: Jonathan Cavitt <jonathan.cavitt@xxxxxxxxx>
Reviewed-by: Matthew Auld <matthew.auld@xxxxxxxxx>
It looks like the hugepage mock tests are failing with this, though. I
assume the mock device just uses some "max" gen version or so, which now
triggers this path. Any ideas for that?
---
drivers/gpu/drm/i915/gt/gen8_ppgtt.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
index 4daaa6f55668..9c571185395f 100644
--- a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
+++ b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
@@ -570,6 +570,7 @@ xehpsdv_ppgtt_insert_huge(struct i915_address_space *vm,
 			}
 		} while (rem >= page_size && index < max);
 
+		drm_clflush_virt_range(vaddr, PAGE_SIZE);
 		vma_res->page_sizes_gtt |= page_size;
 	} while (iter->sg && sg_dma_len(iter->sg));
 }
@@ -707,7 +708,7 @@ static void gen8_ppgtt_insert(struct i915_address_space *vm,
 	struct sgt_dma iter = sgt_dma(vma_res);
 
 	if (vma_res->bi.page_sizes.sg > I915_GTT_PAGE_SIZE) {
-		if (HAS_64K_PAGES(vm->i915))
+		if (GRAPHICS_VER_FULL(vm->i915) >= IP_VER(12, 50))
 			xehpsdv_ppgtt_insert_huge(vm, vma_res, &iter, cache_level, flags);
 		else
 			gen8_ppgtt_insert_huge(vm, vma_res, &iter, cache_level, flags);