On 02/11/2023 16:58, Andrzej Hajda wrote:
On 02.11.2023 17:06, Radhakrishna Sripada wrote:
Experiments were conducted with different multipliers to the VTD_GUARD
macro; with a multiplier of 185 we were observing occasional pipe faults
when running kms_cursor_legacy --run-subtest single-bo.
There could possibly be an underlying issue that is being investigated;
for now, bump the guard pages for MTL.
Closes: https://gitlab.freedesktop.org/drm/intel/-/issues/2017
Cc: Gustavo Sousa <gustavo.sousa@xxxxxxxxx>
Cc: Chris Wilson <chris.p.wilson@xxxxxxxxxxxxxxx>
Signed-off-by: Radhakrishna Sripada <radhakrishna.sripada@xxxxxxxxx>
---
drivers/gpu/drm/i915/gem/i915_gem_domain.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.c b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
index 3770828f2eaf..b65f84c6bb3f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_domain.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
@@ -456,6 +456,9 @@ i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
 	if (intel_scanout_needs_vtd_wa(i915)) {
 		unsigned int guard = VTD_GUARD;
+		if (IS_METEORLAKE(i915))
+			guard *= 200;
+
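As a rough illustration of the sizing this hunk implies, here is a minimal
user-space sketch, assuming the 168-page, 4K VTD_GUARD figure quoted below;
is_meteorlake stands in for IS_METEORLAKE(i915) and GTT_PAGE_SIZE is a local
stand-in, not the i915 macros themselves:

	#include <stdbool.h>
	#include <stdio.h>

	#define GTT_PAGE_SIZE	4096u			/* 4K pages, as in 200 * 168 * 4K */
	#define VTD_GUARD	(168u * GTT_PAGE_SIZE)	/* 168-page guard quoted in this thread */

	/* Mirrors the guard computation added by the hunk above. */
	static unsigned int scanout_guard(bool is_meteorlake)
	{
		unsigned int guard = VTD_GUARD;

		if (is_meteorlake)
			guard *= 200;	/* multiplier introduced by this patch */

		return guard;
	}

	int main(void)
	{
		unsigned int guard = scanout_guard(true);

		printf("guard, one side:   %u MiB\n", guard >> 20);	/* ~131 MiB */
		printf("guard, both sides: %u MiB\n", (2 * guard) >> 20);	/* ~262 MiB */
		return 0;
	}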
200 * VTD_GUARD = 200 * 168 * 4K = 131MB
Looks insanely high: 131MB for padding, and if this is applied both before
and after it becomes 262MB of wasted address space per plane. Just
signalling; I do not know if this actually hurts.
Yeah, this feels crazy. There must be some other explanation which is
getting hidden by the huge amount of padding, so I'd rather we figured
it out.
With 262MiB per fb, how many fit in the GGTT before eviction hits? N screens
with double/triple buffering?
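A rough estimate, assuming the usual 4 GiB GGTT on these parts: 4096 MiB /
262 MiB is about 15, so only around fifteen such VMAs fit on guard padding
alone, before counting the framebuffer contents; three screens with triple
buffering would already be 9 * 262 MiB, roughly 2.3 GiB, spent on padding.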
Regards,
Tvrtko
P.S. Where did the 185 in the commit message come from?