On 02/12/2024 11:43, Jani Nikula wrote:
On Tue, 26 Nov 2024, Matthew Brost <matthew.brost@xxxxxxxxx> wrote:
Don't open code the vmap of a BO; use the ttm_bo_access() helper, which is safe for
non-contiguous BOs and non-visible BOs.
Suggested-by: Matthew Auld <matthew.auld@xxxxxxxxx>
Signed-off-by: Matthew Brost <matthew.brost@xxxxxxxxx>
Reviewed-by: Matthew Auld <matthew.auld@xxxxxxxxx>
I've seen a few cases of [1] lately, and Thomas tipped me off to this
change. We get:
<4> [374.262965] xe 0000:03:00.0: [drm] drm_WARN_ON(ret)
<4> [374.262983] WARNING: CPU: 8 PID: 5462 at drivers/gpu/drm/i915/display/intel_display.c:7637 intel_atomic_commit_tail+0x16c7/0x17f0 [xe]
and that's intel_atomic_prepare_plane_clear_colors():
	ret = intel_bo_read_from_page(intel_fb_bo(fb),
				      fb->offsets[cc_plane] + 16,
				      &plane_state->ccval,
				      sizeof(plane_state->ccval));

	/* The above could only fail if the FB obj has an unexpected backing store type. */
	drm_WARN_ON(&i915->drm, ret);
So I don't have any conclusive evidence, but could this be the reason?
@@ -40,8 +40,13 @@ int intel_bo_fb_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
 int intel_bo_read_from_page(struct drm_gem_object *obj, u64 offset, void *dst, int size)
 {
 	struct xe_bo *bo = gem_to_xe_bo(obj);
+	int ret;
 
-	return ttm_bo_access(&bo->ttm, offset, dst, size, 0);
+	ret = ttm_bo_access(&bo->ttm, offset, dst, size, 0);
+	if (ret == size)
+		ret = 0;
+
+	return ret;
 }
I think we somehow missed that ttm_bo_access() returns @size on success?
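If that is indeed the cause, a slightly more defensive variant of the hunk above (just a sketch, assuming ttm_bo_access() returns the number of bytes copied on success and a negative errno on failure; the -EIO for a short copy is only a placeholder of mine) would keep the 0-on-success contract while also refusing to pass a partial read through as a positive value:

	ret = ttm_bo_access(&bo->ttm, offset, dst, size, 0);
	if (ret < 0)
		return ret;

	/* Don't let a short copy leak out as a positive return value. */
	return ret == size ? 0 : -EIO;

Either variant preserves the 0-on-success behaviour that callers such as intel_atomic_prepare_plane_clear_colors() expect.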
BR,
Jani.
[1] https://intel-gfx-ci.01.org/tree/intel-xe/xe-pw-138070v8/shard-dg2-434/igt@kms_flip_tiling@flip-change-tiling@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
---
drivers/gpu/drm/xe/display/intel_bo.c | 25 +------------------------
1 file changed, 1 insertion(+), 24 deletions(-)
diff --git a/drivers/gpu/drm/xe/display/intel_bo.c b/drivers/gpu/drm/xe/display/intel_bo.c
index 9f54fad0f1c0..43141964f6f2 100644
--- a/drivers/gpu/drm/xe/display/intel_bo.c
+++ b/drivers/gpu/drm/xe/display/intel_bo.c
@@ -40,31 +40,8 @@ int intel_bo_fb_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
 int intel_bo_read_from_page(struct drm_gem_object *obj, u64 offset, void *dst, int size)
 {
 	struct xe_bo *bo = gem_to_xe_bo(obj);
-	struct ttm_bo_kmap_obj map;
-	void *src;
-	bool is_iomem;
-	int ret;
 
-	ret = xe_bo_lock(bo, true);
-	if (ret)
-		return ret;
-
-	ret = ttm_bo_kmap(&bo->ttm, offset >> PAGE_SHIFT, 1, &map);
-	if (ret)
-		goto out_unlock;
-
-	offset &= ~PAGE_MASK;
-	src = ttm_kmap_obj_virtual(&map, &is_iomem);
-	src += offset;
-	if (is_iomem)
-		memcpy_fromio(dst, (void __iomem *)src, size);
-	else
-		memcpy(dst, src, size);
-
-	ttm_bo_kunmap(&map);
-out_unlock:
-	xe_bo_unlock(bo);
-	return ret;
+	return ttm_bo_access(&bo->ttm, offset, dst, size, 0);
 }
 
 struct intel_frontbuffer *intel_bo_get_frontbuffer(struct drm_gem_object *obj)