On Tue, Oct 22, 2024 at 09:43:35AM +0100, Matthew Auld wrote:
> On 21/10/2024 22:18, Matthew Brost wrote:
> > Don't open code vmap of a BO, use ttm_bo_access helper which is safe for
> > non-contiguous BOs and non-visible BOs.
> > 
> > Suggested-by: Matthew Auld <matthew.auld@xxxxxxxxx>
> > Signed-off-by: Matthew Brost <matthew.brost@xxxxxxxxx>
> 
> I guess needs fixes tag?
> 

I don't know enough about display to say for sure, but it is possible /
likely that FB BOs met the requirements for ttm_bo_kmap to work. Regardless,
it is safer this way, so I suppose I'll add a Fixes tag here.

Matt

> With that,
> Reviewed-by: Matthew Auld <matthew.auld@xxxxxxxxx>
> 
> > ---
> >   drivers/gpu/drm/xe/display/intel_bo.c | 25 +------------------------
> >   1 file changed, 1 insertion(+), 24 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/xe/display/intel_bo.c b/drivers/gpu/drm/xe/display/intel_bo.c
> > index 9f54fad0f1c0..43141964f6f2 100644
> > --- a/drivers/gpu/drm/xe/display/intel_bo.c
> > +++ b/drivers/gpu/drm/xe/display/intel_bo.c
> > @@ -40,31 +40,8 @@ int intel_bo_fb_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> >   int intel_bo_read_from_page(struct drm_gem_object *obj, u64 offset, void *dst, int size)
> >   {
> >   	struct xe_bo *bo = gem_to_xe_bo(obj);
> > -	struct ttm_bo_kmap_obj map;
> > -	void *src;
> > -	bool is_iomem;
> > -	int ret;
> >  
> > -	ret = xe_bo_lock(bo, true);
> > -	if (ret)
> > -		return ret;
> > -
> > -	ret = ttm_bo_kmap(&bo->ttm, offset >> PAGE_SHIFT, 1, &map);
> > -	if (ret)
> > -		goto out_unlock;
> > -
> > -	offset &= ~PAGE_MASK;
> > -	src = ttm_kmap_obj_virtual(&map, &is_iomem);
> > -	src += offset;
> > -	if (is_iomem)
> > -		memcpy_fromio(dst, (void __iomem *)src, size);
> > -	else
> > -		memcpy(dst, src, size);
> > -
> > -	ttm_bo_kunmap(&map);
> > -out_unlock:
> > -	xe_bo_unlock(bo);
> > -	return ret;
> > +	return ttm_bo_access(&bo->ttm, offset, dst, size, 0);
> >   }
> >  
> >   struct intel_frontbuffer *intel_bo_get_frontbuffer(struct drm_gem_object *obj)
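
For anyone skimming the thread, a rough sketch of the call shape on the
caller side after this change. Only the ttm_bo_access(&bo->ttm, offset,
dst, size, 0) call itself comes from the patch above; the wrapper function
and the header paths below are made up for illustration, and the exact
prototype is the one added by the ttm_bo_access patch earlier in the series:

/* Illustrative only -- hypothetical wrapper, not part of the patch. */
#include <drm/ttm/ttm_bo.h>	/* ttm_bo_access() declaration (assumed path) */
#include "xe_bo.h"		/* struct xe_bo, gem_to_xe_bo() (assumed path) */

static int read_bo_bytes(struct xe_bo *bo, u64 offset, void *dst, int size)
{
	/*
	 * The replacement drops the open-coded xe_bo_lock() / ttm_bo_kmap()
	 * path, so the helper is expected to handle BO locking internally
	 * and, per the commit message, non-contiguous and CPU-invisible
	 * placements as well. The final 0 presumably selects a read
	 * (no write) into dst.
	 */
	return ttm_bo_access(&bo->ttm, offset, dst, size, 0);
}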