On 11/04/2010 01:03 AM, Ben Skeggs wrote:
From: Ben Skeggs <bskeggs@xxxxxxxxxx>
If the driver kmaps an object that userspace expects to be mapped, the
subsequent kunmap would have called down into the driver's io_unreserve()
function and potentially unmapped the pages from its BARs (for example),
leaving them inaccessible through the userspace mapping.
Signed-off-by: Ben Skeggs <bskeggs@xxxxxxxxxx>
---
drivers/gpu/drm/ttm/ttm_bo_util.c | 14 ++++++++++----
include/drm/ttm/ttm_bo_api.h | 1 +
2 files changed, 11 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
index ff358ad..e9dbe8b 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
@@ -467,9 +467,12 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
 	if (num_pages > 1 && !DRM_SUSER(DRM_CURPROC))
 		return -EPERM;
 #endif
-	ret = ttm_mem_io_reserve(bo->bdev, &bo->mem);
-	if (ret)
-		return ret;
+	if (!bo->mem.bus.io_reserved) {
+		ret = ttm_mem_io_reserve(bo->bdev, &bo->mem);
+		if (ret)
+			return ret;
+		map->io_reserved = true;
+	}
 	if (!bo->mem.bus.is_iomem) {
 		return ttm_bo_kmap_ttm(bo, start_page, num_pages, map);
 	} else {
@@ -487,7 +490,10 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
 	switch (map->bo_kmap_type) {
 	case ttm_bo_map_iomap:
 		iounmap(map->virtual);
-		ttm_mem_io_free(map->bo->bdev, &map->bo->mem);
+		if (map->io_reserved) {
+			ttm_mem_io_free(map->bo->bdev, &map->bo->mem);
+			map->io_reserved = false;
+		}
 		break;
 	case ttm_bo_map_vmap:
 		vunmap(map->virtual);
diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
index 5afa5b5..ce998ac 100644
--- a/include/drm/ttm/ttm_bo_api.h
+++ b/include/drm/ttm/ttm_bo_api.h
@@ -300,6 +300,7 @@ struct ttm_bo_kmap_obj {
 		ttm_bo_map_premapped = 4 | TTM_BO_MAP_IOMEM_MASK,
 	} bo_kmap_type;
 	struct ttm_buffer_object *bo;
+	bool io_reserved;
 };
 
 /**
This doesn't solve the problem, unfortunately. Consider the following
sequence (sketched in code below):

  kmap()              -> io_mem_reserve()
  fault()
  kunmap()            -> io_mem_free()
  user_space_access() -> invalid
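
To make that interleaving concrete, here is a rough sketch of the kernel-side
calls with the patch above applied. It is illustrative only: race_sketch() is
a made-up function, and the fault()/user_space_access() steps are comments
describing what the VM path is assumed to do, not code from the patch.

#include <drm/ttm/ttm_bo_api.h>

static void race_sketch(struct ttm_buffer_object *bo)
{
	struct ttm_bo_kmap_obj map;

	/* 1. kmap: io space not yet reserved, so ttm_mem_io_reserve() is
	 *    called and map.io_reserved is set. */
	if (ttm_bo_kmap(bo, 0, bo->num_pages, &map))
		return;

	/* 2. fault(): a userspace access faults and the object's current
	 *    BAR pages get mapped into the process' address space. */

	/* 3. kunmap: map.io_reserved is set, so ttm_mem_io_free() runs and
	 *    the driver may unmap the pages from its BAR again. */
	ttm_bo_kunmap(&map);

	/* 4. user_space_access(): the PTEs installed in step 2 still point
	 *    at BAR space the driver no longer backs -> invalid. */
}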
I think this needs to be fixed by maintaining a mem::io_reserved_count, where
all user-space-triggered io_reserves together count as 1. A
mem::user_space_io_reserved flag could be protected by the bo::reserve lock,
whereas a reserved_count can't be, since strictly speaking you're allowed to
kmap a bo without reserving it, but only if it's pinned.
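
A rough sketch of that counting scheme, just to pin the idea down; the
io_reserve_state struct and the io_reserve_get()/io_reserve_put() helpers are
invented for illustration (they are not existing TTM interfaces), while
io_mem_reserve/io_mem_free are the existing driver callbacks:

#include <drm/ttm/ttm_bo_driver.h>

/* Hypothetical per-memory-region state, e.g. hung off struct ttm_mem_reg. */
struct io_reserve_state {
	int  io_reserved_count;	/* all outstanding reserves */
	bool io_reserved_vm;	/* userspace mappings hold exactly one of them */
};

/* Kernel kmaps each take one reference; all userspace faults share one. */
static int io_reserve_get(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem,
			  struct io_reserve_state *st, bool user_space)
{
	int ret = 0;

	if (user_space && st->io_reserved_vm)
		return 0;

	if (st->io_reserved_count++ == 0 && bdev->driver->io_mem_reserve)
		ret = bdev->driver->io_mem_reserve(bdev, mem);

	if (ret)
		st->io_reserved_count--;
	else if (user_space)
		st->io_reserved_vm = true;

	return ret;
}

/* The userspace reference is dropped only when the VM mappings are torn
 * down (unmap_mapping_range() time), not on every kunmap. */
static void io_reserve_put(struct ttm_bo_device *bdev, struct ttm_mem_reg *mem,
			   struct io_reserve_state *st, bool user_space)
{
	if (user_space) {
		if (!st->io_reserved_vm)
			return;
		st->io_reserved_vm = false;
	}

	if (--st->io_reserved_count == 0 && bdev->driver->io_mem_free)
		bdev->driver->io_mem_free(bdev, mem);
}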
/Thomas