On 09.02.22 at 11:09, Matthew Auld wrote:
On Wed, 9 Feb 2022 at 08:41, Christian König
<ckoenig.leichtzumerken@xxxxxxxxx> wrote:
This way we finally fix the problem that new resources are
not immediately evictable after allocation.
That has caused numerous problems, including OOM on GDS handling
and not being able to use TTM as a general resource manager.
v2: stop assuming in ttm_resource_fini that res->bo is still valid.
v3: cleanup kerneldoc, add more lockdep annotation
Signed-off-by: Christian König <christian.koenig@xxxxxxx>
Tested-by: Bas Nieuwenhuizen <bas@xxxxxxxxxxxxxxxxxxx>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 8 +-
drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 2 +-
drivers/gpu/drm/ttm/ttm_bo.c | 115 ++--------------------
drivers/gpu/drm/ttm/ttm_bo_util.c | 1 -
drivers/gpu/drm/ttm/ttm_device.c | 64 ++++++-------
drivers/gpu/drm/ttm/ttm_resource.c | 122 +++++++++++++++++++++++-
include/drm/ttm/ttm_bo_api.h | 16 ----
include/drm/ttm/ttm_bo_driver.h | 29 +-----
include/drm/ttm/ttm_resource.h | 35 +++++++
9 files changed, 197 insertions(+), 195 deletions(-)
<snip>
/**
* ttm_resource_init - resource object constructor
* @bo: buffer object this resource is allocated for
@@ -52,10 +156,12 @@ void ttm_resource_init(struct ttm_buffer_object *bo,
res->bus.is_iomem = false;
res->bus.caching = ttm_cached;
res->bo = bo;
+ INIT_LIST_HEAD(&res->lru);
man = ttm_manager_type(bo->bdev, place->mem_type);
spin_lock(&bo->bdev->lru_lock);
man->usage += bo->base.size;
+ ttm_resource_move_to_lru_tail(res, NULL);
spin_unlock(&bo->bdev->lru_lock);
}
EXPORT_SYMBOL(ttm_resource_init);
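
(Aside, not part of the patch: a rough sketch of how a resource manager
backend would pair these two calls under the new contract; the function
names are made up. ttm_resource_init() now accounts the usage and links
the resource onto the LRU under bdev->lru_lock, so it is evictable as
soon as it is allocated, and ttm_resource_fini() unlinks it again before
the backend frees the structure.)

static int sketch_man_alloc(struct ttm_resource_manager *man,
			    struct ttm_buffer_object *bo,
			    const struct ttm_place *place,
			    struct ttm_resource **res)
{
	struct ttm_resource *r = kzalloc(sizeof(*r), GFP_KERNEL);

	if (!r)
		return -ENOMEM;

	/* accounts the usage and adds r to the LRU tail */
	ttm_resource_init(bo, place, r);
	*res = r;
	return 0;
}

static void sketch_man_free(struct ttm_resource_manager *man,
			    struct ttm_resource *res)
{
	/* drops the usage accounting and removes res from the LRU */
	ttm_resource_fini(man, res);
	kfree(res);
}
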
@@ -66,15 +172,21 @@ EXPORT_SYMBOL(ttm_resource_init);
* @res: the resource to clean up
*
* Should be used by resource manager backends to clean up the TTM resource
- * objects before freeing the underlying structure. Counterpart of
- * &ttm_resource_init
+ * objects before freeing the underlying structure. Makes sure the resource is
+ * removed from the LRU before destruction.
+ * Counterpart of &ttm_resource_init.
*/
void ttm_resource_fini(struct ttm_resource_manager *man,
struct ttm_resource *res)
{
- spin_lock(&man->bdev->lru_lock);
- man->usage -= res->bo->base.size;
- spin_unlock(&man->bdev->lru_lock);
+ struct ttm_device *bdev = man->bdev;
+
+ spin_lock(&bdev->lru_lock);
+ list_del_init(&res->lru);
+ if (res->bo && bdev->funcs->del_from_lru_notify)
+ bdev->funcs->del_from_lru_notify(res->bo);
+ man->usage -= res->num_pages << PAGE_SHIFT;
Above we are using bo->base.size for tracking usage, but here we
are using num_pages. Is it guaranteed that bo->base.size is always
page aligned? Do we need some kind of WARN_ON()? Perhaps also sanity
check that usage == 0 when tearing down the man?
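Something like the below maybe, completely untested and just to
illustrate what I mean (where exactly the teardown check should live is
up to you):

	/* in ttm_resource_init(), catch sizes that aren't page aligned */
	WARN_ON(!IS_ALIGNED(bo->base.size, PAGE_SIZE));

	/* on manager teardown, the accounting should be back at zero */
	spin_lock(&man->bdev->lru_lock);
	WARN_ON(man->usage);
	spin_unlock(&man->bdev->lru_lock);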
Good point, going to add that and consistently use res->num_pages for
both.
Thanks,
Christian.