We should always be explicit and allocate a fence slot before adding a
new fence.

Signed-off-by: Matthew Auld <matthew.auld@xxxxxxxxx>
Cc: Thomas Hellström <thomas.hellstrom@xxxxxxxxxxxxxxx>
Cc: Lionel Landwerlin <lionel.g.landwerlin@xxxxxxxxx>
Cc: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxxxxxxxx>
Cc: Jon Bloomfield <jon.bloomfield@xxxxxxxxx>
Cc: Daniel Vetter <daniel.vetter@xxxxxxxx>
Cc: Jordan Justen <jordan.l.justen@xxxxxxxxx>
Cc: Kenneth Graunke <kenneth@xxxxxxxxxxxxx>
Cc: Akeem G Abodunrin <akeem.g.abodunrin@xxxxxxxxx>
Reviewed-by: Thomas Hellström <thomas.hellstrom@xxxxxxxxxxxxxxx>
---
 drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
index 388c85b0f764..da28acb78a88 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
@@ -1224,8 +1224,10 @@ static int __igt_mmap_migrate(struct intel_memory_region **placements,
 					  expand32(POISON_INUSE), &rq);
 	i915_gem_object_unpin_pages(obj);
 	if (rq) {
-		dma_resv_add_fence(obj->base.resv, &rq->fence,
-				   DMA_RESV_USAGE_KERNEL);
+		err = dma_resv_reserve_fences(obj->base.resv, 1);
+		if (!err)
+			dma_resv_add_fence(obj->base.resv, &rq->fence,
+					   DMA_RESV_USAGE_KERNEL);
 		i915_request_put(rq);
 	}
 	i915_gem_object_unlock(obj);
-- 
2.36.1
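
For context (not part of the patch itself): below is a minimal sketch of
the reserve-then-add pattern the fix follows, assuming the caller does
not yet hold the reservation lock and already owns a reference to the
fence. The helper name add_kernel_fence() and its arguments are
illustrative only, not an existing i915 or dma-buf API.

#include <linux/dma-fence.h>
#include <linux/dma-resv.h>

/* Illustrative helper: explicitly reserve one fence slot, then add the fence. */
static int add_kernel_fence(struct dma_resv *resv, struct dma_fence *fence)
{
	int err;

	/* Take the reservation (ww_mutex) lock; NULL means no acquire context. */
	err = dma_resv_lock(resv, NULL);
	if (err)
		return err;

	/* Reserve a slot up front; dma_resv_add_fence() itself must not allocate. */
	err = dma_resv_reserve_fences(resv, 1);
	if (!err)
		dma_resv_add_fence(resv, fence, DMA_RESV_USAGE_KERNEL);

	dma_resv_unlock(resv);
	return err;
}

Usage mirrors the selftest hunk above: reserve while the object is
locked, add the fence only if the reservation succeeded, then drop the
request reference and unlock.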