On 10/12/20 4:46 PM, Maarten Lankhorst wrote:
Instead of doing what we do currently, which will never work with
PROVE_LOCKING, do the same as AMD does, and something similar to the
relocation slowpath. When all locks are dropped, we acquire the
pages for pinning. When the locks are taken, we transfer those
pages in .get_pages() to the bo. As a final check before installing
the fences, we ensure that the mmu notifier was not called; if it
was, we return -EAGAIN to userspace to signal it has to start over.
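For reference, a minimal sketch of that pin-then-recheck pattern using the
generic mmu_interval_* helpers; the function name and the unwinding here are
illustrative only, not the actual i915 code:

#include <linux/mm.h>
#include <linux/mmu_notifier.h>

/*
 * Illustrative sketch: pin the user pages while no object locks are
 * held, then detect a notifier invalidation just before the fences
 * would be installed and ask userspace to restart with -EAGAIN.
 */
static int userptr_pin_and_check(struct mmu_interval_notifier *mni,
				 unsigned long start, int npages,
				 struct page **pages)
{
	unsigned long seq;
	int pinned;

	/* Sample the notifier sequence before touching the pages. */
	seq = mmu_interval_read_begin(mni);

	/* No ww/object locks are held here, so GUP may fault and sleep. */
	pinned = pin_user_pages_fast(start, npages, FOLL_WRITE, pages);
	if (pinned < 0)
		return pinned;
	if (pinned != npages) {
		unpin_user_pages(pages, pinned);
		return -EFAULT;
	}

	/* ... take the object locks, hand the pages over in .get_pages() ... */

	/* Final check before the fences are published. */
	if (mmu_interval_read_retry(mni, seq)) {
		unpin_user_pages(pages, npages);
		return -EAGAIN;		/* userspace restarts the execbuf */
	}

	return 0;
}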
Changes since v1:
- Unbinding is done in submit_init only. submit_begin() removed.
Perhaps an early check for invalidation instead of submit_init?
- MMU_NOTFIER -> MMU_NOTIFIER
...
- ret = i915_gem_object_unbind(obj,
- I915_GEM_OBJECT_UNBIND_ACTIVE |
- I915_GEM_OBJECT_UNBIND_BARRIER);
- if (ret == 0)
- ret = __i915_gem_object_put_pages(obj);
- i915_gem_object_put(obj);
- if (ret)
- return ret;
+ mutex_lock(&i915->mm.notifier_lock);
- spin_lock(&mn->lock);
+ mmu_interval_set_seq(mni, cur_seq);
- /*
- * As we do not (yet) protect the mmu from concurrent insertion
- * over this range, there is no guarantee that this search will
- * terminate given a pathologic workload.
- */
- it = interval_tree_iter_first(&mn->objects, range->start, end);
- }
- spin_unlock(&mn->lock);
+ mutex_unlock(&i915->mm.notifier_lock);
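For context, the invalidate callback on the notifier side then roughly looks
like the sketch below; the struct name and the lock placement are illustrative
stand-ins, not the actual patch:

#include <linux/mmu_notifier.h>
#include <linux/mutex.h>

struct my_userptr {				/* illustrative stand-in */
	struct mmu_interval_notifier notifier;
	struct mutex notifier_lock;
};

static bool userptr_invalidate(struct mmu_interval_notifier *mni,
			       const struct mmu_notifier_range *range,
			       unsigned long cur_seq)
{
	struct my_userptr *p = container_of(mni, struct my_userptr, notifier);

	if (!mmu_notifier_range_blockable(range))
		return false;

	mutex_lock(&p->notifier_lock);
	/* Bumping the seq makes the execbuf-side retry check fail. */
	mmu_interval_set_seq(mni, cur_seq);
	mutex_unlock(&p->notifier_lock);

	/* ... wait for or cancel work still using the old pages ... */
	return true;
}

static const struct mmu_interval_notifier_ops userptr_notifier_ops = {
	.invalidate = userptr_invalidate,
};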
Could we perhaps make use of the fact that we attach the fence before
checking for invalidation, and thus ditch the per-mm mm.notifier_lock
completely? That is:
execbuf:
    attach_fence();
    smp_mb();
    check_seqno_for_invalidation();

notifier:
    set_seqno_for_invalidation();
    smp_mb();
    wait_for_fence();
The other stuff protected by that lock appears to be per-object only and
could be kept under a per-object spinlock or perhaps a mutex.
obj->userptr.lock?
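Roughly, the barrier pairing could look like the sketch below; the struct and
function names are made up for illustration, only the publication order of the
fence and the seqno matters:

#include <linux/dma-fence.h>

struct userptr_state {			/* illustrative stand-in */
	unsigned long notifier_seq;	/* bumped by the notifier */
	struct dma_fence *fence;	/* stands in for the resv fence */
};

/* execbuf: publish the fence first, then sample the seqno */
static int execbuf_publish_and_check(struct userptr_state *s,
				     struct dma_fence *fence,
				     unsigned long seq_at_pin)
{
	WRITE_ONCE(s->fence, fence);		/* attach_fence() */
	smp_mb();				/* pairs with the notifier barrier */
	if (READ_ONCE(s->notifier_seq) != seq_at_pin)
		return -EAGAIN;			/* check_seqno_for_invalidation() */
	return 0;
}

/* notifier: bump the seqno first, then wait on any visible fence */
static void notifier_invalidate(struct userptr_state *s, unsigned long cur_seq)
{
	struct dma_fence *f;

	WRITE_ONCE(s->notifier_seq, cur_seq);	/* set_seqno_for_invalidation() */
	smp_mb();				/* pairs with the execbuf barrier */
	f = READ_ONCE(s->fence);		/* wait_for_fence() */
	if (f)
		dma_fence_wait(f, false);
}

/*
 * With the two barriers, at least one side observes the other: either
 * execbuf sees the new seqno and returns -EAGAIN, or the notifier sees
 * the already-attached fence and waits for it.
 */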
/Thomas