As paranoia, we want to ensure that the CPU's PTEs have been revoked for
the object before we return from i915_gem_release_mmap(). This allows us
to rely on there being no outstanding memory accesses and guarantees
serialisation of the code against concurrent access simply by calling
i915_gem_release_mmap().

v2: Reduce the mb() into a wmb() following the revoke.

Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
Cc: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxxxxxxxx>
Cc: "Goel, Akash" <akash.goel@xxxxxxxxx>
Cc: Daniel Vetter <daniel.vetter@xxxxxxxx>
Reviewed-by: Daniel Vetter <daniel.vetter@xxxxxxxx>
---
 drivers/gpu/drm/i915/i915_gem.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 6c60e04fc09c..3ab529669448 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1962,11 +1962,21 @@ out:
 void
 i915_gem_release_mmap(struct drm_i915_gem_object *obj)
 {
+	/* Serialisation between user GTT access and our code depends upon
+	 * revoking the CPU's PTE whilst the mutex is held. The next user
+	 * pagefault then has to wait until we release the mutex.
+	 */
+	lockdep_assert_held(&obj->base.dev->struct_mutex);
+
 	if (!obj->fault_mappable)
 		return;
 
 	drm_vma_node_unmap(&obj->base.vma_node,
 			   obj->base.dev->anon_inode->i_mapping);
+
+	/* Ensure that the CPU's PTE are revoked before we return */
+	wmb();
+
 	obj->fault_mappable = false;
 }
 
@@ -3269,9 +3279,6 @@ static void i915_gem_object_finish_gtt(struct drm_i915_gem_object *obj)
 	if ((obj->base.read_domains & I915_GEM_DOMAIN_GTT) == 0)
 		return;
 
-	/* Wait for any direct GTT access to complete */
-	mb();
-
 	old_read_domains = obj->base.read_domains;
 	old_write_domain = obj->base.write_domain;
 
-- 
2.7.0.rc3
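
For reference, a minimal sketch (hypothetical, not part of the patch) of
the calling pattern the guarantee serves: with struct_mutex held across
the revoke, any fresh pagefault on the mapping must block on the mutex,
so once i915_gem_release_mmap() returns the caller may assume no user
GTT access is in flight. The function name and the elided eviction step
below are placeholders.

/*
 * Hypothetical caller sketch (illustrative only, not from this patch):
 * how the serialisation guarantee is consumed.
 */
static void example_evict(struct drm_i915_gem_object *obj)
{
	struct drm_device *dev = obj->base.dev;

	mutex_lock(&dev->struct_mutex);

	/* Revokes the CPU's PTEs for the object; the wmb() inside
	 * ensures the unmap is visible before we proceed.
	 */
	i915_gem_release_mmap(obj);

	/* Any new fault on the revoked mapping now blocks on
	 * struct_mutex, so no userspace GTT access can race with
	 * whatever we do to the object's pages here.
	 */
	/* ... safe to move or evict the backing storage ... */

	mutex_unlock(&dev->struct_mutex);
}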