From: Russell King <rmk+kernel@xxxxxxxxxxxxxxxx>

Reduce the exposure of struct_mutex to an absolute minimum.  We only
need to take this lock over the call to etnaviv_gem_get_pages(), as
vm_insert_page() already does a properly synchronised update of the
PTE (using the pte table lock).  Reducing the code covered by this
lock allows for more parallel execution in SMP environments.

Signed-off-by: Russell King <rmk+kernel@xxxxxxxxxxxxxxxx>
---
 drivers/staging/etnaviv/etnaviv_gem.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/drivers/staging/etnaviv/etnaviv_gem.c b/drivers/staging/etnaviv/etnaviv_gem.c
index 69360c899b78..522e5a6c3612 100644
--- a/drivers/staging/etnaviv/etnaviv_gem.c
+++ b/drivers/staging/etnaviv/etnaviv_gem.c
@@ -221,8 +221,10 @@ int etnaviv_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 	pgoff_t pgoff;
 	int ret;
 
-	/* Make sure we don't parallel update on a fault, nor move or remove
-	 * something from beneath our feet
+	/*
+	 * Make sure we don't parallel update on a fault, nor move or remove
+	 * something from beneath our feet. Note that vm_insert_page() is
+	 * specifically coded to take care of this, so we don't have to.
 	 */
 	ret = mutex_lock_interruptible(&dev->struct_mutex);
 	if (ret)
@@ -230,9 +232,11 @@ int etnaviv_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 
 	/* make sure we have pages attached now */
 	pages = etnaviv_gem_get_pages(to_etnaviv_bo(obj));
+	mutex_unlock(&dev->struct_mutex);
+
 	if (IS_ERR(pages)) {
 		ret = PTR_ERR(pages);
-		goto out_unlock;
+		goto out;
 	}
 
 	/* We don't use vmf->pgoff since that has the fake offset: */
@@ -246,8 +250,6 @@ int etnaviv_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 
 	ret = vm_insert_page(vma, (unsigned long)vmf->virtual_address, page);
 
-out_unlock:
-	mutex_unlock(&dev->struct_mutex);
 out:
 	switch (ret) {
 	case -EAGAIN:
-- 
2.5.1
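
For reference, this is roughly how the fault path reads once the lock is
narrowed.  It is a sketch pieced together from the hunks above, not a
verbatim copy of the driver; the pgoff/page lookup between the two calls
is untouched by the patch and elided here:

	/* struct_mutex now only guards populating the page array. */
	ret = mutex_lock_interruptible(&dev->struct_mutex);
	if (ret)
		goto out;

	/* make sure we have pages attached now */
	pages = etnaviv_gem_get_pages(to_etnaviv_bo(obj));
	mutex_unlock(&dev->struct_mutex);

	if (IS_ERR(pages)) {
		ret = PTR_ERR(pages);
		goto out;
	}

	/* ... compute pgoff and pick page from pages[] (elided) ... */

	/* vm_insert_page() serialises the PTE update via the pte table lock. */
	ret = vm_insert_page(vma, (unsigned long)vmf->virtual_address, page);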