On Sat, Jun 15, 2019 at 07:14:35AM -0700, Christoph Hellwig wrote:
> >  	mutex_lock(&hmm->lock);
> > -	list_for_each_entry(range, &hmm->ranges, list)
> > -		range->valid = false;
> > -	wake_up_all(&hmm->wq);
> > +	/*
> > +	 * Since hmm_range_register() holds the mmget() lock hmm_release() is
> > +	 * prevented as long as a range exists.
> > +	 */
> > +	WARN_ON(!list_empty(&hmm->ranges));
> >  	mutex_unlock(&hmm->lock);
> 
> This can just use list_empty_careful and avoid the lock entirely.

Sure, it is just a debugging helper, and the mmput should serialize things
enough to be reliable. I had to move the RCU patch ahead of this.

Thanks

diff --git a/mm/hmm.c b/mm/hmm.c
index a9ace28984ea42..1eddda45cefae7 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -124,13 +124,11 @@ static void hmm_release(struct mmu_notifier *mn, struct mm_struct *mm)
 	if (!kref_get_unless_zero(&hmm->kref))
 		return;
 
-	mutex_lock(&hmm->lock);
 	/*
 	 * Since hmm_range_register() holds the mmget() lock hmm_release() is
 	 * prevented as long as a range exists.
 	 */
-	WARN_ON(!list_empty(&hmm->ranges));
-	mutex_unlock(&hmm->lock);
+	WARN_ON(!list_empty_careful(&hmm->ranges));
 
 	down_write(&hmm->mirrors_sem);
 	mirror = list_first_entry_or_null(&hmm->mirrors, struct hmm_mirror,
@@ -938,7 +936,7 @@ void hmm_range_unregister(struct hmm_range *range)
 		return;
 
 	mutex_lock(&hmm->lock);
-	list_del(&range->list);
+	list_del_init(&range->list);
 	mutex_unlock(&hmm->lock);
 
 	/* Drop reference taken by hmm_range_register() */