[PATCH] hugetlb: fix locking in region_add, region_chg, allocate_file_region_entries

commit 0db9d74ed884 ("hugetlb: disable region_add file_region coalescing")
introduced three locking errors:

- Missing spin_unlock() in hugetlb.c:region_add and hugetlb.c:region_chg
  when returning on an error path.
- Missing spin_lock() in hugetlb.c:allocate_file_region_entries
  when returning on an error path.

The first two errors were spotted using the Coccinelle static code
analysis tool, focusing on the mini_lock.cocci script,
whose goal is to find missing unlocks:

make coccicheck MODE=report M=mm/hugetlb.c

    mm/hugetlb.c:514:3-9: preceding lock on line 487
    mm/hugetlb.c:564:2-8: preceding lock on line 554

The third instance was spotted by manual examination.

In static long region_add(...) and static long region_chg(...), the release
of the acquired lock when exiting via the error path had been removed.
This causes these functions to return with the lock still held when they
do not succeed.

This patch reintroduces the original error paths, making sure the lock is
properly released on all exits.

The commit also introduced a new function, allocate_file_region_entries,
which must be called with the lock held and must return with it held.
However, the lock is temporarily released during processing and was not
reacquired on the error path.

This patch ensures that the lock is reacquired on the error path as well.

Fixes: 0db9d74ed884  ("hugetlb: disable region_add file_region coalescing")
Link: https://lists.elisa.tech/g/development-process/message/289
Signed-off-by: Laurent Cremmer <laurent@xxxxxxxxxxxxxxxxxx>
Reviewed-by: Oliver Hartkopp <hartkopp@xxxxxxxxxxxxxxxxxx>
---
 mm/hugetlb.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index fe76f8fd5a73..92bea6f77361 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -438,7 +438,7 @@ static int allocate_file_region_entries(struct resv_map *resv,
 		for (i = 0; i < to_allocate; i++) {
 			trg = kmalloc(sizeof(*trg), GFP_KERNEL);
 			if (!trg)
-				goto out_of_memory;
+				goto out_of_memory_unlocked;
 			list_add(&trg->link, &allocated_regions);
 		}
 
@@ -450,7 +450,8 @@ static int allocate_file_region_entries(struct resv_map *resv,
 
 	return 0;
 
-out_of_memory:
+out_of_memory_unlocked:
+	spin_lock(&resv->lock);
 	list_for_each_entry_safe(rg, trg, &allocated_regions, link) {
 		list_del(&rg->link);
 		kfree(rg);
@@ -508,7 +509,8 @@ static long region_add(struct resv_map *resv, long f, long t,
 
 		if (allocate_file_region_entries(
 			    resv, actual_regions_needed - in_regions_needed)) {
-			return -ENOMEM;
+			add = -ENOMEM;
+			goto out_locked;
 		}
 
 		goto retry;
@@ -517,7 +519,7 @@ static long region_add(struct resv_map *resv, long f, long t,
 	add = add_reservation_in_range(resv, f, t, h_cg, h, NULL);
 
 	resv->adds_in_progress -= in_regions_needed;
-
+out_locked:
 	spin_unlock(&resv->lock);
 	VM_BUG_ON(add < 0);
 	return add;
@@ -557,11 +559,14 @@ static long region_chg(struct resv_map *resv, long f, long t,
 	if (*out_regions_needed == 0)
 		*out_regions_needed = 1;
 
-	if (allocate_file_region_entries(resv, *out_regions_needed))
-		return -ENOMEM;
+	if (allocate_file_region_entries(resv, *out_regions_needed)) {
+		chg = -ENOMEM;
+		goto out_locked;
+	}
 
 	resv->adds_in_progress += *out_regions_needed;
 
+out_locked:
 	spin_unlock(&resv->lock);
 	return chg;
 }
-- 
2.17.1