[PATCH 1/3] z3fold: avoid subtle race when freeing slots

There is a subtle race between freeing slots and setting the last
slot to zero, because the HANDLES_ORPHANED flag was set only after
the slots rwlock had been released. Fix that to avoid rare memory
leaks caused by this race.
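
To make the window concrete, below is a minimal userspace sketch
(not z3fold code; struct slots_demo, scan_all_free() and the
orphaned/freed fields are made-up stand-ins, and the locking is
reduced to comments) that replays the pre-patch interleaving
sequentially: __release_z3fold_page() scans the slots and drops the
read lock, free_handle() then clears the last slot but still sees
HANDLES_ORPHANED unset, and the slots object is never freed.

	/*
	 * Illustrative only: replays the racy ordering on one CPU so the
	 * leak is deterministic and easy to follow.
	 */
	#include <stdbool.h>
	#include <stdio.h>

	struct slots_demo {
		unsigned long slot[4];
		bool orphaned;	/* stands in for HANDLES_ORPHANED */
		bool freed;
	};

	/* Would run under read_lock(&slots->lock) in the real code. */
	static bool scan_all_free(const struct slots_demo *s)
	{
		for (int i = 0; i < 4; i++)
			if (s->slot[i])
				return false;
		return true;
	}

	int main(void)
	{
		struct slots_demo s = { .slot = { 1, 0, 0, 0 } };
		bool all_free_a, all_free_b;

		/* CPU A: __release_z3fold_page() scans under the read lock ... */
		all_free_a = scan_all_free(&s);		/* false: one handle left */
		/* ... and drops the read lock without having set the flag. */

		/* CPU B: free_handle() releases that last handle. */
		s.slot[0] = 0;
		all_free_b = scan_all_free(&s);		/* true ... */
		if (all_free_b && s.orphaned)		/* old code: test_and_clear_bit() */
			s.freed = true;			/* ... but this never runs */

		/* CPU A: only now sets the orphaned flag, too late. */
		if (!all_free_a)
			s.orphaned = true;

		printf("slots freed: %s\n", s.freed ? "yes" : "no (leaked)");
		return 0;
	}

Setting the bit before read_unlock(), as this patch does, closes the
window: free_handle() cannot observe an empty slot array without also
observing HANDLES_ORPHANED.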

Signed-off-by: Vitaly Wool <vitaly.vul@xxxxxxxx>
---
 mm/z3fold.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/mm/z3fold.c b/mm/z3fold.c
index d48d0ec3bcdd..36bd2612f609 100644
--- a/mm/z3fold.c
+++ b/mm/z3fold.c
@@ -327,6 +327,10 @@ static inline void free_handle(unsigned long handle)
 	zhdr->foreign_handles--;
 	is_free = true;
 	read_lock(&slots->lock);
+	if (!test_bit(HANDLES_ORPHANED, &slots->pool)) {
+		read_unlock(&slots->lock);
+		return;
+	}
 	for (i = 0; i <= BUDDY_MASK; i++) {
 		if (slots->slot[i]) {
 			is_free = false;
@@ -335,7 +339,7 @@ static inline void free_handle(unsigned long handle)
 	}
 	read_unlock(&slots->lock);
 
-	if (is_free && test_and_clear_bit(HANDLES_ORPHANED, &slots->pool)) {
+	if (is_free) {
 		struct z3fold_pool *pool = slots_to_pool(slots);
 
 		kmem_cache_free(pool->c_handle, slots);
@@ -531,12 +535,12 @@ static void __release_z3fold_page(struct z3fold_header *zhdr, bool locked)
 			break;
 		}
 	}
+	if (!is_free)
+		set_bit(HANDLES_ORPHANED, &zhdr->slots->pool);
 	read_unlock(&zhdr->slots->lock);
 
 	if (is_free)
 		kmem_cache_free(pool->c_handle, zhdr->slots);
-	else
-		set_bit(HANDLES_ORPHANED, &zhdr->slots->pool);
 
 	if (locked)
 		z3fold_page_unlock(zhdr);
-- 
2.17.1



