+ z3fold-avoid-subtle-race-when-freeing-slots.patch added to -mm tree

The patch titled
     Subject: z3fold: avoid subtle race when freeing slots
has been added to the -mm tree.  Its filename is
     z3fold-avoid-subtle-race-when-freeing-slots.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/z3fold-avoid-subtle-race-when-freeing-slots.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/z3fold-avoid-subtle-race-when-freeing-slots.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Vitaly Wool <vitalywool@xxxxxxxxx>
Subject: z3fold: avoid subtle race when freeing slots

There is a subtle race between freeing slots and setting the last slot to
zero, because the HANDLES_ORPHANED flag was set only after the rwlock had
been released.  Fix that ordering to avoid rare memory leaks caused by
this race.

Link: http://lkml.kernel.org/r/20191127152118.6314b99074b0626d4c5a8835@xxxxxxxxx
Signed-off-by: Vitaly Wool <vitaly.vul@xxxxxxxx>
Cc: Dan Streetman <ddstreet@xxxxxxxx>
Cc: Henry Burns <henrywolfeburns@xxxxxxxxx>
Cc: Shakeel Butt <shakeelb@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/z3fold.c |   10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

--- a/mm/z3fold.c~z3fold-avoid-subtle-race-when-freeing-slots
+++ a/mm/z3fold.c
@@ -327,6 +327,10 @@ static inline void free_handle(unsigned
 	zhdr->foreign_handles--;
 	is_free = true;
 	read_lock(&slots->lock);
+	if (!test_bit(HANDLES_ORPHANED, &slots->pool)) {
+		read_unlock(&slots->lock);
+		return;
+	}
 	for (i = 0; i <= BUDDY_MASK; i++) {
 		if (slots->slot[i]) {
 			is_free = false;
@@ -335,7 +339,7 @@ static inline void free_handle(unsigned
 	}
 	read_unlock(&slots->lock);
 
-	if (is_free && test_and_clear_bit(HANDLES_ORPHANED, &slots->pool)) {
+	if (is_free) {
 		struct z3fold_pool *pool = slots_to_pool(slots);
 
 		kmem_cache_free(pool->c_handle, slots);
@@ -531,12 +535,12 @@ static void __release_z3fold_page(struct
 			break;
 		}
 	}
+	if (!is_free)
+		set_bit(HANDLES_ORPHANED, &zhdr->slots->pool);
 	read_unlock(&zhdr->slots->lock);
 
 	if (is_free)
 		kmem_cache_free(pool->c_handle, zhdr->slots);
-	else
-		set_bit(HANDLES_ORPHANED, &zhdr->slots->pool);
 
 	if (locked)
 		z3fold_page_unlock(zhdr);
_

Patches currently in -mm which might be from vitalywool@xxxxxxxxx are

z3fold-avoid-subtle-race-when-freeing-slots.patch
z3fold-compact-objects-more-accurately.patch
z3fold-protect-handle-reads.patch



