The patch titled
     Subject: z3fold: fix page locking in z3fold_alloc()
has been added to the -mm tree.  Its filename is
     z3fold-fix-page-locking-in-z3fold_alloc.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/z3fold-fix-page-locking-in-z3fold_alloc.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/z3fold-fix-page-locking-in-z3fold_alloc.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/SubmitChecklist when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Vitaly Wool <vitalywool@xxxxxxxxx>
Subject: z3fold: fix page locking in z3fold_alloc()

Stress testing of the current z3fold implementation on an 8-core system
revealed that a z3fold page deleted from its unbuddied list in
z3fold_alloc() could be put onto another unbuddied list by z3fold_free()
while z3fold_alloc() was still processing it.  This was introduced by
commit 5a27aa822 ("z3fold: add kref refcounting"), which removed the
special handling in z3fold_free() of a z3fold page that is not on any
list.

To fix this, take the z3fold page lock in z3fold_alloc() before the pool
lock is released.  To avoid deadlocks, just try to lock the page as soon
as we get hold of it; if the trylock fails, drop this page and take the
next one.

Signed-off-by: Vitaly Wool <vitalywool@xxxxxxxxx>
Cc: Dan Streetman <ddstreet@xxxxxxxx>
Cc: <Oleksiy.Avramchenko@xxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/z3fold.c |    9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff -puN mm/z3fold.c~z3fold-fix-page-locking-in-z3fold_alloc mm/z3fold.c
--- a/mm/z3fold.c~z3fold-fix-page-locking-in-z3fold_alloc
+++ a/mm/z3fold.c
@@ -185,6 +185,12 @@ static inline void z3fold_page_lock(stru
 	spin_lock(&zhdr->page_lock);
 }
 
+/* Try to lock a z3fold page */
+static inline int z3fold_page_trylock(struct z3fold_header *zhdr)
+{
+	return spin_trylock(&zhdr->page_lock);
+}
+
 /* Unlock a z3fold page */
 static inline void z3fold_page_unlock(struct z3fold_header *zhdr)
 {
@@ -385,7 +391,7 @@ static int z3fold_alloc(struct z3fold_po
 		spin_lock(&pool->lock);
 		zhdr = list_first_entry_or_null(&pool->unbuddied[i],
 						struct z3fold_header, buddy);
-		if (!zhdr) {
+		if (!zhdr || !z3fold_page_trylock(zhdr)) {
 			spin_unlock(&pool->lock);
 			continue;
 		}
@@ -394,7 +400,6 @@ static int z3fold_alloc(struct z3fold_po
 		spin_unlock(&pool->lock);
 
 		page = virt_to_page(zhdr);
-		z3fold_page_lock(zhdr);
 		if (zhdr->first_chunks == 0) {
 			if (zhdr->middle_chunks != 0 &&
 			    chunks >= zhdr->start_middle)
_

Patches currently in -mm which might be from vitalywool@xxxxxxxxx are

z3fold-fix-page-locking-in-z3fold_alloc.patch

--
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
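
For illustration, here is a minimal userspace sketch (pthreads; all names
are invented for this example and are not from mm/z3fold.c) of the locking
pattern the fix applies: trylock the per-object lock while the list lock is
still held, so a concurrent free path cannot put the object back on a list
between unlinking and locking it.

#include <pthread.h>
#include <stddef.h>

struct node {
	struct node *next;
	pthread_mutex_t lock;	/* per-object lock (cf. zhdr->page_lock) */
};

struct pool {
	pthread_mutex_t lock;	/* list lock (cf. pool->lock) */
	struct node *head;	/* head of a free list (cf. unbuddied[i]) */
};

/*
 * Pop the head node and return it locked, or NULL if the list is empty
 * or the head's lock is currently held by someone else.
 */
static struct node *pool_take_locked(struct pool *p)
{
	struct node *n;

	pthread_mutex_lock(&p->lock);
	n = p->head;
	if (!n || pthread_mutex_trylock(&n->lock) != 0) {
		/*
		 * Empty list, or another thread holds the object lock:
		 * back off under the list lock, mirroring the patch's
		 * "if trylock fails, drop this page and take the next one".
		 */
		pthread_mutex_unlock(&p->lock);
		return NULL;
	}
	/* Unlink only after the object lock is held. */
	p->head = n->next;
	n->next = NULL;
	pthread_mutex_unlock(&p->lock);
	return n;	/* caller unlocks n->lock when finished */
}

Taking the trylock before the list lock is dropped is what closes the
window in which the free path could re-list the object; using trylock
rather than a blocking lock avoids deadlock against paths that acquire
the two locks in the opposite order.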