The patch titled
     Subject: z3fold: compact objects more accurately
has been added to the -mm tree.  Its filename is
     z3fold-compact-objects-more-accurately.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/z3fold-compact-objects-more-accurately.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/z3fold-compact-objects-more-accurately.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when
    testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Vitaly Wool <vitalywool@xxxxxxxxx>
Subject: z3fold: compact objects more accurately

There are several small things to be fixed in the new inter-page
compaction mechanism.  First, we should set the relevant size in
chunks to 0 in the old z3fold header for an object that has been
moved to another z3fold page.  Second, we shouldn't do inter-page
compaction while an object is mapped.  Lastly, free_handle() should
happen before release_z3fold_page() (except when the page is under
reclaim, in which case reclaim will free the handle).

This patch addresses all three issues.

Link: http://lkml.kernel.org/r/20191127152216.6ad33745a21ba71c53606acb@xxxxxxxxx
Signed-off-by: Vitaly Wool <vitaly.vul@xxxxxxxx>
Cc: Dan Streetman <ddstreet@xxxxxxxx>
Cc: Henry Burns <henrywolfeburns@xxxxxxxxx>
Cc: Shakeel Butt <shakeelb@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---
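
[Reviewer note, not part of the commit message: the gist of the first
fix, minus the surrounding locking, is the pattern sketched below in
self-contained userspace C.  All names here (toy_header,
toy_move_first) are made up for illustration and are not z3fold's own
structures or functions.]

	/* toy_compact.c -- illustrative sketch only, toy names */
	#include <assert.h>
	#include <stdio.h>

	/* Toy stand-in for a per-page header's size accounting. */
	struct toy_header {
		unsigned short first_chunks;
		unsigned short middle_chunks;
		unsigned short last_chunks;
	};

	/*
	 * Move the "first" object from @old to @new.  Once the object
	 * lives in @new, its size in chunks must be zeroed in @old, or
	 * @old keeps advertising space it no longer owns.
	 */
	static void toy_move_first(struct toy_header *old,
				   struct toy_header *new)
	{
		unsigned short *moved_chunks = &old->first_chunks;

		new->first_chunks = *moved_chunks;
		*moved_chunks = 0;	/* clear the stale accounting */
	}

	int main(void)
	{
		struct toy_header old = { .first_chunks = 4 }, new = { 0 };

		toy_move_first(&old, &new);
		assert(old.first_chunks == 0 && new.first_chunks == 4);
		printf("moved: old=%hu new=%hu\n",
		       old.first_chunks, new.first_chunks);
		return 0;
	}

[Without the final store in toy_move_first(), the old header would
keep reporting chunks it no longer owns, which is the accounting
drift the first fix addresses.]
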

 mm/z3fold.c |   11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

--- a/mm/z3fold.c~z3fold-compact-objects-more-accurately
+++ a/mm/z3fold.c
@@ -670,6 +670,7 @@ static struct z3fold_header *compact_sin
 	int first_idx = __idx(zhdr, FIRST);
 	int middle_idx = __idx(zhdr, MIDDLE);
 	int last_idx = __idx(zhdr, LAST);
+	unsigned short *moved_chunks = NULL;
 
 	/*
 	 * No need to protect slots here -- all the slots are "local" and
@@ -679,14 +680,17 @@ static struct z3fold_header *compact_sin
 		p += ZHDR_SIZE_ALIGNED;
 		sz = zhdr->first_chunks << CHUNK_SHIFT;
 		old_handle = (unsigned long)&zhdr->slots->slot[first_idx];
+		moved_chunks = &zhdr->first_chunks;
 	} else if (zhdr->middle_chunks && zhdr->slots->slot[middle_idx]) {
 		p += zhdr->start_middle << CHUNK_SHIFT;
 		sz = zhdr->middle_chunks << CHUNK_SHIFT;
 		old_handle = (unsigned long)&zhdr->slots->slot[middle_idx];
+		moved_chunks = &zhdr->middle_chunks;
 	} else if (zhdr->last_chunks && zhdr->slots->slot[last_idx]) {
 		p += PAGE_SIZE - (zhdr->last_chunks << CHUNK_SHIFT);
 		sz = zhdr->last_chunks << CHUNK_SHIFT;
 		old_handle = (unsigned long)&zhdr->slots->slot[last_idx];
+		moved_chunks = &zhdr->last_chunks;
 	}
 
 	if (sz > 0) {
@@ -743,6 +747,8 @@ static struct z3fold_header *compact_sin
 		write_unlock(&zhdr->slots->lock);
 		add_to_unbuddied(pool, new_zhdr);
 		z3fold_page_unlock(new_zhdr);
+
+		*moved_chunks = 0;
 	}
 
 	return new_zhdr;
@@ -840,7 +846,7 @@ static void do_compact_page(struct z3fol
 	}
 
 	if (!zhdr->foreign_handles && buddy_single(zhdr) &&
-	    compact_single_buddy(zhdr)) {
+	    zhdr->mapped_count == 0 && compact_single_buddy(zhdr)) {
 		if (kref_put(&zhdr->refcount, release_z3fold_page_locked))
 			atomic64_dec(&pool->pages_nr);
 		else
@@ -1254,6 +1260,8 @@ static void z3fold_free(struct z3fold_po
 		return;
 	}
 
+	if (!page_claimed)
+		free_handle(handle);
 	if (kref_put(&zhdr->refcount, release_z3fold_page_locked_list)) {
 		atomic64_dec(&pool->pages_nr);
 		return;
@@ -1263,7 +1271,6 @@ static void z3fold_free(struct z3fold_po
 		z3fold_page_unlock(zhdr);
 		return;
 	}
-	free_handle(handle);
 	if (unlikely(PageIsolated(page)) ||
 	    test_and_set_bit(NEEDS_COMPACTING, &page->private)) {
 		put_z3fold_header(zhdr);
_

Patches currently in -mm which might be from vitalywool@xxxxxxxxx are

z3fold-avoid-subtle-race-when-freeing-slots.patch
z3fold-compact-objects-more-accurately.patch
z3fold-protect-handle-reads.patch