Let's simplify and reduce code indentation.  In the RMAP_LEVEL_PTE case,
we already check for nr when computing partially_mapped.

For RMAP_LEVEL_PMD, it's a bit more confusing.  Likely, we don't need
the "nr" check, but we could have "nr < nr_pmdmapped" also if we
stumbled into the "/* Raced ahead of another remove and an add? */"
case.  So let's simply move the nr check in there.

Note that partially_mapped is always false for small folios.

No functional change intended.

Cc: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
---
 mm/rmap.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index afec4b6800caf..aa900e46cdf82 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1551,11 +1551,12 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
 		}
 	}
 
-		partially_mapped = nr < nr_pmdmapped;
+		partially_mapped = nr && nr < nr_pmdmapped;
 		break;
 	}
 
-	if (nr) {
+	if (partially_mapped && folio_test_anon(folio) &&
+	    list_empty(&folio->_deferred_list))
 		/*
 		 * Queue anon large folio for deferred split if at least one
		 * page of the folio is unmapped and at least one page
@@ -1563,10 +1564,7 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
 		 *
 		 * Check partially_mapped first to ensure it is a large folio.
 		 */
-		if (folio_test_anon(folio) && partially_mapped &&
-		    list_empty(&folio->_deferred_list))
-			deferred_split_folio(folio);
-	}
+		deferred_split_folio(folio);
 
 	__folio_mod_stat(folio, -nr, -nr_pmdmapped);
 
 	/*
-- 
2.45.2