On 27.07.23 13:02, Ryan Roberts wrote:
> The recent change to batch-zap anonymous ptes did not take into account
> that for platforms where MMU_GATHER_NO_GATHER is enabled (e.g. s390),
> __tlb_remove_page() drops a reference to the page. This means that the
> folio reference count can drop to zero while still in use (i.e. before
> folio_remove_rmap_range() is called). This does not happen on other
> platforms because the actual page freeing is deferred.
>
> Solve this by appropriately getting/putting the folio to guarantee it
> does not get freed early. Given the new need to get/put the folio in
> the batch path, let's stick to the non-batched path if the folio is not
> large. In that case, batching is not helpful since the batch size is 1.
>
> Signed-off-by: Ryan Roberts <ryan.roberts@xxxxxxx>
> Fixes: 904d9713b3b0 ("mm: batch-zap large anonymous folio PTE mappings")
> Reported-by: Nathan Chancellor <nathan@xxxxxxxxxx>
> Link: https://lore.kernel.org/linux-mm/20230726161942.GA1123863@dev-arch.thelio-3990X/
> ---
> Hi Andrew,
>
> This fixes patch 3 in the series at [1], which is currently in
> mm-unstable. I'm not sure whether you want to take the fix or whether I
> should re-post the entire series?
Please repost the complete thing; you're touching some sensitive places that really need decent review.
--
Cheers,

David / dhildenb