On 10.08.24 08:15, yangge1116@xxxxxxx wrote:
From: yangge <yangge1116@xxxxxxx>
Whoever passes a folio to __folio_batch_add_and_move() must hold
a reference, otherwise something else would already be messed up.
Since the caller holds a reference, the folio cannot be freed
underneath us, so we can safely clear the folio's LRU flag. As
discussed with David in [1], the reference should be taken after
testing the LRU flag, not before.
Link: https://lore.kernel.org/lkml/d41865b4-d6fa-49ba-890a-921eefad27dd@xxxxxxxxxx/ [1]
Signed-off-by: yangge <yangge1116@xxxxxxx>
---
mm/swap.c | 8 +++-----
1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/mm/swap.c b/mm/swap.c
index 67a2467..6b83898 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -226,12 +226,10 @@ static void __folio_batch_add_and_move(struct folio_batch __percpu *fbatch,
{
unsigned long flags;
- folio_get(folio);
-
- if (on_lru && !folio_test_clear_lru(folio)) {
- folio_put(folio);
+ if (on_lru && !folio_test_clear_lru(folio))
return;
- }
+
+ folio_get(folio);
if (disable_irq)
local_lock_irqsave(&cpu_fbatches.lock_irq, flags);
Ah, "mm/swap: remove boilerplate" reduced them to a single instance, good.
You can throw in a

	VM_WARN_ON_ONCE(!folio_ref_count(folio));

before the folio_get(folio), so we would realize when something very
unexpected happens.
Acked-by: David Hildenbrand <david@xxxxxxxxxx>
--
Cheers,
David / dhildenb