On 2023/4/2 21:36, Qi Zheng wrote:
Hi Andrew,
On 2023/4/1 06:04, Andrew Morton wrote:
On Fri, 31 Mar 2023 17:58:57 +0800 Qi Zheng
<zhengqi.arch@xxxxxxxxxxxxx> wrote:
In folio_batch_move_lru(), the folio_batch is not freshly
initialised, so it should call folio_batch_reinit() as
pagevec_lru_move_fn() did before.
...
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -222,7 +222,7 @@ static void folio_batch_move_lru(struct
folio_batch *fbatch, move_fn_t move_fn)
if (lruvec)
unlock_page_lruvec_irqrestore(lruvec, flags);
folios_put(fbatch->folios, folio_batch_count(fbatch));
- folio_batch_init(fbatch);
+ folio_batch_reinit(fbatch);
}
static void folio_batch_add_and_move(struct folio_batch *fbatch,
Well... why? This could leave the kernel falsely thinking that the
folio's pages have been drained from the per-cpu LRU addition
magazines.
Maybe that's desirable, maybe not, but I think this change needs much
much more explanation describing why it is beneficial.
folio_batch_reinit() seems to be a custom thing for the mlock code -
perhaps it just shouldn't exist, and its operation should instead be
open-coded in mlock_folio_batch().
folio_batch_reinit() corresponds to pagevec_reinit(). pagevec_reinit()
was originally used in both pagevec_lru_move_fn() and mlock_pagevec(),
so it is not a custom thing for the mlock code.
Commit c2bc16817aa0 ("mm/swap: add folio_batch_move_lru()")
introduced folio_batch_move_lru() to replace pagevec_lru_move_fn(),
but it calls folio_batch_init() (corresponding to pagevec_init())
instead of folio_batch_reinit() (corresponding to pagevec_reinit()).
This change was not explained in the commit message and looks like
an oversight.
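For reference, the two helpers differ only in whether they clear
->percpu_pvec_drained. Roughly (paraphrased from include/linux/pagevec.h,
please double-check against the current tree):
```
static inline void folio_batch_init(struct folio_batch *fbatch)
{
	fbatch->nr = 0;
	fbatch->percpu_pvec_drained = false;
}

static inline void folio_batch_reinit(struct folio_batch *fbatch)
{
	fbatch->nr = 0;
}
```
So switching folio_batch_move_lru() back to folio_batch_reinit() only
changes whether the drained state survives the move, just like
pagevec_reinit() did in pagevec_lru_move_fn().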
The dynamics and rules around ->percpu_pvec_drained are a bit
mysterious. A code comment which explains all of this would be
useful.
Commit d9ed0d08b6c6 ("mm: only drain per-cpu pagevecs once per
pagevec usage") originally introduced the ->drained flag (later
renamed to ->percpu_pvec_drained by commit 7f0b5fb953e7), which is
intended to drain the per-cpu pagevecs only once per pagevec usage.
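To illustrate the intended "drain once per usage" pattern, here is a
rough sketch of a hypothetical caller (get_next_page() is made up just
for the example):
```
struct pagevec pvec;
struct page *page;

pagevec_init(&pvec);			/* ->percpu_pvec_drained = false */
while ((page = get_next_page())) {	/* hypothetical helper */
	if (!pagevec_add(&pvec, page))
		/*
		 * Batch is full: the first __pagevec_release() calls
		 * lru_add_drain() and sets ->percpu_pvec_drained, so
		 * later releases within this usage skip the drain.
		 */
		pagevec_release(&pvec);
}
pagevec_release(&pvec);
```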
Maybe it would be better to add the following code comment:
diff --git a/mm/swap.c b/mm/swap.c
index 423199ee8478..107c4a13e476 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -1055,6 +1055,7 @@ EXPORT_SYMBOL(release_pages);
*/
void __pagevec_release(struct pagevec *pvec)
{
+ /* Only drain per-cpu pagevecs once per pagevec usage */
if (!pvec->percpu_pvec_drained) {
lru_add_drain();
pvec->percpu_pvec_drained = true;
Please let me know if I missed something.
Maybe the commit message can be modified as follows:
```
The ->percpu_pvec_drained flag was originally introduced by commit
d9ed0d08b6c6 ("mm: only drain per-cpu pagevecs once per pagevec usage")
to drain the per-cpu pagevecs only once per pagevec usage. But after
commit c2bc16817aa0 ("mm/swap: add folio_batch_move_lru()"),
->percpu_pvec_drained is reset to false by the folio_batch_init() call
in folio_batch_move_lru(), which may cause the per-cpu pagevecs to be
drained multiple times per pagevec usage. This is not what we expected,
so let's use folio_batch_reinit() in folio_batch_move_lru() to fix it.
```
Also +CC Mel Gorman to confirm this. :)
Thanks,
Qi
--
Thanks,
Qi