On Mon, Nov 4, 2024 at 3:04 PM Yu Zhao <yuzhao@xxxxxxxxxx> wrote:
>
> On Mon, Nov 4, 2024 at 2:38 PM Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
> >
> > On Mon, 4 Nov 2024 10:30:29 -0700 Yu Zhao <yuzhao@xxxxxxxxxx> wrote:
> >
> > > On Sat, Oct 26, 2024 at 09:26:04AM -0600, Yu Zhao wrote:
> > > > On Sat, Oct 26, 2024 at 12:34 AM Shakeel Butt <shakeel.butt@xxxxxxxxx> wrote:
> > > > >
> > > > > On Thu, Oct 24, 2024 at 06:23:02PM GMT, Shakeel Butt wrote:
> > > > > > While updating the generation of the folios, MGLRU requires that the
> > > > > > folio's memcg association remains stable. With the charge migration
> > > > > > deprecated, there is no need for MGLRU to acquire locks to keep the
> > > > > > folio and memcg association stable.
> > > > > >
> > > > > > Signed-off-by: Shakeel Butt <shakeel.butt@xxxxxxxxx>
> > > > >
> > > > > Andrew, can you please apply the following fix to this patch after your
> > > > > unused fixup?
> > > >
> > > > Thanks!
> > >
> > > syzbot caught the following:
> > >
> > > WARNING: CPU: 0 PID: 85 at mm/vmscan.c:3140 folio_update_gen+0x23d/0x250 mm/vmscan.c:3140
> > > ...
> > >
> > > Andrew, can you please fix this in place?
> >
> > OK, but...
> >
> > > --- a/mm/vmscan.c
> > > +++ b/mm/vmscan.c
> > > @@ -3138,7 +3138,6 @@ static int folio_update_gen(struct folio *folio, int gen)
> > >  	unsigned long new_flags, old_flags = READ_ONCE(folio->flags);
> > >
> > >  	VM_WARN_ON_ONCE(gen >= MAX_NR_GENS);
> > > -	VM_WARN_ON_ONCE(!rcu_read_lock_held());
> > >
> > >  	do {
> > >  		/* lru_gen_del_folio() has isolated this page? */
> >
> > it would be good to know why this assertion is considered incorrect?
>
> The warning was triggered by the patch in this thread. The assertion
> used to verify that a folio is protected from charge migration. Charge
> migration is removed by this series, and as part of that effort, this
> patch removes the RCU lock.
>
> > And a link to the syzbot report?
>
> https://syzkaller.appspot.com/bug?extid=24f45b8beab9788e467e

Or this link would work better:

https://lore.kernel.org/lkml/67294349.050a0220.701a.0010.GAE@xxxxxxxxxx/
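
For context on why dropping the lock is safe for the flags update itself:
folio_update_gen() never relied on the RCU lock to update the generation
bits; it retries with compare-and-swap until the bits are set atomically.
The removed VM_WARN_ON_ONCE documented a separate invariant (that the
folio->memcg binding could not change mid-update due to charge migration),
which now holds unconditionally. Below is a minimal userspace sketch of
that retry pattern; the update_gen() helper, bit layout, and field names
are made up for illustration and are not the kernel code:

#include <stdatomic.h>
#include <stdio.h>

#define GEN_MASK  0x7UL   /* hypothetical 3-bit generation field */
#define GEN_SHIFT 0

/* Returns the old generation, or -1 if the folio was isolated. */
static int update_gen(_Atomic unsigned long *flags, unsigned long gen)
{
	unsigned long old_flags = atomic_load(flags);
	unsigned long new_flags;

	do {
		/* generation cleared, i.e. folio isolated? then do nothing */
		if (!(old_flags & GEN_MASK))
			return -1;
		new_flags = (old_flags & ~GEN_MASK) | ((gen + 1) << GEN_SHIFT);
		/* on failure, old_flags is reloaded and the loop retries */
	} while (!atomic_compare_exchange_weak(flags, &old_flags, new_flags));

	return (int)((old_flags & GEN_MASK) >> GEN_SHIFT) - 1;
}

int main(void)
{
	_Atomic unsigned long flags = 2UL; /* gen 1, stored as gen + 1 */

	printf("old gen: %d\n", update_gen(&flags, 2));        /* prints 1 */
	printf("new flags: %lu\n", (unsigned long)atomic_load(&flags));
	return 0;
}

The loop tolerates concurrent updates without any lock: a racing writer
simply forces a reload and retry, which is why only the memcg-stability
invariant, not the generation update, ever needed the RCU protection.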