Re: [PATCH] mm: mglru: Fix soft lockup attributed to scanning folios

On Fri, Mar 8, 2024 at 1:06 AM Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> On Thu,  7 Mar 2024 11:19:52 +0800 Yafang Shao <laoar.shao@xxxxxxxxx> wrote:
>
> > After we enabled MGLRU on our 384-core, 1536 GB production servers, we
> > encountered frequent soft lockups attributed to scanning folios.
> >
> > The soft lockup is as follows:
> >
> > ...
> >
> > There were a total of 22 tasks waiting for this spinlock
> > (RDI: ffff99d2b6ff9050):
> >
> >  crash> foreach RU bt | grep -B 8  queued_spin_lock_slowpath |  grep "RDI: ffff99d2b6ff9050" | wc -l
> >  22
>
> If we're holding the lock for this long then there's a possibility of
> getting hit by the NMI watchdog also.

The NMI watchdog is disabled as these servers are KVM guests:

    kernel.nmi_watchdog = 0
    kernel.soft_watchdog = 1

>
> > Additionally, two other threads were engaged in scanning folios, one
> > with 19 waiters and the other with 15.
> >
> > To address this issue under heavy reclaim, we deployed a hotfix that
> > adds a cond_resched() to scan_folios(). After applying it to our
> > servers, the soft lockups ceased.
> >
> > ...
> >
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -4367,6 +4367,10 @@ static int scan_folios(struct lruvec *lruvec, struct scan_control *sc,
> >
> >                       if (!--remaining || max(isolated, skipped_zone) >= MIN_LRU_BATCH)
> >                               break;
> > +
> > +                     spin_unlock_irq(&lruvec->lru_lock);
> > +                     cond_resched();
> > +                     spin_lock_irq(&lruvec->lru_lock);
> >               }
>
> Presumably wrapping this with `if (need_resched())' will save some work.

Good suggestion.
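
Something like the following, then (an untested sketch of the revised
hunk; the context lines are taken from the patch above):

    if (!--remaining || max(isolated, skipped_zone) >= MIN_LRU_BATCH)
            break;

    /*
     * Only drop the lru lock and reschedule when a resched is
     * actually pending, so the common case doesn't bounce the lock.
     */
    if (need_resched()) {
            spin_unlock_irq(&lruvec->lru_lock);
            cond_resched();
            spin_lock_irq(&lruvec->lru_lock);
    }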

>
> This lock is held for a reason.  I'd like to see an analysis of why
> this change is safe.

I believe the key point here is whether we can reduce the scope of
this lock from:

  evict_folios
      spin_lock_irq(&lruvec->lru_lock);
      scanned = isolate_folios(lruvec, sc, swappiness, &type, &list);
      scanned += try_to_inc_min_seq(lruvec, swappiness);
      if (get_nr_gens(lruvec, !swappiness) == MIN_NR_GENS)
          scanned = 0;
      spin_unlock_irq(&lruvec->lru_lock);

to:

  evict_folios
      spin_lock_irq(&lruvec->lru_lock);
      scanned = isolate_folios(lruvec, sc, swappiness, &type, &list);
      spin_unlock_irq(&lruvec->lru_lock);

      spin_lock_irq(&lruvec->lru_lock);
      scanned += try_to_inc_min_seq(lruvec, swappiness);
      if (get_nr_gens(lruvec, !swappiness) == MIN_NR_GENS)
          scanned = 0;
      spin_unlock_irq(&lruvec->lru_lock);

In isolate_folios(), min_seq is only read to retrieve the generation;
it is never modified there. If multiple tasks are running
evict_folios() concurrently, it seems inconsequential which of them
increments min_seq. I'd appreciate Yu's confirmation on this.
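
For reference, the read-only use I have in mind looks roughly like this
(paraphrased from mm/vmscan.c; the exact signature and surrounding code
vary by kernel version):

    static int scan_folios(struct lruvec *lruvec, struct scan_control *sc,
                           int type, int tier, struct list_head *list)
    {
            struct lru_gen_folio *lrugen = &lruvec->lrugen;
            /* min_seq[type] is only read here, to pick the oldest generation */
            int gen = lru_gen_from_seq(lrugen->min_seq[type]);
            ...
    }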

-- 
Regards
Yafang
