Re: [PATCH v2 1/2] Revert "mm: zswap: fix race between [de]compression and CPU hotunplug"

On Wed, Jan 8, 2025 at 1:54 PM Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
>
> On Tue, Jan 7, 2025 at 4:34 PM Barry Song <baohua@xxxxxxxxxx> wrote:
> >
> > On Wed, Jan 8, 2025 at 12:39 PM Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
> > >
> > > On Tue, Jan 7, 2025 at 3:01 PM Barry Song <baohua@xxxxxxxxxx> wrote:
> > > >
> > > > On Wed, Jan 8, 2025 at 11:22 AM Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
> > > > >
> > > > > This reverts commit eaebeb93922ca6ab0dd92027b73d0112701706ef.
> > > > >
> > > > > Commit eaebeb93922c ("mm: zswap: fix race between [de]compression and
> > > > > CPU hotunplug") used the CPU hotplug lock in zswap compress/decompress
> > > > > operations to protect against a race with CPU hotunplug making some
> > > > > per-CPU resources go away.
> > > > >
> > > > > However, zswap compress/decompress can be reached through reclaim while
> > > > > the lock is held, resulting in a potential deadlock as reported by
> > > > > syzbot:
> > > > > ======================================================
> > > > > WARNING: possible circular locking dependency detected
> > > > > 6.13.0-rc6-syzkaller-00006-g5428dc1906dd #0 Not tainted
> > > > > ------------------------------------------------------
> > > > > kswapd0/89 is trying to acquire lock:
> > > > >  ffffffff8e7d2ed0 (cpu_hotplug_lock){++++}-{0:0}, at: acomp_ctx_get_cpu mm/zswap.c:886 [inline]
> > > > >  ffffffff8e7d2ed0 (cpu_hotplug_lock){++++}-{0:0}, at: zswap_compress mm/zswap.c:908 [inline]
> > > > >  ffffffff8e7d2ed0 (cpu_hotplug_lock){++++}-{0:0}, at: zswap_store_page mm/zswap.c:1439 [inline]
> > > > >  ffffffff8e7d2ed0 (cpu_hotplug_lock){++++}-{0:0}, at: zswap_store+0xa74/0x1ba0 mm/zswap.c:1546
> > > > >
> > > > > but task is already holding lock:
> > > > >  ffffffff8ea355a0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:6871 [inline]
> > > > >  ffffffff8ea355a0 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0xb58/0x2f30 mm/vmscan.c:7253
> > > > >
> > > > > which lock already depends on the new lock.
> > > >
> > > > We have functions like percpu_is_write_locked(),
> > > > percpu_is_read_locked(), and cpus_read_trylock().
> > > > Could they help prevent circular locking dependencies if we perform a
> > > > check before acquiring the lock?
> > >
> > > Yeah, we can do that, but it feels a bit hacky; we may have to
> > > unnecessarily fail the operation in some cases, right? Not sure tbh.
> >
> > Not sure if it can be as simple as the following:
> >
> >     locked = cpus_read_trylock();
> >     ....
> >     if (locked)
> >         cpus_read_unlock();
> >
> > If this works, it seems better than migrate_disable(), which could affect
> > the scheduler's select_rq, especially given that swap is a hot path :-)
>
> I didn't look too closely into this, but I'd prefer the simpler fix
> unless it causes any noticeable regressions. Unless others are also
> concerned about disabling migration..

Okay, fair enough. It could be hacky, since cpus_read_trylock() can fail
whenever another task holds the write lock. Waiman's initial patchset for
the same issue had an ugly sleep/retry mechanism:

https://lore.kernel.org/all/1532368179-15263-1-git-send-email-longman@xxxxxxxxxx/
https://lore.kernel.org/all/1532368179-15263-3-git-send-email-longman@xxxxxxxxxx/

Thanks
Barry




