On 15.03.22 18:43, Matthew Wilcox wrote:
> On Tue, Mar 15, 2022 at 04:45:13PM +0100, David Hildenbrand wrote:
>> On 15.03.22 05:21, Andrew Morton wrote:
>>> On Tue, 15 Mar 2022 11:05:15 +0800 Andrew Yang <andrew.yang@xxxxxxxxxxxx> wrote:
>>>
>>>> When memory is tight, the system may start compacting memory to
>>>> satisfy demands for large contiguous allocations. If a process tries
>>>> to lock a page that has been locked and isolated for compaction, it
>>>> may wait a long time or even forever. This is because compaction
>>>> performs a non-atomic PG_isolated clear while holding the page lock,
>>>> which may overwrite the PG_waiters bit set by the process that failed
>>>> to obtain the page lock and added itself to the wait queue:
>>>>
>>>> CPU1                            CPU2
>>>> lock_page(page); (successful)
>>>>                                 lock_page(); (failed)
>>>> __ClearPageIsolated(page);      SetPageWaiters(page); (may be overwritten)
>>>> unlock_page(page);
>>>>
>>>> The fix is to not perform non-atomic operations on page flags while
>>>> holding the page lock.
>>>
>>> Sure, the non-atomic bitop optimization is really risky and I suspect
>>> we reach for it too often. Or at least without really clearly
>>> demonstrating that it is safe, and documenting our assumptions.
>>
>> I agree. IIRC, non-atomic variants are mostly only safe while the
>> refcount is 0. Everything else is just absolutely fragile.
>
> We could add an assertion ... I just tried this:
>
> +++ b/include/linux/page-flags.h
> @@ -342,14 +342,16 @@ static __always_inline \
>  void __folio_set_##lname(struct folio *folio) \
>  { __set_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); } \
>  static __always_inline void __SetPage##uname(struct page *page) \
> -{ __set_bit(PG_##lname, &policy(page, 1)->flags); }
> +{ VM_BUG_ON_PGFLAGS(atomic_read(&policy(page, 1)->_refcount), page); \
> +  __set_bit(PG_##lname, &policy(page, 1)->flags); }
>
>  #define __CLEARPAGEFLAG(uname, lname, policy) \
>  static __always_inline \
>  void __folio_clear_##lname(struct folio *folio) \
>  { __clear_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); } \
>  static __always_inline void __ClearPage##uname(struct page *page) \
> -{ __clear_bit(PG_##lname, &policy(page, 1)->flags); }
> +{ VM_BUG_ON_PGFLAGS(atomic_read(&policy(page, 1)->_refcount), page); \
> +  __clear_bit(PG_##lname, &policy(page, 1)->flags); }
>
>  #define TESTSETFLAG(uname, lname, policy) \
>  static __always_inline \
>
> ... but it dies _really_ early:
>
> (gdb) bt
> #0  0xffffffff820055e5 in native_halt ()
>     at ../arch/x86/include/asm/irqflags.h:57
> #1  halt () at ../arch/x86/include/asm/irqflags.h:98
> #2  early_fixup_exception (regs=regs@entry=0xffffffff81e03cf8,
>     trapnr=trapnr@entry=6) at ../arch/x86/mm/extable.c:283
> #3  0xffffffff81ff243c in do_early_exception (regs=0xffffffff81e03cf8,
>     trapnr=6) at ../arch/x86/kernel/head64.c:419
> #4  0xffffffff81ff214f in early_idt_handler_common ()
>     at ../arch/x86/kernel/head_64.S:417
> #5  0x0000000000000000 in ?? ()
>
> and honestly, I'm not sure how to debug something that goes wrong this
> early. Maybe I need to make that start warning 5 seconds after boot,
> or only if we're not in pid 1, or something ...

Maybe check for "system_state >= SYSTEM_RUNNING" or "system_state >=
SYSTEM_SCHEDULING" to exclude early boot, where no (real) concurrency
is happening.

But I assume you'll still get plenty of such reports.

-- 
Thanks,

David / dhildenb
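
For concreteness, here is a minimal sketch of what that system_state
gating could look like for the __CLEARPAGEFLAG half of Matthew's diff.
This is an illustration of the idea, not a tested patch: system_state
and SYSTEM_SCHEDULING are declared in <linux/kernel.h>, and whether
page-flags.h can use them here without header-dependency trouble is an
unverified assumption.

/*
 * Sketch: only assert "non-atomic flag op with elevated refcount"
 * once the scheduler is up and real concurrency can exist.
 */
#define __CLEARPAGEFLAG(uname, lname, policy)				\
static __always_inline							\
void __folio_clear_##lname(struct folio *folio)				\
{ __clear_bit(PG_##lname, folio_flags(folio, FOLIO_##policy)); }	\
static __always_inline void __ClearPage##uname(struct page *page)	\
{									\
	/* Early boot touches page flags non-atomically on pages	\
	 * with an elevated refcount; skip the check until then. */	\
	VM_BUG_ON_PGFLAGS(system_state >= SYSTEM_SCHEDULING &&		\
			  atomic_read(&policy(page, 1)->_refcount),	\
			  page);					\
	__clear_bit(PG_##lname, &policy(page, 1)->flags);		\
}

__SETPAGEFLAG would get the same treatment. Even gated like this, the
assertion would still trip on every caller that uses the non-atomic
variants on a page with an elevated refcount under some other form of
serialization, which is why plenty of reports are to be expected.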