On 10/16/23 22:54, Michael Roth wrote:
> On Mon, Oct 16, 2023 at 07:31:22PM +0300, Kirill A. Shutemov wrote:
>> Michael reported soft lockups on a system that has unaccepted memory.
>> This occurs when a user attempts to allocate and accept memory on
>> multiple CPUs simultaneously.
>>
>> The root cause of the issue is that memory acceptance is serialized with
>> a spinlock, allowing only one CPU to accept memory at a time. The other
>> CPUs spin and wait for their turn, leading to starvation and soft lockup
>> reports.
>>
>> To address this, the code has been modified to release the spinlock
>> while accepting memory. This allows for parallel memory acceptance on
>> multiple CPUs.
>>
>> A newly introduced "accepting_list" keeps track of which memory is
>> currently being accepted. This is necessary to prevent parallel
>> acceptance of the same memory block. If a collision occurs, the lock is
>> released and the process is retried.
>>
>> Such collisions should rarely occur. The main path for memory acceptance
>> is the page allocator, which accepts memory in MAX_ORDER chunks. As long
>> as MAX_ORDER is equal to or larger than the unit_size, collisions will
>> never occur because the caller fully owns the memory block being
>> accepted.
>>
>> Aside from the page allocator, only memblock and deferred_free_range()
>> accept memory, but this only happens during boot.
>>
>> The code has been tested with unit_size == 128MiB to trigger collisions
>> and validate the retry codepath.
>>
>> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
>> Reported-by: Michael Roth <michael.roth@xxxxxxx>
> 
> Tested-by: Michael Roth <michael.roth@xxxxxxx>
> 
> This seems to improve things pretty dramatically for me. Previously I
> saw soft-lockups with 16 vCPUs and 16 processes faulting into memory,
> and now I can do 128+ vCPUs/processes.
> 
> I can still trigger soft lock-ups on occasion if the number of processes
> faulting in memory exceeds the number of vCPUs available to the guest, but
> with a 32 vCPU guest even something like this:
> 
>   stress --vm 128 --vm-bytes 2G --vm-keep --cpu 255
> 
> still seems to avoid the soft lock-up messages. So that's probably well
> into "potential future optimization" territory and this patch fixes the
> more immediate issues.

Do you mean that the guest pretends it has more CPUs than the host
provides to it? I think such a CPU-starving configuration is prone to
softlockups already, so it wouldn't be new.

If you mean the guest has as many CPUs as the host provides to it, but
you stress with many more than that number of processes, then I wonder
how softlockups would happen due to the extra processes. Since irqs are
disabled through the whole operation, the extra processes can't become
scheduled, and not being scheduled due to overloading doesn't trigger
softlockups, hmm...

> Thanks!
> 
> -Mike
> 
>> Fixes: 2053bc57f367 ("efi: Add unaccepted memory support")
>> Cc: <stable@xxxxxxxxxx>
>> Reviewed-by: Nikolay Borisov <nik.borisov@xxxxxxxx>
>> ---
>>
>> v2:
>>  - Fix deadlock (Vlastimil);
>>  - Fix comments (Vlastimil);
>>  - s/cond_resched()/cpu_relax()/ -- cond_resched() cannot be called
>>    from atomic context;
>>
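
[Editor's note: the locking scheme the commit message above describes --
keep a list of in-flight ranges, drop the lock while the slow work runs,
and retry on collision -- can be sketched in plain userspace C. This is
not the kernel implementation; the function names, the busy_range list,
and the use of sched_yield() in place of cpu_relax() are illustrative
assumptions only.]

    /*
     * Minimal userspace sketch of the "track in-flight ranges, drop the
     * lock while doing the slow work, retry on collision" pattern.
     */
    #include <pthread.h>
    #include <sched.h>
    #include <stdbool.h>

    struct busy_range {
            unsigned long start, end;       /* half-open [start, end) */
            struct busy_range *next;
    };

    static pthread_spinlock_t lock;
    static struct busy_range *busy_list;    /* ranges currently being worked on */

    /* Stand-in for the slow operation (memory acceptance in the patch). */
    static void do_slow_work(unsigned long start, unsigned long end) { }

    static bool overlaps(struct busy_range *r, unsigned long start, unsigned long end)
    {
            return r->start < end && start < r->end;
    }

    static void accept_work(unsigned long start, unsigned long end)
    {
            struct busy_range range = { .start = start, .end = end };
            struct busy_range *r;

    retry:
            pthread_spin_lock(&lock);

            /*
             * Someone else is already working on an overlapping range:
             * drop the lock so they can finish, then try again.
             */
            for (r = busy_list; r; r = r->next) {
                    if (overlaps(r, start, end)) {
                            pthread_spin_unlock(&lock);
                            sched_yield();  /* cpu_relax() in the kernel version */
                            goto retry;
                    }
            }

            /* Claim the range, then do the slow work with the lock dropped. */
            range.next = busy_list;
            busy_list = &range;
            pthread_spin_unlock(&lock);

            do_slow_work(start, end);

            /* Remove ourselves from the busy list. */
            pthread_spin_lock(&lock);
            for (struct busy_range **pp = &busy_list; *pp; pp = &(*pp)->next) {
                    if (*pp == &range) {
                            *pp = range.next;
                            break;
                    }
            }
            pthread_spin_unlock(&lock);
    }

    int main(void)
    {
            pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);
            accept_work(0, 1UL << 20);      /* single-threaded smoke test */
            return 0;
    }

The point of the pattern is that the lock only guards the busy list: the
expensive work runs unlocked, so callers on disjoint ranges never
serialize against each other, and only the (rare, per the commit message)
colliding callers spin in the retry loop.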