Re: Hard and soft lockups with FIO and LTP runs on a large system

On 17-Jul-24 4:59 PM, Mateusz Guzik wrote:
On Wed, Jul 17, 2024 at 11:42 AM Vlastimil Babka <vbabka@xxxxxxx> wrote:

On 7/3/24 5:11 PM, Bharata B Rao wrote:
The general observation is that the problem usually surfaces when the
system free memory goes very low and page cache/buffer consumption hits
the ceiling. Most of the time the two contended locks are the lruvec and
inode->i_lock spinlocks.

[snip mm stuff]

There are numerous avoidable i_lock acquires (including some only
showing up under load), but I don't know if they play any role in this
particular test.

Collecting all traces would definitely help, locked up or not, for example:
bpftrace -e 'kprobe:queued_spin_lock_slowpath { @[kstack()] = count(); }' -o traces

Here are the top 3 traces; the full list, collected over a 30s duration while the workload was running, is attached.

@[
    native_queued_spin_lock_slowpath+1
    __remove_mapping+98
    remove_mapping+22
    mapping_evict_folio+118
    mapping_try_invalidate+214
    invalidate_mapping_pages+16
    invalidate_bdev+60
    blkdev_common_ioctl+1527
    blkdev_ioctl+265
    __x64_sys_ioctl+149
    x64_sys_call+4629
    do_syscall_64+126
    entry_SYSCALL_64_after_hwframe+118
]: 1787212
@[
    native_queued_spin_lock_slowpath+1
    folio_wait_bit_common+205
    filemap_get_pages+1543
    filemap_read+231
    blkdev_read_iter+111
    aio_read+242
    io_submit_one+546
    __x64_sys_io_submit+132
    x64_sys_call+6617
    do_syscall_64+126
    entry_SYSCALL_64_after_hwframe+118
]: 7922497
@[
    native_queued_spin_lock_slowpath+1
    clear_shadow_entry+92
    mapping_try_invalidate+337
    invalidate_mapping_pages+16
    invalidate_bdev+60
    blkdev_common_ioctl+1527
    blkdev_ioctl+265
    __x64_sys_ioctl+149
    x64_sys_call+4629
    do_syscall_64+126
    entry_SYSCALL_64_after_hwframe+118
]: 10357614


As for clear_shadow_entry mentioned in the opening mail, the content is:
         spin_lock(&mapping->host->i_lock);
         xa_lock_irq(&mapping->i_pages);
         __clear_shadow_entry(mapping, index, entry);
         xa_unlock_irq(&mapping->i_pages);
         if (mapping_shrinkable(mapping))
                 inode_add_lru(mapping->host);
         spin_unlock(&mapping->host->i_lock);

so for all I know it's all about the xarray thing, not the i_lock per se.

The soft lockup signature shows _raw_spin_lock and not _raw_spin_lock_irq, hence the conclusion that the contended lock is i_lock (a sketch of that mapping follows the trace below). Re-pasting the clear_shadow_entry soft lockup here again:

kernel: watchdog: BUG: soft lockup - CPU#29 stuck for 11s! [fio:2701649]
kernel: CPU: 29 PID: 2701649 Comm: fio Tainted: G             L     6.10.0-rc3-mglru-irqstrc #24
kernel: RIP: 0010:native_queued_spin_lock_slowpath+0x2b4/0x300
kernel: Call Trace:
kernel:  <IRQ>
kernel:  ? show_regs+0x69/0x80
kernel:  ? watchdog_timer_fn+0x223/0x2b0
kernel:  ? __pfx_watchdog_timer_fn+0x10/0x10
<SNIP>
kernel:  </IRQ>
kernel:  <TASK>
kernel:  ? asm_sysvec_apic_timer_interrupt+0x1b/0x20
kernel:  ? native_queued_spin_lock_slowpath+0x2b4/0x300
kernel:  _raw_spin_lock+0x38/0x50
kernel:  clear_shadow_entry+0x3d/0x100
kernel:  ? __pfx_workingset_update_node+0x10/0x10
kernel:  mapping_try_invalidate+0x117/0x1d0
kernel:  invalidate_mapping_pages+0x10/0x20
kernel:  invalidate_bdev+0x3c/0x50
kernel:  blkdev_common_ioctl+0x5f7/0xa90
kernel:  blkdev_ioctl+0x109/0x270
kernel:  x64_sys_call+0x1215/0x20d0
kernel:  do_syscall_64+0x7e/0x130
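
FWIW, a minimal sketch of that mapping (illustrative only; the helper name
clear_shadow_entry_locks is made up, and this assumes the usual build where
the spinlock slowpaths are not inlined):

        #include <linux/fs.h>

        /*
         * Which slowpath entry point each lock helper used by
         * clear_shadow_entry() reaches, which is how the backtrace
         * distinguishes them.
         */
        static void clear_shadow_entry_locks(struct address_space *mapping)
        {
                /* i_lock: spin_lock() -> raw_spin_lock() -> _raw_spin_lock() */
                spin_lock(&mapping->host->i_lock);

                /*
                 * i_pages xarray lock: xa_lock_irq() -> spin_lock_irq()
                 * -> _raw_spin_lock_irq()
                 */
                xa_lock_irq(&mapping->i_pages);

                xa_unlock_irq(&mapping->i_pages);
                spin_unlock(&mapping->host->i_lock);
        }

So a _raw_spin_lock frame under clear_shadow_entry in the trace above points
at the i_lock acquire, while contention on the i_pages xarray lock would show
up as _raw_spin_lock_irq.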

Regards,
Bharata.

Attachment: traces.gz
Description: GNU Zip compressed data

