Re: kernel panics with 4.14.X versions

On Mon 16-04-18 15:25:50, Guillaume Morin wrote:
> Fwiw, there have been already reports of similar soft lockups in
> fsnotify() on 4.14: https://lkml.org/lkml/2018/3/2/1038
>
> We have also noticed similar softlockups with 4.14.22 here.

Yeah.
 
> On 16 Apr 13:54, Pavlos Parissis wrote:
> >
> > Hi all,
> > 
> > We have observed kernel panics on several master kubernetes clusters, where we run
> > kubernetes API services and not application workloads.
> > 
> > Those clusters were using kernel versions 4.14.14 and 4.14.32, but we have since switched
> > everything to kernel version 4.14.32 as a way to address the issue.
> > 
> > We have HP and Dell hardware in those clusters, and the network cards also differ: we have
> > bnx2x and mlx5_core in use.
> > 
> > We also run kernel version 4.14.32 on a different type of workload (software load balancing
> > using HAProxy), and we don't have any crashes there.
> > 
> > Since the crash happens on different hardware, we think it could be a kernel issue, but we
> > aren't sure. Thus, I am contacting kernel people to get some hints that can help us figure
> > out what causes this.
> > 
> > In our kubernetes clusters, we have instructed the kernel to panic upon soft lockup: we use
> > 'kernel.softlockup_panic=1', 'kernel.hung_task_panic=1' and 'kernel.watchdog_thresh=10'.
> > Thus, we see the stack traces. Today we disabled this; I will explain why later.
> > 
> > I believe we have two distinct types of panics: one is triggered upon soft lockup, and another
> > where the call trace is about the scheduler ("sched: Unexpected reschedule of offline CPU#8!").
> > 
> > 
> > Let me walk you through the kernel panics and some observations.
> > 
> > The following series of stack traces happens when one CPU (CPU 24) is stuck for ~22 seconds.
> > watchdog_thresh is set to 10 and, as far as I remember, the softlockup threshold is (2 * watchdog_thresh),
> > so it makes sense to see the kernel crashing after ~20 seconds.
> > 
> > After the stack trace, we have the output of sar for CPU#24 and we see that just before the
> > crash CPU utilization for system level went to 100%. Now let's move to another panic.
> > 
> > [373782.361064] watchdog: BUG: soft lockup - CPU#24 stuck for 22s! [kube-apiserver:24261]
> > [373782.378225] Modules linked in: binfmt_misc sctp_diag sctp dccp_diag dccp tcp_diag udp_diag
> > inet_diag unix_diag cfg80211 rfkill dell_rbu 8021q garp mrp xfs libcrc32c loop x86_pkg_temp_thermal
> > intel_powerclamp coretemp kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel
> > pcbc aesni_intel vfat fat crypto_simd glue_helper cryptd intel_cstate intel_rapl_perf iTCO_wdt ses
> > iTCO_vendor_support mxm_wmi ipmi_si dcdbas enclosure mei_me pcspkr ipmi_devintf lpc_ich sg mei
> > ipmi_msghandler mfd_core shpchp wmi acpi_power_meter netconsole nfsd auth_rpcgss nfs_acl lockd grace
> > sunrpc ip_tables ext4 mbcache jbd2 i2c_algo_bit drm_kms_helper syscopyarea sysfillrect sysimgblt
> > fb_sys_fops sd_mod ttm crc32c_intel ahci libahci mlx5_core drm mlxfw mpt3sas ptp libata raid_class
> > pps_core scsi_transport_sas
> > [373782.516807]  dm_mirror dm_region_hash dm_log dm_mod dax
> > [373782.531739] CPU: 24 PID: 24261 Comm: kube-apiserver Not tainted 4.14.32-1.el7.x86_64 #1
> > [373782.549848] Hardware name: Dell Inc. PowerEdge R630/02C2CP, BIOS 2.4.3 01/17/2017
> > [373782.567486] task: ffff882f66d28000 task.stack: ffffc9002120c000
> > [373782.583441] RIP: 0010:fsnotify+0x197/0x510
> > [373782.597319] RSP: 0018:ffffc9002120fdb8 EFLAGS: 00000286 ORIG_RAX: ffffffffffffff10
> > [373782.615308] RAX: 0000000000000000 RBX: ffff882f9ec65c20 RCX: 0000000000000002
> > [373782.632950] RDX: 0000000000028700 RSI: 0000000000000002 RDI: ffffffff8269a4e0
> > [373782.650616] RBP: ffffc9002120fe98 R08: 0000000000000000 R09: 0000000000000000
> > [373782.668287] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
> > [373782.685918] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
> > [373782.703302] FS:  000000c42009f090(0000) GS:ffff882fbf900000(0000) knlGS:0000000000000000
> > [373782.721887] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > [373782.737741] CR2: 00007f82b6539244 CR3: 0000002f3de2a005 CR4: 00000000003606e0
> > [373782.755247] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > [373782.772722] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> > [373782.790043] Call Trace:
> > [373782.802041]  vfs_write+0x151/0x1b0
> > [373782.815081]  ? syscall_trace_enter+0x1cd/0x2b0
> > [373782.829175]  SyS_write+0x55/0xc0
> > [373782.841870]  do_syscall_64+0x79/0x1b0
> > [373782.855073]  entry_SYSCALL_64_after_hwframe+0x3d/0xa2
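
For reference, the panic-on-softlockup settings quoted above correspond to a sysctl fragment along these lines (a sketch; the file path is illustrative, the values are as reported):

```
# /etc/sysctl.d/99-softlockup-panic.conf (illustrative path)
kernel.softlockup_panic = 1    # panic instead of just logging on soft lockup
kernel.hung_task_panic = 1     # panic when a hung task is detected
kernel.watchdog_thresh = 10    # watchdog period; softlockup fires at ~2x this (~20s)
```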

Can you please run RIP through ./scripts/faddr2line to see where exactly
are we looping? I expect the loop iterating over marks to notify but better
be sure.
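
Something along these lines should do it, assuming you still have the vmlinux from the matching 4.14.32 build (the path is illustrative):

```shell
# Run from the root of the kernel source tree used for this build.
# faddr2line resolves a symbol+offset from an oops to a file:line location.
./scripts/faddr2line vmlinux fsnotify+0x197/0x510
```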

How easily can you hit this? Are you able to run debug kernels / inspect
crash dumps when the issue occurs? Also, testing with the latest mainline
kernel (4.16) would be welcome, to check whether this isn't just an issue
with the backport of fsnotify fixes from Miklos.

								Honza
-- 
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR


