On Wed, Mar 27, 2024 at 4:11 AM Jason Wang <jasowang@xxxxxxxxxx> wrote:
>
> On Tue, Mar 26, 2024 at 9:26 PM Jaroslav Pulchart
> <jaroslav.pulchart@xxxxxxxxxxxx> wrote:
> >
> > >
> > > On Mon, Mar 25, 2024 at 4:44 PM Igor Raits <igor@xxxxxxxxxxxx> wrote:
> > > >
> > > > Hello,
> > > >
> > > > On Fri, Mar 22, 2024 at 12:19 PM Igor Raits <igor@xxxxxxxxxxxx> wrote:
> > > > >
> > > > > Hi Jason,
> > > > >
> > > > > On Fri, Mar 22, 2024 at 9:39 AM Igor Raits <igor@xxxxxxxxxxxx> wrote:
> > > > > >
> > > > > > Hi Jason,
> > > > > >
> > > > > > On Fri, Mar 22, 2024 at 6:31 AM Jason Wang <jasowang@xxxxxxxxxx> wrote:
> > > > > > >
> > > > > > > On Thu, Mar 21, 2024 at 5:44 PM Igor Raits <igor@xxxxxxxxxxxx> wrote:
> > > > > > > >
> > > > > > > > Hello Jason & others,
> > > > > > > >
> > > > > > > > On Wed, Mar 20, 2024 at 10:33 AM Jason Wang <jasowang@xxxxxxxxxx> wrote:
> > > > > > > > >
> > > > > > > > > On Tue, Mar 19, 2024 at 9:15 PM Igor Raits <igor@xxxxxxxxxxxx> wrote:
> > > > > > > > > >
> > > > > > > > > > Hello Stefan,
> > > > > > > > > >
> > > > > > > > > > On Tue, Mar 19, 2024 at 2:12 PM Stefan Hajnoczi <stefanha@xxxxxxxxxx> wrote:
> > > > > > > > > > >
> > > > > > > > > > > On Tue, Mar 19, 2024 at 10:00:08AM +0100, Igor Raits wrote:
> > > > > > > > > > > > Hello,
> > > > > > > > > > > >
> > > > > > > > > > > > We have started to observe kernel crashes on 6.7.y kernels (at the
> > > > > > > > > > > > moment we have hit the issue 5 times, on 6.7.5 and 6.7.10). On 6.6.9,
> > > > > > > > > > > > where we have nodes of the cluster, it looks stable. Please see the
> > > > > > > > > > > > stacktrace below. If you need more information, please let me know.
> > > > > > > > > > > >
> > > > > > > > > > > > We do not have a consistent reproducer, but when we put some bigger
> > > > > > > > > > > > network load on a VM, the hypervisor's kernel crashes.
> > > > > > > > > > > >
> > > > > > > > > > > > Help is much appreciated! We are happy to test any patches.
> > > > > > > > > > >
> > > > > > > > > > > CCing Michael Tsirkin and Jason Wang for vhost_net.
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > > [62254.167584] stack segment: 0000 [#1] PREEMPT SMP NOPTI
> > > > > > > > > > > > [62254.173450] CPU: 63 PID: 11939 Comm: vhost-11890 Tainted: G
> > > > > > > > > > > > E 6.7.10-1.gdc.el9.x86_64 #1
> > > > > > > > > > >
> > > > > > > > > > > Are there any patches in this kernel?
> > > > > > > > > >
> > > > > > > > > > Only one, unrelated to this part: removal of the pr_err("EEVDF scheduling
> > > > > > > > > > fail, picking leftmost\n"); line (reported somewhere a few months ago;
> > > > > > > > > > it was the suggested workaround until a proper solution comes).
> > > > > > > > >
> > > > > > > > > Btw, a bisection would help as well.
> > > > > > > >
> > > > > > > > In the end it seems like we don't really have a "stable" setup, so
> > > > > > > > bisection looks to be useless, but we did find a few things in the
> > > > > > > > meantime:
> > > > > > > >
> > > > > > > > 1. On 6.6.9 it crashes either with an unexpected GSO type or with usercopy:
> > > > > > > > Kernel memory exposure attempt detected from SLUB object
> > > > > > > > 'skbuff_head_cache'
> > > > > > >
> > > > > > > Do you have a full calltrace for this?
> > > > > >
> > > > > > I have shared it in one of the messages in this thread.
> > > > > > https://marc.info/?l=linux-virtualization&m=171085443512001&w=2
> > > > > >
> > > > > > > > 2. On 6.7.5, 6.7.10 and 6.8.1 it crashes with RIP:
> > > > > > > > 0010:skb_release_data+0xb8/0x1e0
> > > > > > >
> > > > > > > And for this?
> > > > > >
> > > > > > https://marc.info/?l=linux-netdev&m=171083870801761&w=2
> > > > > >
> > > > > > > > 3. It does NOT crash on 6.8.1 when the VM does not have a multi-queue setup
> > > > > > > >
> > > > > > > > Looks like the multi-queue setup (we have 2 interfaces × 3 virtio
> > > > > > > > queues for each) is causing problems, as if we set only one queue for
> > > > > > > > each interface the issue is gone.
> > > > > > > > Maybe there is some race condition in __pfx_vhost_task_fn+0x10/0x10 or
> > > > > > > > somewhere around it?
> > > > > > >
> > > > > > > I can't tell now, but it seems not, because if we have 3 queue pairs we
> > > > > > > will have 3 vhost threads.
> > > > > > > >
> > > > > > > > We have noticed that there are 3 such functions
> > > > > > > > in the stacktrace, which gave us hints about what we could try…
> > > > > > >
> > > > > > > Let's try to enable SLUB_DEBUG and KASAN to see if we can get
> > > > > > > something interesting.
> > > > > >
> > > > > > We were able to reproduce it even with 1 vhost queue... And now we
> > > > > > have slub_debug + KASAN, so hopefully I have more useful data for you
> > > > > > now.
> > > > > > I have attached it for better readability.
> > > > >
> > > > > Looks like we have found a "stable" kernel, and that is 6.1.32. The
> > > > > 6.3.y is broken, and we are testing 6.2.y now.
> > > > > My guess is it would be related to "virtio/vsock: replace virtio_vsock_pkt
> > > > > with sk_buff", which was done around that time, but we are going to test,
> > > > > bisect and let you know more.
> > > >
> > > > So we have been trying to bisect it, but it is basically impossible for
> > > > us to do so, as the ICE driver was quite broken for most of the release
> > > > cycle, so we have no networking on 99% of the builds and we can't test
> > > > such a setup.
> > > > More specifically, the bug was introduced between 6.2 and 6.3, but we
> > > > could not get much further. The last good commit we were able to test
> > > > was f18f9845f2f10d3d1fc63e4ad16ee52d2d9292fa, and then after 20 commits
> > > > where we had no networking we gave up.
> > > >
> > > > If you have some suspicious commit(s) we could revert - happy to test.
> > >
> > > Here is the list of changes since f18f9845f2f10d3d1fc63e4ad16ee52d2d9292fa:
> > >
> > > cbfbfe3aee71 tun: prevent negative ifindex
> > > b2f8323364ab tun: add __exit annotations to module exit func tun_cleanup()
> > > 6231e47b6fad tun: avoid high-order page allocation for packet header
> > > 4d016ae42efb Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
> > > 59eeb2329405 drivers: net: prevent tun_build_skb() to exceed the
> > > packet size limit
> > > 35b1b1fd9638 Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
> > > ce7c7fef1473 net: tun: change tun_alloc_skb() to allow bigger paged allocations
> > > 9bc3047374d5 net: tun_chr_open(): set sk_uid from current_fsuid()
> > > 82b2bc279467 tun: Fix memory leak for detached NAPI queue.
> > > 6e98b09da931 Merge tag 'net-next-6.4' of
> > > git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
> > > de4f5fed3f23 iov_iter: add iter_iovec() helper
> > > 438b406055cd tun: flag the device as supporting FMODE_NOWAIT
> > > de4287336794 Daniel Borkmann says:
> > > a096ccca6e50 tun: tun_chr_open(): correctly initialize socket uid
> > > 66c0e13ad236 drivers: net: turn on XDP features
> > >
> > > The commits that touch the datapath are:
> > >
> > > 6231e47b6fad tun: avoid high-order page allocation for packet header
> > > 59eeb2329405 drivers: net: prevent tun_build_skb() to exceed the
> > > packet size limit
> > > ce7c7fef1473 net: tun: change tun_alloc_skb() to allow bigger paged allocations
> > > 82b2bc279467 tun: Fix memory leak for detached NAPI queue.
> > > de4f5fed3f23 iov_iter: add iter_iovec() helper
> > >
> > > I assume you didn't use NAPI mode, so 82b2bc279467 ("tun: Fix memory
> > > leak for detached NAPI queue") is likely not relevant for us.
> > >
> > > The rest might contain the bad commit, if it is caused by a change in
> > > tun itself.
> > >
> > > Btw, I vaguely remember KASAN will report who did the allocation and
> > > who did the free, but that seems not to be in your KASAN log.
> > >
> > > Thanks
> > >
> > >
> > > > Thanks again.
> >
> >
> > Hello
> >
> > We have one observation. The occurrence of the error depends on the
> > ring buffer size of the physical network cards. We have two Intel E810
> > cards bonded via two interfaces (em1 + p3p2, ice driver) into a single
> > bond0. The bond0 is then Linux-bridged and/or OVS-switched to VMs via
> > tun interfaces (both switch solutions have the same problem). The VMs
> > are qemu-kvm instances using vhost/virtio-net.
> >
> > We see:
> > 1/ The issue is triggered almost instantaneously when the tx/rx ring
> > buffer is set to 2048 (our default):
> > ethtool -G em1 rx 2048 tx 2048
> > ethtool -G p3p1 rx 2048 tx 2048
> >
> > 2/ A similar issue is triggered when the tx/rx ring buffer is set to
> > 4096: the host does not crash immediately, but a trace shows up soon,
> > and later the host gets into memory pressure and crashes.
>
> This is probably a hint of a memory leak somewhere.
>
> > ethtool -G em1 rx 4096 tx 4096
> > ethtool -G p3p1 rx 4096 tx 4096
> > See the attached ring_4096.kasan.txt (vanilla 6.8.1 with KASAN enabled)
> > and ring_4096.txt (vanilla 6.8.1 without KASAN).
> >
> > 3/ The system is stable, or we just cannot trigger the issue, if the
> > ring buffer is >= 6144:
> > ethtool -G em1 rx 7120 tx 7120
> > ethtool -G p3p1 rx 7120 tx 7120
> >
> > Could it be influenced by some rate of dropped packets in the ring buffer?
>
> I can't tell.
>
> Btw, it looks like the logs were cut off. Could we get a full log?

I took it from the server console, and it was truncated by a copy-paste
issue on my side. Re-attaching the full log as the "ring_4096.log" file.

>
> Thanks
> >
> >
> > # for i in em1 p3p1; do ethtool -S ${i} | grep dropped.nic; done
> > rx_dropped.nic: 158225
> > rx_dropped.nic: 74285
> >
> > Best,
> > Jaroslav Pulchart
>
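For reference, the ring-size test procedure above boils down to a few
ethtool invocations (a sketch; em1/p3p1 are the interface names from this
setup, and "ethtool -g", which only reads the current and maximum ring
sizes, is not quoted in the thread but is standard ethtool usage):

  # show current ring sizes and the hardware maximums
  ethtool -g em1
  ethtool -g p3p1

  # set the rx/tx rings to the size under test (2048 / 4096 / 7120 above)
  ethtool -G em1 rx 4096 tx 4096
  ethtool -G p3p1 rx 4096 tx 4096

  # watch NIC-level drop counters while the VM is under network load
  for i in em1 p3p1; do ethtool -S ${i} | grep dropped.nic; done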
Mar 25 11:23:39 cmp0220 kernel: ------------[ cut here ]------------
Mar 25 11:23:39 cmp0220 kernel: virt_to_cache: Object is not a Slab page!
Mar 25 11:23:39 cmp0220 kernel: WARNING: CPU: 32 PID: 11044 at mm/slub.c:4328 kmem_cache_free+0x301/0x3d0
Mar 25 11:23:39 cmp0220 kernel: Modules linked in: mptcp_diag(E) xsk_diag(E) raw_diag(E) unix_diag(E) af_packet_diag(E) netlink_diag(E) udp_diag(E) tcp_diag(E) inet_diag(E) ebt_arp(E) nft_meta_bridge(E) xt_CT(E) xt_mac(E) xt_set(E) xt_conntrack(E) xt_comment(E) xt_physdev(E) nft_compat(E) ip_set_hash_net(E) ip_set(E) vhost_net(E) vhost(E) vhost_iotlb(E) tap(E) tun(E) rpcsec_gss_krb5(E) auth_rpcgss(E) nfsv4(E) dns_resolver(E) nfs(E) lockd(E) grace(E) netfs(E) netconsole(E) ib_core(E) scsi_transport_iscsi(E) nf_tables(E) nfnetlink(E) target_core_mod(E) macvlan(E) 8021q(E) garp(E) mrp(E) bonding(E) tls(E) binfmt_misc(E) dell_rbu(E) sunrpc(E) btrfs(E) xor(E) zstd_compress(E) vfat(E) fat(E) dm_service_time(E) raid6_pq(E) dm_multipath(E) ipmi_ssif(E) intel_rapl_msr(E) intel_rapl_common(E) amd64_edac(E) edac_mce_amd(E) kvm_amd(E) kvm(E) dell_smbios(E) acpi_ipmi(E) irqbypass(E) dcdbas(E) wmi_bmof(E) dell_wmi_descriptor(E) ipmi_si(E) mgag200(E) rapl(E) i2c_algo_bit(E) acpi_cpufreq(E) ipmi_devintf(E) ptdma(E) i2c_piix4(E) wmi(E) k10temp(E)
Mar 25 11:23:39 cmp0220 kernel: ipmi_msghandler(E) acpi_power_meter(E) fuse(E) zram(E) ext4(E) mbcache(E) jbd2(E) dm_crypt(E) sd_mod(E) t10_pi(E) sg(E) crct10dif_pclmul(E) ahci(E) crc32_pclmul(E) polyval_clmulni(E) ice(E) libahci(E) polyval_generic(E) ghash_clmulni_intel(E) sha512_ssse3(E) libata(E) megaraid_sas(E) gnss(E) ccp(E) sp5100_tco(E) dm_mirror(E) dm_region_hash(E) dm_log(E) dm_mod(E) nf_conntrack(E) libcrc32c(E) crc32c_intel(E) nf_defrag_ipv6(E) nf_defrag_ipv4(E) br_netfilter(E) bridge(E) stp(E) llc(E)
Mar 25 11:23:39 cmp0220 kernel: Unloaded tainted modules: fjes(E):2 padlock_aes(E):3
Mar 25 11:23:39 cmp0220 kernel: CPU: 32 PID: 11044 Comm: vhost-10998 Tainted: G E 6.8.1-1.gdc.el9.x86_64 #1
Mar 25 11:23:39 cmp0220 kernel: Hardware name: Dell Inc. PowerEdge R7525/0H3K7P, BIOS 2.14.1 12/17/2023
Mar 25 11:23:39 cmp0220 kernel: RIP: 0010:kmem_cache_free+0x301/0x3d0
Mar 25 11:23:39 cmp0220 kernel: Code: fd ff ff 80 3d 3a 9f f5 01 00 0f 85 bf fe ff ff 48 c7 c6 50 4d a7 8e 48 c7 c7 40 69 fd 8e c6 05 1f 9f f5 01 01 e8 1f 44 d3 ff <0f> 0b e9 9e fe ff ff 48 8d 42 ff e9 63 fd ff ff 4c 8d 68 ff e9 e3
Mar 25 11:23:39 cmp0220 kernel: RSP: 0018:ffffab12de55bc80 EFLAGS: 00010282
Mar 25 11:23:39 cmp0220 kernel: RAX: 0000000000000000 RBX: ffff91d2b4dd9800 RCX: 0000000000000000
Mar 25 11:23:39 cmp0220 kernel: RDX: ffff91b27fcad800 RSI: ffff91b27fca0a40 RDI: ffff91b27fca0a40
Mar 25 11:23:39 cmp0220 kernel: RBP: ffffab12de55bcc8 R08: 0000000000000000 R09: 00000000ffff7fff
Mar 25 11:23:39 cmp0220 kernel: R10: ffffab12de55bb20 R11: ffffffff8fbe2968 R12: ffff91d334dd9800
Mar 25 11:23:39 cmp0220 kernel: R13: 0000000000000000 R14: 0000000000000001 R15: 00000000000064e4
Mar 25 11:23:39 cmp0220 kernel: FS: 00007f817fe32f80(0000) GS:ffff91b27fc80000(0000) knlGS:0000000000000000
Mar 25 11:23:39 cmp0220 kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Mar 25 11:23:39 cmp0220 kernel: CR2: 000000c1ad59b000 CR3: 0000014035e20001 CR4: 0000000000770ef0
Mar 25 11:23:39 cmp0220 kernel: PKRU: 55555554
Mar 25 11:23:39 cmp0220 kernel: Call Trace:
Mar 25 11:23:39 cmp0220 kernel:  <TASK>
Mar 25 11:23:39 cmp0220 kernel:  ? __warn+0x80/0x130
Mar 25 11:23:39 cmp0220 kernel:  ? kmem_cache_free+0x301/0x3d0
Mar 25 11:23:39 cmp0220 kernel:  ? report_bug+0x195/0x1a0
Mar 25 11:23:39 cmp0220 kernel:  ? prb_read_valid+0x17/0x20
Mar 25 11:23:39 cmp0220 kernel:  ? handle_bug+0x3c/0x70
Mar 25 11:23:39 cmp0220 kernel:  ? exc_invalid_op+0x14/0x70
Mar 25 11:23:39 cmp0220 kernel:  ? asm_exc_invalid_op+0x16/0x20
Mar 25 11:23:39 cmp0220 kernel:  ? kmem_cache_free+0x301/0x3d0
Mar 25 11:23:39 cmp0220 kernel:  ? skb_release_data+0x107/0x1e0
Mar 25 11:23:39 cmp0220 kernel:  tun_do_read+0x68/0x1f0 [tun]
Mar 25 11:23:39 cmp0220 kernel:  tun_recvmsg+0x7e/0x160 [tun]
Mar 25 11:23:39 cmp0220 kernel:  handle_rx+0x3ab/0x750 [vhost_net]
Mar 25 11:23:39 cmp0220 kernel:  ? init_numa_balancing+0xd7/0x1e0
Mar 25 11:23:39 cmp0220 kernel:  vhost_worker+0x42/0x70 [vhost]
Mar 25 11:23:39 cmp0220 kernel:  vhost_task_fn+0x4b/0xb0
Mar 25 11:23:39 cmp0220 kernel:  ? finish_task_switch.isra.0+0x8f/0x2a0
Mar 25 11:23:39 cmp0220 kernel:  ? __pfx_vhost_task_fn+0x10/0x10
Mar 25 11:23:39 cmp0220 kernel:  ? __pfx_vhost_task_fn+0x10/0x10
Mar 25 11:23:39 cmp0220 kernel:  ret_from_fork+0x2d/0x50
Mar 25 11:23:39 cmp0220 kernel:  ? __pfx_vhost_task_fn+0x10/0x10
Mar 25 11:23:39 cmp0220 kernel:  ret_from_fork_asm+0x1b/0x30
Mar 25 11:23:39 cmp0220 kernel:  </TASK>
Mar 25 11:23:39 cmp0220 kernel: ---[ end trace 0000000000000000 ]---
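For anyone trying to reproduce this, a minimal sketch of the debug and
guest settings implied by the thread (the exact .config and qemu command
line are assumptions, not taken from the reporters' environment):

  # kernel config fragment for the KASAN + slub_debug runs mentioned above
  CONFIG_KASAN=y
  CONFIG_KASAN_GENERIC=y
  CONFIG_SLUB_DEBUG=y

  # alternatively, SLUB debugging can be enabled at boot without a rebuild:
  #   slub_debug=FZPU   (sanity checks, red zoning, poisoning, user tracking)

  # multi-queue virtio-net guest NIC with 3 queue pairs (vectors = 2*queues + 2),
  # matching the "3 virtio queues per interface" configuration in the thread
  qemu-system-x86_64 ... \
      -netdev tap,id=net0,vhost=on,queues=3 \
      -device virtio-net-pci,netdev=net0,mq=on,vectors=8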