Re: NIC Stability Problems Under Xen 4.4 / CentOS 6 / Linux 3.18

On 28/01/17 05:21, Kevin Stange wrote:
On 01/27/2017 06:08 AM, Karel Hendrych wrote:
Have you tried to eliminate all power management features all over?

I've been trying to find and disable all power management features but
having relatively little luck with that solving the problems.  Stabbing
in the dark, I've tried different ACPI settings, including completely
disabling it, disabling CPU frequency scaling, and setting pcie_aspm=off
on the kernel command line.  Are there other kernel options that might
be useful to try?
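
For reference, the boot entry I've been experimenting with looks roughly
like this in grub.conf (the Xen-side cpuidle/cpufreq options are my best
guess at where those knobs live under a hypervisor, so treat this as a
sketch, not a confirmed fix):

    # /boot/grub/grub.conf - Xen entry (paths illustrative)
    kernel /xen.gz cpuidle=0 cpufreq=none     # C-states / freq scaling off
    module /vmlinuz-3.18.x ro root=/dev/mapper/vg-root pcie_aspm=off
    module /initramfs-3.18.x.img
    # acpi=off on the module line is the big hammer for disabling ACPI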

May I chip in here? In our environment we're randomly seeing:

Jan 17 23:40:14 xen01 kernel: ixgbe 0000:04:00.1 eth6: Detected Tx Unit Hang
Jan 17 23:40:14 xen01 kernel:  Tx Queue             <0>
Jan 17 23:40:14 xen01 kernel:  TDH, TDT             <9a>, <127>
Jan 17 23:40:14 xen01 kernel:  next_to_use          <127>
Jan 17 23:40:14 xen01 kernel:  next_to_clean        <98>
Jan 17 23:40:14 xen01 kernel: ixgbe 0000:04:00.1 eth6: tx_buffer_info[next_to_clean]
Jan 17 23:40:14 xen01 kernel:  time_stamp           <218443db3>
Jan 17 23:40:14 xen01 kernel:  jiffies              <218445368>
Jan 17 23:40:14 xen01 kernel: ixgbe 0000:04:00.1 eth6: tx hang 1 detected on queue 0, resetting adapter
Jan 17 23:40:14 xen01 kernel: ixgbe 0000:04:00.1 eth6: Reset adapter
Jan 17 23:40:15 xen01 kernel: ixgbe 0000:04:00.1 eth6: PCIe transaction pending bit also did not clear.
Jan 17 23:40:15 xen01 kernel: ixgbe 0000:04:00.1: master disable timed out
Jan 17 23:40:15 xen01 kernel: bonding: bond1: link status down for interface eth6, disabling it in 200 ms
Jan 17 23:40:15 xen01 kernel: bonding: bond1: link status definitely down for interface eth6, disabling it
[...] repeated every second or so.
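
If I'm reading the hang check right, the driver is simply comparing the
queue's last completed TX timestamp against jiffies before resetting:

    0x218445368 - 0x218443db3 = 0x15b5 = 5557 jiffies  (~5.5 s at HZ=1000)

so the queue had been stuck for several seconds, and then the reset
itself fails with the "master disable timed out" / pending-bit errors.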

Are the devices connected to the same network infrastructure?

There are two onboard NICs and two NICs on a dual-port card in each
server.  All devices connect to a cisco switch pair in VSS and the links
are paired in LACP.
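
For reference, the bonds are the stock CentOS 6 network-scripts setup,
roughly like this (trimmed, names illustrative):

    # /etc/sysconfig/network-scripts/ifcfg-bond1
    DEVICE=bond1
    ONBOOT=yes
    BOOTPROTO=none
    BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"

    # /etc/sysconfig/network-scripts/ifcfg-eth6  (repeated per slave)
    DEVICE=eth6
    ONBOOT=yes
    MASTER=bond1
    SLAVE=yes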

We've been experiencing ixgbe stability issues on CentOS 6.x with various 3.x kernels for years, across different ixgbe driver versions, and to date the only way to completely get rid of the problem has been to switch from Intel to Broadcom. Just like in your case, it pops up randomly, and the only reliable temporary fix is to reboot the affected Xen node. Another temporary fix that worked several times, but not always, was to migrate or shut down the domUs, deactivate the volume groups, log out of all the iSCSI targets, then "ifdown bond1" and "modprobe -r ixgbe" followed by "ifup bond1".
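
Spelled out, that recovery sequence is roughly the following (volume
group and target names are placeholders for ours):

    # after migrating or shutting down the domUs:
    vgchange -an vg_guests       # deactivate VGs backed by the iSCSI LUNs
    iscsiadm -m node -U all      # log out of all iSCSI targets
    ifdown bond1
    modprobe -r ixgbe            # unload the driver
    ifup bond1                   # slaves come up again, ixgbe reloads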

The setup is:
- Intel dual-port 10Gb Ethernet - either X520-T2 or X540-T2
- Tried Xen kernels from both xen.crc.id.au and the CentOS 6 Xen repos
- LACP bonding to the NFS & iSCSI storage over a Brocade VDX6740T fabric, MTU=9000
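
With jumbo frames in the mix, one sanity check worth mentioning is
forcing an unfragmented 9000-byte path end to end (8972 = 9000 minus 20
bytes IP and 8 bytes ICMP header; the hostname is a placeholder):

    ping -M do -s 8972 -c 3 nfs-storage-host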

There has to be something common.

The NICs having issues are running a native VLAN, a tagged VLAN, iSCSI
and NFS traffic, as well as some basic management stuff over SSH, and
they are configured with an MTU of 9000 on the native VLAN.  It's a lot
of features, but I can't really turn them off and still put enough load
on the NICs to reproduce the issue.  Several of these servers were
installed and burned in for 3 months without ever having an issue, but
suddenly collapsed when I tried to bring 20 or so real-world VMs up on
them.
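
If I do get a maintenance window, the offload side at least is easy to
toggle per NIC with ethtool, something like:

    ethtool -k eth6                                  # query current offloads
    ethtool -K eth6 tso off gso off gro off lro off  # turn them off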

There "appears" to be some sort of load-dependent pattern here too, but it's impossible to confirm it. The only stability improvement I was able to use "dom0_max_vcpus=1 dom0_vcpus_pin". Haven't tried pci=nomsi yet.

The other NICs in the system that are connected don't exhibit issues and
run only VM network interfaces.  They are also in LACP and running VLAN
tags, but normal 1500 MTU.

So far it seems to correlate with NICs on the expansion cards, but that
may just be coincidence, since these cards are also the ones carrying the
storage and management traffic.  I'm trying to swap some of this load to the onboard
NICs to see if the issues migrate over with it, or if they stay with the
expansion cards.

If the issue exists on both NIC types, then it rules out the specific
NIC chipset as the culprit.  It could point to the driver, but upgrading
it to a newer version did not help and actually appeared to make
everything worse.  This issue might actually be more to do with the PCIe
bridge than the NICs, but these are still different motherboards with
different PCIe bridges (5520 vs C600) experiencing the same issues.
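
If it helps to compare the two boards, the bridge each port hangs off is
quick to pull from the PCI tree (04:00.1 is the flaky port in my logs
above):

    lspci -tv               # tree view: NIC functions under their root port
    lspci -vv -s 04:00.1    # link details: LnkCap/LnkSta, ASPM state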

I've been using Intel NICs with Xen/CentOS for ages with no issues.

I figured that must be so.  Everyone uses Intel NICs.  If this was a
common issue, it would probably be causing a lot of people a lot of trouble.


Adi Pircalabu
_______________________________________________
CentOS-virt mailing list
CentOS-virt@xxxxxxxxxx
https://lists.centos.org/mailman/listinfo/centos-virt


