You may want to install sysstat (sar) and look at the rates with
'sar -n EDEV' and 'sar -n DEV' to compare drops against packets. In my
experience on critical production systems, if the rate is less than one
drop/error per 10,000 packets, you generally won't see a performance
impact. The lower that packets-per-drop number gets, the worse it is;
at around 1 per 1,000 you can start to see problems. In general, if you
are at 1 issue per 40,000 packets or better, that is a pretty clean
network. (Example commands and a quick check against your eno1 counters
are below the quoted text.)

On Fri, May 25, 2018 at 10:24 AM, Thomas Dineen <tdineen@xxxxxxxxxxxxx> wrote:
> Alex:
>
> Trivial answer: a slow server drops packets. It takes a lot of server
> horsepower to process a 1 Gb wire-speed flow of packets.
>
> Thomas Dineen
>
> On 5/24/2018 6:43 PM, Alex wrote:
>>
>> Hi,
>>
>> Can someone explain why an interface would start showing dropped
>> packets and overruns? I have about six machines on a local LAN (the
>> IP is associated with the br0 device), and all have at least some
>> amount of dropped packets. This is one example from one of the
>> machines on the LAN; the LAN interface on the gateway machine is
>> very similar.
>>
>> eno1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
>>         inet6 fe80::ec4:7aff:fe7a:73f4  prefixlen 64  scopeid 0x20<link>
>>         ether 0c:c4:7a:7a:73:f4  txqueuelen 1000  (Ethernet)
>>         RX packets 2294973231  bytes 1227551884960 (1.1 TiB)
>>         RX errors 0  dropped 159933  overruns 2252  frame 0
>>         TX packets 2707484667  bytes 1948072588485 (1.7 TiB)
>>         TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
>>         device memory 0xc7200000-c727ffff
>>
>> I recently rebooted the gateway and noticed it there first. It's a
>> Fedora 25 system acting as a gateway with shorewall. The LAN side is
>> a 1 Gb/s Ethernet on a gigabit switch. The WAN side is a 10 Mbit
>> Ethernet link in a colo. I suspect this machine is the cause, as
>> nothing's changed on the LAN machines for a while, and the dropped
>> packet count isn't incrementing fast enough to coincide with more
>> than 1 TB of traffic.
>>
>> I have IPMI access to the machines on the LAN, so I can do testing,
>> but I don't have IPMI access to the gateway, so I can't really do
>> much without having to drive to the colo first.
>>
>> What's the typical cause of these errors? I thought it was perhaps
>> the duplex mode or another link setting, but they all appear to be
>> the same (1000/full).
>>
>> There aren't any dropped packets or overruns on the WAN interface on
>> the gateway, but could some signal or other data from the WAN side
>> be causing this?
>>
>> I can run wireshark or something similar, but it's been a while, so
>> if that's your recommendation, I'd really appreciate it if you could
>> provide the specific traces you think would be best.
>>
>> Ideas greatly appreciated.
>> Thanks,
>> Alex
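
The example commands mentioned above -- a rough sketch only; the package
name is the Fedora one, and the column headings are from the sysstat
versions I've used, so double-check sar(1) on your box:

    sudo dnf install sysstat
    sar -n DEV 1 60      # per-interface packet rates: rxpck/s, txpck/s, ...
    sar -n EDEV 1 60     # per-interface error/drop rates: rxdrop/s, rxerr/s, rxfifo/s, ...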
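
And a quick check of that rule of thumb against the eno1 RX counters
quoted above (assuming I copied the numbers correctly):

    python3 -c 'print(2294973231 / 159933)'    # RX packets / RX dropped on eno1
    # Comes out around 14,350, i.e. roughly one drop per ~14k RX packets:
    # a lower drop rate than the 1-per-10,000 level where I'd start to
    # expect a visible impact, but not as good as the 1-per-40,000 I'd
    # call a really clean network.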