Re: Heavy loaded cards

Hi,

I would like to hear your comments on these findings.

It seems that in BSD, and in other operating systems, you can change certain
values to improve the performance of Ethernet cards under heavy network
load.

For example, I found this site, http://www.psc.edu/networking/perf_tune.html,
which discusses in detail how to do that.  My question is this: does
changing the size of the buffers in

/proc/sys/net/core/rmem_default   - default receive window
/proc/sys/net/core/rmem_max       - maximum receive window
/proc/sys/net/core/wmem_default  - default send window 
/proc/sys/net/core/wmem_max      - maximum send window

improve performance on Linux?  Has anyone done this on their
servers?  The document also describes an algorithm that autonegotiates
the best buffer size for transferring data, and Linux does not seem to
have that support built in.  In your opinion, is it worth investigating
this and tweaking our Linux servers?
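
Changing the values would look something like this (the numbers below are
just placeholders I picked for illustration, not values I have benchmarked):

  # raise the hard limits first, then the defaults
  echo 262144 > /proc/sys/net/core/rmem_max
  echo 262144 > /proc/sys/net/core/wmem_max
  echo 65536  > /proc/sys/net/core/rmem_default
  echo 65536  > /proc/sys/net/core/wmem_default

  # or the same via sysctl; putting the entries in /etc/sysctl.conf
  # makes them survive a reboot
  sysctl -w net.core.rmem_max=262144
  sysctl -w net.core.wmem_max=262144

As I understand it, raising only the *_max values lets applications ask for
bigger buffers with setsockopt(SO_RCVBUF/SO_SNDBUF), while raising the
*_default values changes what sockets get when they do not ask.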


Now, regarding my previous post, I have the following comment:

The network infrastructure we are using has not changed in some time;
however, our traffic has.  The host in question has a load average of less
than 1, yet it is producing overruns like crazy.  I cannot attribute this
problem to a sluggish machine, given the low CPU load.  It was suggested
that the machine may have an older version of ifconfig, and after looking
at /proc I am inclined to believe he is right.

After editing the output of "cat /proc/net/dev" to fit the screen, we get:

Inter-|                     Receive
 face |     bytes  packets errs drop fifo frame compressed multicast
    lo:      23416      262    0    0    0     0          0         0
  eth0: 1068796065  9367666    0    0    0     0          0         0
  eth1:     268139     4196    0    0    1     0          0         0

Inter-|                     Transmit
 face |     bytes  packets errs drop fifo colls carrier compressed
    lo:      23416      262    0    0    0     0       0          0
  eth0: 3766429230 13754583    2    0    0     0       2          0
  eth1:       1008       24    0    0    0     0       0          0

while ifconfig says this.

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Bcast:127.255.255.255  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:3924  Metric:1
          RX packets:23416 errors:262 dropped:0 overruns:0
          TX packets:0 errors:0 dropped:0 overruns:23416

eth0      Link encap:10Mbps Ethernet  HWaddr 00:20:78:E0:A2:52
          inet addr:222.222.8.118  Bcast:222.222.8.127 Mask:255.255.255.240
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1074032897 errors:9413029 dropped:0 overruns:0
          TX packets:0 errors:0 dropped:0 overruns:2147483647
          Interrupt:10 Base address:0xe000

eth1      Link encap:10Mbps Ethernet  HWaddr 00:01:02:46:08:B5
          inet addr:222.222.8.125  Bcast:222.222.8.127 Mask:255.255.255.240
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:268319 errors:4196 dropped:0 overruns:0
          TX packets:0 errors:0 dropped:0 overruns:1008
          Interrupt:12 Base address:0xdc00
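
In the meantime, here is a rough sketch of how the counters could be read
straight from /proc/net/dev, bypassing whatever the old ifconfig binary
does with the columns.  It assumes the layout shown above (eight receive
fields followed by eight transmit fields after the "iface:" label); as far
as I can tell, ifconfig's "overruns" corresponds to the fifo column:

  awk 'NR > 2 {
          sub(/^ +/, "")             # strip leading blanks
          split($0, f, /[: ]+/)      # f[1]=iface, f[2..9]=RX, f[10..17]=TX
          printf "%-5s RX pkts=%s errs=%s fifo=%s | TX pkts=%s errs=%s fifo=%s carrier=%s\n",
                 f[1], f[3], f[4], f[6], f[11], f[12], f[14], f[16]
       }' /proc/net/dev

On this host that should print the numbers from the table above, e.g.
9367666 RX packets and 0 fifo overruns for eth0, which is quite different
from what ifconfig reports.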


And lastly: for kernels 2.0.36 and 2.2.14+, what is the most stable
driver/Ethernet vendor that you would recommend from your experience?  Our
servers have 3Com 3c59x, RealTek, SMC, etc.  Some work great and others
shut down under heavy load.  I would love to stick to one vendor with a
mature driver.  A lot of DoS attacks target this flaw and end up
bringing the interface to its knees.

Thanks in advance for any feedback.  If I am on the wrong mailing list,
please let me know which list would be the best place to figure out
what is going on.

best regards,
Adonis


On Mon, 11 Dec 2000, Dennis wrote:

> At 05:56 PM 12/10/2000, you wrote:
> >Hi,
> >
> >We have a highly (network) loaded server, running redhat 6.2.  The card is
> >using tulip driver.
> >
> >This is the output of ifconfig (note:IPs in this post are not real numbers)
> >eth0      Link encap:10Mbps Ethernet  HWaddr 00:20:78:E0:A2:52
> >           inet addr:222.222.8.118  Bcast:222.222.8.127 Mask:255.255.255.240
> >           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
> >           RX packets:23883054 errors:181152 dropped:0 overruns:0
> >           TX packets:0 errors:0 dropped:0 overruns:95989676
> >           Interrupt:10 Base address:0xe000
> >
> >We are getting tons of overruns, and I do not know how to  fix this
> >I tried setting the card via the /etc/conf.module, but nothing changed
> 
> Its LINUX that cant handle the load, the card is just a card. You can A) 
> use a faster machine b) reduce other tasks running on the system..for 
> example webservers, PERL scripts anything with substantial disk activity 
> that may be taking CPU cycles away from processing network traffic, or c) 
> try a different card.
> 
> Dennis
> 




