Re: Performance with ethernet channel bonding

On Mon, Aug 28, 2000 at 04:14:42PM -0500, Mark_H_Johnson@Raytheon.com wrote:
> 
> > The default 32K window is too small for fast networks (fast ethernet and up,
> > or anything with longer latency like round robin setups). You can change it
> > using the /proc/sys/net/core/[rw]mem_{default,max} sysctls (see socket(7)).
> 
> Here are the results from more testing. We tried the following settings for
> rmem_default, rmem_max, etc.:
>  - 96k (98304)
>  - 128k (131072)
>  - 192k (196608)
> Values were the same on both systems under test. No significant change in
> the results: we saw the same worst-case drop-off at about 7200-byte
> messages, where throughput falls to roughly 10% of what we expect. Should I
> try larger sizes [say 1 MB] before giving up on this?

Don't forget to restart inetd or the independently running server so it
picks up the bigger sizes; the defaults are applied when a socket is
created, so daemons that are already running keep their old buffers.
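
For reference, a minimal sketch of trying a 1 MB size from a root shell
(the sysctl paths are the ones from socket(7) above; how you restart the
server depends on your setup):

  # bump the global socket buffer defaults and limits to 1 MB
  echo 1048576 > /proc/sys/net/core/rmem_max
  echo 1048576 > /proc/sys/net/core/wmem_max
  echo 1048576 > /proc/sys/net/core/rmem_default
  echo 1048576 > /proc/sys/net/core/wmem_default
  # then restart inetd or the standalone server so that its new
  # sockets are created with the new defaults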

> Is there something else I can try, such as settings for congestion
> avoidance?

You could take a tcpdump of a test run and feed it to the tcptrace tool;
that should tell you whether you are seeing extensive retransmits.
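
Something along these lines (the interface and host names are placeholders
for your setup; check the tcpdump and tcptrace man pages for the exact
options your versions support):

  # capture packet headers during a test run
  tcpdump -i eth0 -s 100 -w /tmp/bond.dump host otherhost
  # per-connection summary; look at the rexmt counters
  tcptrace -l /tmp/bond.dump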


-Andi

