Re: Performance with ethernet channel bonding

> The default 32K window is too small for fast networks (fast ethernet and
> up, or anything with longer latency like round robin setups). You can
> change it using the /proc/sys/net/core/[rw]mem_{default,max} sysctls
> (see socket(7)).
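
For anyone else following along: those are plain sysctl files, so the
values can be changed just by writing to them as root. A trivial sketch
(one file shown; rmem_default, wmem_default, and wmem_max work the same
way):

#include <stdio.h>

int main(void)
{
    /* Writing to the /proc entry is equivalent to
     * `echo 131072 > /proc/sys/net/core/rmem_max` from a root shell. */
    FILE *f = fopen("/proc/sys/net/core/rmem_max", "w");

    if (f == NULL) {
        perror("fopen (needs root)");
        return 1;
    }
    fprintf(f, "131072\n");   /* 128k, one of the values we tried */
    fclose(f);
    return 0;
}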

Here are the results from more testing. We tried the following values
for rmem_default, rmem_max, and the corresponding wmem settings:
 - 96k (98304)
 - 128k (131072)
 - 192k (196608)
The values were the same on both systems under test. There was no
significant change: we saw the same pattern as before [worst drop-off at
about 7200-byte messages, falling to about 10% of expected throughput].
Should I try larger sizes [say 1meg] before giving up on this?
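
In case it helps someone reproduce this, here is a small sketch (my own,
not tested on the bonded setup) for confirming what buffer size the
kernel actually grants. Per socket(7), Linux doubles the value passed to
setsockopt() to allow for bookkeeping overhead, and the maximum is
limited by rmem_max/wmem_max, so reading the value back with
getsockopt() shows whether the sysctl change took effect:

#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int requested = 196608;   /* 192k, one of the sizes we tried */
    int granted;
    socklen_t len = sizeof(granted);

    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF,
                   &requested, sizeof(requested)) < 0)
        perror("setsockopt");
    if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &granted, &len) < 0)
        perror("getsockopt");
    else
        printf("asked for %d bytes, kernel granted %d\n",
               requested, granted);

    close(fd);
    return 0;
}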

Is there something else I can try, such as settings for congestion
avoidance?
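
One thing worth ruling out while we are at it: TCP only advertises
windows larger than 64k when window scaling is enabled, since the raw
window field in the header is 16 bits. I assume the sysctls below are
the relevant knobs; a quick sketch to dump their current values:

#include <stdio.h>

int main(void)
{
    const char *knobs[] = {
        "/proc/sys/net/ipv4/tcp_window_scaling",
        "/proc/sys/net/ipv4/tcp_timestamps",
        "/proc/sys/net/ipv4/tcp_sack",
    };
    unsigned i;

    for (i = 0; i < sizeof(knobs) / sizeof(knobs[0]); i++) {
        FILE *f = fopen(knobs[i], "r");
        char buf[64];

        if (f && fgets(buf, sizeof(buf), f))
            printf("%s = %s", knobs[i], buf); /* value includes '\n' */
        else
            printf("%s: not readable\n", knobs[i]);
        if (f)
            fclose(f);
    }
    return 0;
}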

Thanks.
--Mark H Johnson
  <mailto:Mark_H_Johnson@raytheon.com>


-
To unsubscribe from this list: send the line "unsubscribe linux-net" in
the body of a message to majordomo@vger.kernel.org

