RE: CentOS 4.4 e1000 and wire-speed

> -----Original Message-----
> From: centos-bounces@xxxxxxxxxx 
> [mailto:centos-bounces@xxxxxxxxxx] On Behalf Of Ross S. W. Walker
> Sent: Saturday, December 30, 2006 6:37 PM
> To: centos@xxxxxxxxxx
> Subject:  CentOS 4.4 e1000 and wire-speed
> 
> 
> Currently I'm running CentOS 4.4 on a Dell PowerEdge 850 with an Intel
> PRO/1000 quad-port adapter.
> 
> I seem to be able to achieve only 80% utilization on the adapter,
> while on the same box running Fedora Core 5 I was able to reach 99%
> utilization.
> 
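> (For the raw-TCP side, a quick iperf run is a simple sanity check;
> target-host below is a placeholder for the target's address:)
> 
>   iperf -s                      # on the target
>   iperf -c target-host -t 30    # on the initiator; ~940 Mbit/s TCP
>                                 # payload is wire speed for gigabit
> 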
> I am using iSCSI Enterprise Target as my application, with its nullio
> feature for the bandwidth test: nullio simply discards any write and
> returns random data for any read. The I/O pattern I use is 1 MB
> sequential blocks.
> 
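> (For reference, a nullio target is declared in /etc/ietd.conf roughly
> like this; the IQN is a placeholder and the exact parameter spelling
> should be checked against the IET README:)
> 
>   Target iqn.2006-12.example.com:nullio-test
>       # Type=nullio serves a fake LUN: writes are discarded, reads
>       # return junk. Sectors sets the advertised size in 512-byte units.
>       Lun 0 Sectors=20971520,Type=nullio
> 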
> Is there something in the OS that is throttling the bandwidth? It
> seems to be capped at 80%.
> 
> I have SELinux and iptables disabled, and I have tuned the TCP/IP
> stack to mimic the settings under kernel 2.6.17, except that I bumped
> up the default IP send/receive buffer sizes for improved UDP
> transmission over 1 Gbps.
> 
> The CPU is a dual-core P4 at 3 GHz, not top of the line but adequate
> for my needs (strictly block I/O).
> 
> Here are the TCP/IP tunables from my sysctl.conf:
> 
> # Controls default receive buffer size (bytes)
> net.core.rmem_default = 262144
> 
> # Controls IP default send buffer size (bytes)
> net.core.wmem_default = 262144
> 
> # Controls IP maximum receive buffer size (bytes)
> net.core.rmem_max = 262144
> 
> # Controls IP maximum send buffer size (bytes)
> net.core.wmem_max = 262144
> 
> # Controls TCP memory utilization (pages)
> net.ipv4.tcp_mem = 49152 65536 98304
> 
> # Controls TCP sliding receive window buffer (bytes)
> net.ipv4.tcp_rmem = 4096 87380 16777216
> 
> # Controls TCP sliding send window buffer (bytes)
> net.ipv4.tcp_wmem = 4096 65536 16777216
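> 
> (These take effect after running sysctl -p; individual values can be
> spot-checked, for example:)
> 
>   sysctl -p                    # reload /etc/sysctl.conf
>   sysctl net.core.rmem_max     # should print: net.core.rmem_max = 262144
>   cat /proc/sys/net/ipv4/tcp_rmem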

Let me follow up on my own post by saying that the culprit has been
found, and it had nothing to do with CentOS.

The culprit was actually the Broadcom 5708S adapters on the initiators.
A driver upgrade fixed the issue that was throttling bandwidth in both
directions. These drivers are awkward to manage because they are split
into two parts so that the TOE engine can work with the new Microsoft
Scalable Networking Pack. The TOE engine, by the way, does not handle
iSCSI, so I have to keep it disabled; strangely enough, Broadcom sells
an iSCSI engine for the same adapter as a separate product.
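
(The TOE engine itself is managed through Broadcom's own tooling, but
the generic offloads on the bnx2 driver can at least be inspected and
toggled with stock ethtool; eth0 is a placeholder for the actual
interface:)

  ethtool -k eth0              # list current offload settings
  ethtool -K eth0 tso off      # e.g. disable TCP segmentation offload
                               # while testing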

-Ross


_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos
