Re: Gigabit ethernet cards (Pro/1000 copper)


 




I believe your first question has already been answered.

For your second question, as others have already mentioned, ttcp and
netperf are good tools for measuring performance.  Keep in mind that there
is still overhead introduced by higher-layer protocols, so don't expect
performance numbers near media speed.  The UDP_STREAM test in netperf
reports numbers relatively close to the actual best performance of the
driver/board.  One thing of note: upgrading from Linux kernel 2.4.5 to
2.4.7 greatly improved throughput with Intel's e1000.c driver (both
v3.0.10 and v3.0.16) on the PRO/1000F.
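
For reference, a run looks something like the following, assuming
netserver has been started on the remote machine and 10.0.1.2 is its
gigabit interface (the address, duration, and message size here are
illustrative):

    # on the receiver
    netserver

    # on the sender: 30-second UDP_STREAM test with 1472-byte messages
    netperf -H 10.0.1.2 -t UDP_STREAM -l 30 -- -m 1472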

BTW, from where did you get Donald Becker's intel-gige.c driver?  I've
tried a cursory net search with no luck.

Bruce Allan/Beaverton/IBM
IBM Linux Technology Center - OS Gold
503-578-4187   T/L 775-4187
bruce.allan@us.ibm.com



                                                                                                                                                     
From: David Radclyfe <dsr255@yahoo.com>
Sent by: linux-net-owner@vger.kernel.org
To: linux-net@vger.kernel.org
cc: dsr255@yahoo.com
Subject: Gigabit ethernet cards (Pro/1000 copper)
Date: 08/09/2001 08:46 PM

Hello all,

I'm testing some copper-based Intel Pro/1000 cards for
use in a high-performance system. Because of the
structure of our interconnect, I'm trying two cards
per machine (on dual-CPU Intel STL2 motherboards).
Thanks to this list and some very helpful FAQs, I have
two machines up and running, each with two gigabit
cards plus the original built-in 100Mbit. The pairs of
gigabit cards are connected directly to each other and
configured as 10.0.1.x and 10.0.2.x respectively. The
cards' link lights show they have negotiated gigabit,
and the kernel recognizes them as 1000Mbit (when I
unplug and replug one of them I get "e1000: eth1 NIC
Link is up 1000Mbps Full Duplex"). They show up in
lspci -v as 66 MHz PCI, as they should. I have
verified that I am routing properly and that all three
cards in each machine are working.
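
For reference, the addressing above amounts to something like the
following on the first machine, assuming the gigabit ports come up as
eth1 and eth2 -- the interface names and netmask are illustrative,
with .2 on the peer:

    ifconfig eth1 10.0.1.1 netmask 255.255.255.0 up    # first gigabit pair
    ifconfig eth2 10.0.2.1 netmask 255.255.255.0 up    # second gigabit pair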

I have two questions/problems I was hoping for advice
on:

1) lspci shows the cards using only 32-bit memory access,
but the cards are 64-bit and the motherboard supports
64-bit PCI. I tried both the Becker and Intel drivers. Is
64-bit just not supported in the release?

2) Before trying to run any real code, I have tried
some basic performance tests -- using scp, and just
blasting small (~256 byte) UDP packets from one
machine to the other and seeing how long it takes to
send 100k packets. In both cases, the gigabit cards
show only marginal performance gains over the 100Mbit
card -- a 150MB file takes 13s rather than 15s to send
using scp. Maybe this is just some other bottleneck,
but what do you guys do to measure bandwidth? I want
to convince myself that I haven't missed some setup
step needed to make the cards run at gigabit speed and
that they are operating properly. I expected better
performance. Again, I have tried both the Becker and
Intel modules.
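
For concreteness, the UDP blast test is essentially the following
minimal sketch (the target address and port are illustrative, and
error handling is kept minimal):

    /* Minimal sketch of the UDP blast test: send 100,000 256-byte
       datagrams and time the run.  Target address/port are illustrative. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/time.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(void)
    {
        struct sockaddr_in dst;
        struct timeval t0, t1;
        char buf[256];
        double secs;
        int s, i;

        s = socket(AF_INET, SOCK_DGRAM, 0);
        if (s < 0) { perror("socket"); return 1; }

        memset(&dst, 0, sizeof(dst));
        dst.sin_family = AF_INET;
        dst.sin_port = htons(9);                      /* discard port, illustrative */
        dst.sin_addr.s_addr = inet_addr("10.0.1.2");  /* remote gigabit interface */

        memset(buf, 0xA5, sizeof(buf));

        gettimeofday(&t0, NULL);
        for (i = 0; i < 100000; i++) {
            if (sendto(s, buf, sizeof(buf), 0,
                       (struct sockaddr *)&dst, sizeof(dst)) < 0) {
                perror("sendto");
                return 1;
            }
        }
        gettimeofday(&t1, NULL);

        secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
        printf("sent 100000 x 256B in %.2fs (%.1f Mbit/s payload)\n",
               secs, 100000.0 * sizeof(buf) * 8 / 1e6 / secs);
        close(s);
        return 0;
    }

Built with something like gcc -O2 -o udpblast udpblast.c.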

Thanks in advance!

David





