Tiny eepro100 vs. e100 NFS benchmark

Hello all,

To whom it may interest:

This is just a quick benchmark I did earlier today - it may interest
some, it may not.  I'm not requesting any help or action to be taken -
I'm just publishing results which I had produced anyway, in the hope
that some might find them useful.

Here goes:

I have two boxes, both SMP, each with two Intel EtherExpress PRO/100
cards (the hardware driven by both the eepro100 and e100 drivers).  One
box is the NFS server, the other is the client.

Both machines run an unpatched Linux 2.4.20 kernel.
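
(The NFS export itself is not shown in this post; for context, a
minimal server-side setup would look something like the following - the
path, network, and options are illustrative guesses, not the actual
configuration used:)

    # /etc/exports on the server (example values)
    /export   192.168.0.0/255.255.255.0(rw,sync,no_root_squash)

    # activate the export
    exportfs -ra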

Running tiotest, three runs, with a 1024 MB file (both server and
client have 512 MB of memory), I get the following results:


eepro100 on server and client, both with bonding
------------------------------------------------

Using the eepro100 driver and the bonding driver, through a switch with
no explicit support for bonding - yet enough that bonding does not
degrade performance in the common case, and in some cases improves it
(although not for NFS):

./tiobench.pl --size 1024 --numruns 3 --threads 1 --threads 2 --threads 4
Size is MB, BlkSz is Bytes, Read, Write, and Seeks are MB/sec

         File   Block  Num  Seq Read    Rand Read   Seq Write  Rand Write
  Dir    Size   Size   Thr Rate (CPU%) Rate (CPU%) Rate (CPU%) Rate (CPU%)
------- ------ ------- --- ----------- ----------- ----------- -----------
   .     1024   4096    1  10.39 6.39% 0.412 0.90% 8.466 7.51% 0.535 0.62%
   .     1024   4096    2  9.530 6.03% 0.519 1.28% 8.856 8.73% 0.545 0.69%
   .     1024   4096    4  8.675 5.48% 0.616 1.43% 8.839 9.93% 0.650 0.94%
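
(The bonding configuration itself is not shown above; the stock 2.4
procedure is roughly the following - the mode, addresses, and interface
names are examples, not necessarily what was used here:)

    # /etc/modules.conf (example values)
    alias bond0 bonding
    options bond0 mode=0 miimon=100

    # bring up bond0 and enslave both eepro100 NICs
    modprobe bond0
    ifconfig bond0 192.168.0.1 netmask 255.255.255.0 up
    ifenslave bond0 eth0 eth1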


eepro100 on server, e100 on client, one dedicated link
------------------------------------------------------

Here, a crossover cable connects the two machines and is used solely
for NFS traffic.  All other traffic goes over the other NICs, which
remain connected to the switch.

The e100 driver is now used on the client NICs.
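
(For reference, bringing up such a dedicated link is plain
point-to-point configuration, roughly as follows - the addresses,
interface names, and mount options are examples, not the actual values
used:)

    # server side of the crossover link (example addresses)
    ifconfig eth1 10.0.0.1 netmask 255.255.255.252 up

    # client side, then mount NFS over UDP across the link
    ifconfig eth1 10.0.0.2 netmask 255.255.255.252 up
    mount -t nfs -o udp,rsize=8192,wsize=8192 10.0.0.1:/export /mnt/nfs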

./tiobench.pl --size 1024 --numruns 3 --threads 1 --threads 2 --threads 4
Size is MB, BlkSz is Bytes, Read, Write, and Seeks are MB/sec

         File   Block  Num  Seq Read    Rand Read   Seq Write  Rand Write
  Dir    Size   Size   Thr Rate (CPU%) Rate (CPU%) Rate (CPU%) Rate (CPU%)
------- ------ ------- --- ----------- ----------- ----------- -----------
   .     1024   4096    1  9.721 10.3% 0.357 1.17% 6.416 11.0% 3.831 8.91%
   .     1024   4096    2  8.690 8.15% 0.413 1.16% 7.381 11.3% 1.103 2.16%
   .     1024   4096    4  8.349 7.51% 0.485 1.50% 7.759 11.8% 0.769 1.47%

eepro100 on server, eepro100 on client, one dedicated link
----------------------------------------------------------

Same setup as before, only now the client also uses the eepro100 driver.
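
(Swapping the client driver amounts to a module change, roughly as
below - the interface name and address are the same examples as above:)

    # take the NIC down, swap e100 for eepro100, bring it back up
    ifconfig eth1 down
    rmmod e100
    modprobe eepro100
    ifconfig eth1 10.0.0.2 netmask 255.255.255.252 up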

./tiobench.pl --size 1024 --numruns 3 --threads 1 --threads 2 --threads 4
Size is MB, BlkSz is Bytes, Read, Write, and Seeks are MB/sec

         File   Block  Num  Seq Read    Rand Read   Seq Write  Rand Write
  Dir    Size   Size   Thr Rate (CPU%) Rate (CPU%) Rate (CPU%) Rate (CPU%)
------- ------ ------- --- ----------- ----------- ----------- -----------
   .     1024   4096    1  8.058 6.04% 0.448 1.47% 9.265 12.0% 1.442 1.47%
   .     1024   4096    2  8.021 5.54% 0.517 1.50% 9.283 12.1% 1.044 1.48%
   .     1024   4096    4  7.650 5.17% 0.599 1.61% 9.290 13.0% 1.337 2.16%


Quick conclusion:

Using the eepro100 driver results in sequential read performance
comparable to that of the e100 driver.  However, the eepro100 driver
gives measurably better NFS write performance.

It seems e100 has a problem filling a 100 Mbit line on TX, while its RX
performance is better than eepro100's.  At least for RPC over UDP in
this isolated case...

-- 
................................................................
:   jakob@unthought.net   : And I see the elder races,         :
:.........................: putrid forms of man                :
:   Jakob Østergaard      : See him rise and claim the earth,  :
:        OZ9ABN           : his downfall is at hand.           :
:.........................:............{Konkhra}...............: