RE: 3Ware 9550SX and latency/system responsiveness




At 13:26 -0400 25/9/07, Ross S. W. Walker wrote:
> Off of 3ware's support site I was able to download and compile the
> latest stable release which has this modinfo:
>
> [root@mfg-nyc-iscsi1 driver]# modinfo 3w-9xxx.ko
> filename:       3w-9xxx.ko
> version:        2.26.06.002-2.6.18

OK: the driver source from the 9.4.1.3 codeset (3w-9xxx-2.6.18kernel_9.4.1.3.tgz) has now been built and installed on RHEL5, a new initrd created, and the machine re-tested.
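
For the record, the build went roughly like this (a sketch rather than a transcript: the tarball's unpack directory and any bundled Makefile may differ from the generic out-of-tree module build shown here):

[root@serv1 ~]# tar xzf 3w-9xxx-2.6.18kernel_9.4.1.3.tgz
[root@serv1 ~]# cd driver
[root@serv1 driver]# make -C /lib/modules/$(uname -r)/build M=$(pwd) modules
[root@serv1 driver]# install -m 644 3w-9xxx.ko /lib/modules/$(uname -r)/kernel/drivers/scsi/
[root@serv1 driver]# depmod -a
[root@serv1 driver]# mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)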

[root@serv1 ~]# modinfo 3w-9xxx
filename:       /lib/modules/2.6.18-8.el5/kernel/drivers/scsi/3w-9xxx.ko
version:        2.26.06.002-2.6.18
license:        GPL
description:    3ware 9000 Storage Controller Linux Driver
author:         AMCC
srcversion:     7F428E7BA74EAFF0FF137E2
alias:          pci:v000013C1d00001004sv*sd*bc*sc*i*
alias:          pci:v000013C1d00001003sv*sd*bc*sc*i*
alias:          pci:v000013C1d00001002sv*sd*bc*sc*i*
depends:        scsi_mod
vermagic:       2.6.18-8.el5 SMP mod_unload 686 REGPARM 4KSTACKS gcc-4.1

tw_cli output just to be sure:

//serv1> /c0 show all
/c0 Driver Version = 2.26.06.002-2.6.18
/c0 Model = 9550SX-8LP
/c0 Memory Installed  = 112MB
/c0 Firmware Version = FE9X 3.08.02.007
/c0 Bios Version = BE9X 3.08.00.002
/c0 Monitor Version = BL9X 3.01.00.006
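
Belt and braces: since the driver logs its version banner at load time, dmesg gives one more confirmation that the module in memory is the new one:

[root@serv1 ~]# dmesg | grep -i 3w-9xxx

which should carry the same 2.26.06.002-2.6.18 string that modinfo and tw_cli report.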

> Well bottom line, there is something very wrong with the 3ware
> drivers on the RHEL 5 implementation.

There still is, then, because the LTP disktest figures are almost identical post-update.
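
For anyone wanting to reproduce this, the invocations below are reconstructed from the Start args in the log lines (disktest is the LTP version; /dev/sdb and the sector counts are this box's). -B 4k is the block size, -K 4 the thread count, -T 30 the runtime in seconds, -p l/-p r select linear vs. random access, and -r/-w read vs. write:

disktest -B 4k -h 1 -I BD -K 4 -p l -P T -T 30 -r /dev/sdb    # sequential read
disktest -B 4k -h 1 -I BD -K 4 -p l -P T -T 30 -w /dev/sdb    # sequential write
disktest -B 4k -h 1 -I BD -K 4 -p r -P T -T 30 -r /dev/sdb    # random read
disktest -B 4k -h 1 -I BD -K 4 -p r -P T -T 30 -w /dev/sdb    # random write

Note the throughput and IOPS columns agree with each other (IOPS x 4096 bytes = B/s; e.g. 593.4 x 4096 = 2430566, vs. the 2430429.9 B/s reported), so the reporting is internally consistent; it is the absolute numbers that are dire.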

Sequential reads:

RHEL5, RAID 0:
| 2007/09/26-09:11:27 | START | 2962 | v1.2.8 | /dev/sdb | Start args: -B 4k -h 1 -I BD -K 4 -p l -P T -T 30 -r (-N 976519167) (-c) (-p u)
| 2007/09/26-09:11:57 | STAT | 2962 | v1.2.8 | /dev/sdb | Total read throughput: 2430429.9B/s (2.32MB/s), IOPS 593.4/s.

RHEL5, RAID 1:
| 2007/09/26-09:59:41 | START | 3210 | v1.2.8 | /dev/sdb | Start args: -B 4k -h 1 -I BD -K 4 -p l -P T -T 30 -r (-N 488259583) (-c) (-p u)
| 2007/09/26-10:00:11 | STAT | 3210 | v1.2.8 | /dev/sdb | Total read throughput: 2566280.5B/s (2.45MB/s), IOPS 626.5/s.

Sequential writes:

RHEL5, RAID 0:
| 2007/09/26-09:11:57 | START | 2971 | v1.2.8 | /dev/sdb | Start args: -B 4k -h 1 -I BD -K 4 -p l -P T -T 30 -w (-N 976519167) (-c) (-p u)
| 2007/09/26-09:12:27 | STAT | 2971 | v1.2.8 | /dev/sdb | Total write throughput: 66337450.7B/s (63.26MB/s), IOPS 16195.7/s.

RHEL5, RAID 1:
| 2007/09/26-10:00:11 | START | 3217 | v1.2.8 | /dev/sdb | Start args: -B 4k -h 1 -I BD -K 4 -p l -P T -T 30 -w (-N 488259583) (-c) (-p u)
| 2007/09/26-10:00:41 | STAT | 3217 | v1.2.8 | /dev/sdb | Total write throughput: 54108160.0B/s (51.60MB/s), IOPS 13210.0/s.

Random reads:

RHEL5, RAID 0:
| 2007/09/26-09:12:28 | START | 2978 | v1.2.8 | /dev/sdb | Start args: -B 4k -h 1 -I BD -K 4 -p r -P T -T 30 -r (-N 976519167) (-c) (-D 100:0)
| 2007/09/26-09:12:57 | STAT | 2978 | v1.2.8 | /dev/sdb | Total read throughput: 269206.1B/s (0.26MB/s), IOPS 65.7/s.

RHEL5, RAID 1:
| 2007/09/26-10:00:41 | START | 3231 | v1.2.8 | /dev/sdb | Start args: -B 4k -h 1 -I BD -K 4 -p r -P T -T 30 -r (-N 488259583) (-c) (-D 100:0)
| 2007/09/26-10:01:11 | STAT | 3231 | v1.2.8 | /dev/sdb | Total read throughput: 262144.0B/s (0.25MB/s), IOPS 64.0/s.

Random writes:

RHEL5, RAID 0:
| 2007/09/26-09:12:57 | START | 2987 | v1.2.8 | /dev/sdb | Start args: -B 4k -h 1 -I BD -K 4 -p r -P T -T 30 -w (-N 976519167) (-c) (-D 0:100)
| 2007/09/26-09:13:34 | STAT | 2987 | v1.2.8 | /dev/sdb | Total write throughput: 1378440.5B/s (1.31MB/s), IOPS 336.5/s.

RHEL5, RAID 1:
| 2007/09/26-10:01:12 | START | 11539 | v1.2.8 | /dev/sdb | Start args: -B 4k -h 1 -I BD -K 4 -p r -P T -T 30 -w (-N 488259583) (-c) (-D 0:100)
| 2007/09/26-10:01:41 | STAT | 11539 | v1.2.8 | /dev/sdb | Total write throughput: 638976.0B/s (0.61MB/s), IOPS 156.0/s.

I re-ran the tests, just to be sure (same order as above: SeqR, SeqW, RandomR, RandomW):

RAID 0:
SR| 2007/09/26-10:16:53 | STAT | 4602 | v1.2.8 | /dev/sdb | Total read throughput: 2456328.8B/s (2.34MB/s), IOPS 599.7/s.
SW| 2007/09/26-10:17:23 | STAT | 4611 | v1.2.8 | /dev/sdb | Total write throughput: 66434662.4B/s (63.36MB/s), IOPS 16219.4/s.
RR| 2007/09/26-10:17:53 | STAT | 4618 | v1.2.8 | /dev/sdb | Total read throughput: 273612.8B/s (0.26MB/s), IOPS 66.8/s.
RW| 2007/09/26-10:18:31 | STAT | 4626 | v1.2.8 | /dev/sdb | Total write throughput: 1424701.8B/s (1.36MB/s), IOPS 347.8/s.

RAID 1:
SR| 2007/09/26-10:12:49 | STAT | 4509 | v1.2.8 | /dev/sdb | Total read throughput: 2479718.4B/s (2.36MB/s), IOPS 605.4/s.
SW| 2007/09/26-10:13:19 | STAT | 4516 | v1.2.8 | /dev/sdb | Total write throughput: 53864721.1B/s (51.37MB/s), IOPS 13150.6/s.
RR| 2007/09/26-10:13:49 | STAT | 4525 | v1.2.8 | /dev/sdb | Total read throughput: 268151.5B/s (0.26MB/s), IOPS 65.5/s.
RW| 2007/09/26-10:14:19 | STAT | 4532 | v1.2.8 | /dev/sdb | Total write throughput: 549287.7B/s (0.52MB/s), IOPS 134.1/s.

Baffled, I am.

S.
