Re: Optimize RAID0 for max IOPS?

Jaap Crezee wrote:
: On 01/19/11 09:18, Stefan /*St0fF*/ Hübner wrote:
: >On 19.01.2011 08:11, Wolfgang Denk wrote:
: >Lol - I wouldn't have answered in the first place if I didn't have any
: >expertise.  So suit yourself - as you don't bring up any real numbers
: >(remember: you've got the weird setup, you asked, you don't have enough
: >money for the enterprise solution - so ...) nobody who worked with 3ware
: >controllers will believe you.
: 
: Here's one: I switched from 3ware hardware-based RAID to Linux software
: RAID and I am getting better throughput. I had a 3ware PCI-X card (I don't
: know which type by heart).
: Okay, to be honest I did not have an (enterprise solution?)
: battery-backup unit. So probably no write caching...
: 
	A "me too": 3ware 9550SX with 8 drives, RAID-5. The performance
(especially latency) was very bad. After I switched to the md SW RAID
and lowered the TCQ depth in the 3ware controller to 16[*], the filesystem
and latency feels much faster.

	The only problem I had was a poor interaction between the CFQ
I/O scheduler and the RAID-5 rebuild process, but I fixed that by
moving to the deadline I/O scheduler.
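
	(For reference, the scheduler can be switched per device at
runtime, or for all devices with a kernel boot parameter; sdb is again
just an example name.)

cat /sys/block/sdb/queue/scheduler               # the active scheduler is shown in brackets
echo deadline > /sys/block/sdb/queue/scheduler   # as root
# or globally, on the kernel command line: elevator=deadline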

	Another case was the LSI SAS 2008 (I admit it is a pretty low-end
HW RAID controller): ten WD RE4 black 2 TB disks in HW and SW RAID-10
configurations:

time mkfs.ext4 /dev/md0  # SW RAID
real	8m4.783s
user	0m9.255s
sys	2m30.107s

time mkfs.ext4 -F /dev/sdb # HW RAID
real	22m13.503s
user	0m9.763s
sys	2m51.371s
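
	(The SW array was an md RAID-10 created roughly like this - the
member names /dev/sd[c-l] are just an example, pick your own disks:)

mdadm --create /dev/md0 --level=10 --raid-devices=10 /dev/sd[c-l]
cat /proc/mdstat    # watch the initial resync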

	The problem with HW RAID is that today's computers can dedicate
tens of gigabytes of RAM to the buffer cache, which allows the I/O
scheduler to reorder requests based on latency and other criteria.
No RAID controller can match that, because it cannot see which
requests are latency-critical and which are not.
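
	(It is easy to see how big that cache gets and how much dirty
data is waiting for the scheduler:)

grep -E '^(MemTotal|Cached|Dirty|Writeback):' /proc/meminfo
free -g    # the cached column is the page cache, in gigabytes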

	Also, the Linux I/O scheduler works really hard to keep all
spindles busy, while once you fill the tagged command queue of a HW
RAID volume with requests that map to one or a small number of
physical disks, there is no way the controller can say "send me more
requests, but not for this area of the HW RAID volume".

[*] The 3ware driver is especially bad here, because its default queue
	depth is 1024, IIRC, which makes the whole I/O scheduler with its
	queue size of 512 a no-op. Think bufferbloat in the storage stack.
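
	(Quick check, with sdb again being an example device: if the
second number is not well below the first, the scheduler has nothing
left to work with.)

cat /sys/block/sdb/queue/nr_requests     # size of the I/O scheduler's queue
cat /sys/block/sdb/device/queue_depth    # depth the driver feeds to the controller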

-- 
| Jan "Yenya" Kasprzak  <kas at {fi.muni.cz - work | yenya.net - private}> |
| GPG: ID 1024/D3498839      Fingerprint 0D99A7FB206605D7 8B35FCDE05B18A5E |
| http://www.fi.muni.cz/~kas/    Journal: http://www.fi.muni.cz/~kas/blog/ |
Please don't top post and in particular don't attach entire digests to your
mail or we'll all soon be using bittorrent to read the list.     --Alan Cox

