Re: With 4 disks should I go for RAID 5 or RAID 10


 



Greg Smith wrote:
On Thu, 27 Dec 2007, Shane Ambler wrote:

So in theory a modern RAID 1 setup can be configured to get read speeds similar to RAID 0, but would still drop to single-disk speeds (or similar) when writing, whereas RAID 0 keeps the faster write performance.

The trick is, to pull this off you need a perfect controller that scatters individual reads evenly across the two disks as sequential reads move along the platter, bouncing between a RAID 1 pair to use all the available bandwidth. There are caches inside the disks and read-ahead strategies as well, and all of that has to line up just right for a single client to get the full bandwidth. Real-world disks and controllers don't behave well enough for that to predictably deliver what theory suggests. With RAID 0, actually getting the full read speed of 2x a single drive is much more likely than with RAID 1.
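A toy model makes the asymmetry concrete (the round-robin and naive schedulers here are hypothetical illustrations, not any real controller's policy):

```python
# Toy model: time to read N sequential chunks from a two-disk set.
# RAID 0 stripes chunks across disks, so both spindles are always busy.
# RAID 1 holds full copies, so the controller must actively split a
# sequential stream to use both spindles -- a naive scheduler won't.
CHUNKS = 100
CHUNK_TIME = 1.0  # arbitrary time units to read one chunk from one disk

raid0_time = (CHUNKS / 2) * CHUNK_TIME     # even/odd chunks on disk 0/1
raid1_naive = CHUNKS * CHUNK_TIME          # whole stream served from one mirror
raid1_perfect = (CHUNKS / 2) * CHUNK_TIME  # ideal split, rarely achieved

print(raid0_time, raid1_naive, raid1_perfect)  # 50.0 100.0 50.0
```

RAID 1 only matches RAID 0 in the perfect-scheduler case, which is exactly the case real hardware rarely delivers.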

Kind of makes the point for using 1+0

So in a perfect setup (probably 1+0), 4x 300MB/s SATA drives could deliver 1200MB/s of data to RAM, which also assumes that all 4 channels have their own data path to RAM and aren't shared.

OK, first off, beyond the occasional trivial burst you'll be hard pressed to ever sustain over 60MB/s out of any single SATA drive; 300MB/s is the interface speed, not what the platters can deliver. So the theoretical 4-channel maximum is closer to 240MB/s.
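The gap between the two figures is simple arithmetic; a sketch, using the 60MB/s sustained number from above:

```python
# Back-of-the-envelope: SATA interface speed vs. sustained platter speed.
SATA_INTERFACE_MBS = 300  # SATA II link rate, not a real transfer rate
SUSTAINED_MBS = 60        # realistic sustained rate for a single drive
DRIVES = 4

theoretical = DRIVES * SATA_INTERFACE_MBS  # the optimistic 1200MB/s figure
realistic = DRIVES * SUSTAINED_MBS         # what the platters can feed

print(theoretical, realistic)  # 1200 240
```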

A regular PCI bus tops out at a theoretical 133MB/s, and you sure can saturate one with 4 disks and a good controller. This is why server configurations have controller cards that use PCI-X (1024MB/s) or lately PCI-e aka PCI Express (250MB/s for each lane, with up to 16 being common). If your SATA ports are on the motherboard, they're probably going through that regular PCI bus.

So I guess as far as performance goes your motherboard will determine how far you can take it.

(talking from a db only server view on things)

A PCI system will see little benefit from more than 2 disks, but would need 4 to get both reliability and performance.

PCI-X can benefit from up to 17 disks.

PCI-e (with 16 lanes) can benefit from 66 disks.

The trick there will be dividing your db over a large number of disk sets to balance the load among them (I don't see 66 disks being set up in one array), so this would be of limited use to anyone but the most dedicated DBAs.
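Those disk counts fall straight out of dividing bus bandwidth by per-drive sustained throughput; a sketch, using the bandwidth figures quoted above and the 60MB/s-per-drive assumption:

```python
# How many ~60MB/s drives it takes to saturate each bus type.
DRIVE_MBS = 60
BUS_MBS = {
    "PCI": 133,             # regular 32-bit/33MHz PCI
    "PCI-X": 1024,          # as quoted above
    "PCI-e x16": 16 * 250,  # 250MB/s per lane, 16 lanes
}

for bus, mbs in BUS_MBS.items():
    print(bus, mbs // DRIVE_MBS)  # PCI 2, PCI-X 17, PCI-e x16 66
```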

For most servers these days, disks are added to reach a performance level, not to meet a storage requirement.

While your numbers are off by a bunch, the reality of database use means these computations don't matter much anyway. Seek-related behavior drives performance far more than sequential throughput does, and decisions like whether to split out the OS or WAL onto separate disks need to factor that in, rather than just the theoretical I/O rates.
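The seek penalty is easy to illustrate; a rough sketch, assuming a drive that does around 100 random reads per second (a typical ballpark for a 7200rpm disk of that era, not a measured figure):

```python
# Why seeks dominate: a drive streaming 60MB/s sequentially delivers
# far less when every read requires a seek, as in OLTP workloads.
SEQUENTIAL_MBS = 60
RANDOM_IOPS = 100  # rough ballpark for a 7200rpm drive
BLOCK_KB = 8       # PostgreSQL's default page size

random_mbs = RANDOM_IOPS * BLOCK_KB / 1024
print(round(random_mbs, 2))  # 0.78 -- nearly two orders of magnitude slower
```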


So this is where solid state disks come in - the (near) lack of seek times means they can saturate your bus limits.


--

Shane Ambler
pgSQL (at) Sheeky (dot) Biz

Get Sheeky @ http://Sheeky.Biz

---------------------------(end of broadcast)---------------------------
TIP 4: Have you searched our list archives?

              http://archives.postgresql.org

