On Aug 29, 2008, at 4:43 AM, Greg Smith wrote:
On Thu, 21 Aug 2008, Christiaan Willemsen wrote:
Anyway, I'm going to return the controller, because it does not
scale very well with more that 4 disks in raid 10. Bandwidth is
limited to 350MB/sec, and IOPS scale badly with extra disks...
How did you determine that upper limit? Usually it takes multiple
benchmark processes running at once in order to get more than 350MB/
s out of a controller. For example, if you look carefully at the
end of http://www.commandprompt.com/blogs/joshua_drake/2008/04/is_that_performance_i_smell_ext2_vs_ext3_on_50_spindles_testing_for_postgresql/
you can see that Joshua had to throw 8 threads at the disks in
order to reach maximum bandwidth.
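
Something like the following is a minimal sketch of that kind of multi-reader test (in Python, not the benchmark Greg used; the device path and sizes are assumptions): point it at a large file or a device you can safely read from, drop the page cache between runs, and watch where the aggregate rate flattens out as readers are added.

import os
import time
from multiprocessing import Process, Queue

DEVICE = "/dev/sdb"        # assumed test device, read-only access (needs root)
BLOCK_SIZE = 1024 * 1024   # 1 MB reads
BLOCKS_PER_WORKER = 1024   # 1 GB read by each worker

def reader(offset_gb: int, results: Queue) -> None:
    # Each worker reads its own 1 GB region sequentially so the
    # workers don't overlap. Drop the page cache between runs
    # (echo 3 > /proc/sys/vm/drop_caches) or the numbers will be inflated.
    read = 0
    with open(DEVICE, "rb", buffering=0) as f:
        f.seek(offset_gb * 1024**3)
        for _ in range(BLOCKS_PER_WORKER):
            buf = f.read(BLOCK_SIZE)
            if not buf:
                break
            read += len(buf)
    results.put(read)

def run(workers: int) -> float:
    results: Queue = Queue()
    procs = [Process(target=reader, args=(i, results)) for i in range(workers)]
    start = time.time()
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    total = sum(results.get() for _ in procs)
    return total / (time.time() - start) / 1024**2  # aggregate MB/s

if __name__ == "__main__":
    for n in (1, 2, 4, 8):
        print(f"{n} readers: {run(n):.0f} MB/s")
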
I used IOmeter to do some tests, with 50 worker threads doing jobs. I
can get more than 350 MB/sec, but only with huge block sizes
(something like 8 MB). Even worse are the random read and the
70% read / 30% random tests: they don't scale at all when you add
disks. A 6-disk RAID 5 is exactly as fast as a 12-disk RAID 10 :(
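
For reference, a rough stand-in for that random-read workload (an assumption on my part, not the actual IOmeter access profile) would be 50 threads issuing random 8 KB reads and counting completed I/Os per second, something like:

import os
import random
import threading
import time

DEVICE = "/dev/sdb"            # assumed test device
DEVICE_SIZE = 100 * 1024**3    # assumed usable size in bytes
IO_SIZE = 8 * 1024             # 8 KB, matching the Postgres block size
DURATION = 30                  # seconds to run
THREADS = 50                   # matches the 50 IOmeter workers

completed = 0
lock = threading.Lock()

def worker(stop_at: float) -> None:
    global completed
    fd = os.open(DEVICE, os.O_RDONLY)
    try:
        while time.time() < stop_at:
            # Align each offset to the I/O size so reads don't straddle blocks.
            offset = random.randrange(DEVICE_SIZE // IO_SIZE) * IO_SIZE
            os.pread(fd, IO_SIZE, offset)
            with lock:
                completed += 1
    finally:
        os.close(fd)

if __name__ == "__main__":
    stop_at = time.time() + DURATION
    threads = [threading.Thread(target=worker, args=(stop_at,))
               for _ in range(THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(f"{completed / DURATION:.0f} IOPS with {THREADS} threads")
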
The idea of xlog + OS on a 4 disk raid 10 and the rest for the data
sounds good
I would just use a RAID1 pair for the OS, another pair for the xlog,
and throw all the other disks into a big 0+1 set. There is some
value to separating the WAL from the OS disks, from both the
performance and the management perspectives. It's nice to be able
to monitor the xlog write bandwidth rate under load easily for
example.
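
One simple way to do that monitoring (a sketch assuming Linux and that pg_xlog sits alone on its own device, here called "sdc") is to sample /proc/diskstats and report MB/s written to that device:

import time

XLOG_DEVICE = "sdc"   # assumed device holding pg_xlog
SECTOR_SIZE = 512     # /proc/diskstats counts 512-byte sectors
INTERVAL = 5          # sampling interval in seconds

def sectors_written(device: str) -> int:
    # Return the cumulative sectors-written counter (field 10) for one device.
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:
                return int(fields[9])
    raise ValueError(f"device {device!r} not found in /proc/diskstats")

if __name__ == "__main__":
    prev = sectors_written(XLOG_DEVICE)
    while True:
        time.sleep(INTERVAL)
        cur = sectors_written(XLOG_DEVICE)
        mb_per_s = (cur - prev) * SECTOR_SIZE / INTERVAL / 1024**2
        print(f"{XLOG_DEVICE}: {mb_per_s:.1f} MB/s written")
        prev = cur
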
Yes, that's about what I had in mind.
Kind regards,
Christiaan