On Thu, 2006-08-24 at 13:57, Merlin Moncure wrote:
> On 8/24/06, Jeff Davis <pgsql@xxxxxxxxxxx> wrote:
> > On Thu, 2006-08-24 at 09:21 -0400, Merlin Moncure wrote:
> > > On 8/22/06, Jeff Davis <pgsql@xxxxxxxxxxx> wrote:
> > > > On Tue, 2006-08-22 at 17:56 -0400, Bucky Jordan wrote:
> > > it's not the parity, it's the seeking. Raid 5 gives you great
> > > sequential i/o but random is often not much better than a single
> > > drive. Actually it's the '1' in raid 10 that plays the biggest role
> > > in optimizing seeks on an ideal raid controller. Calculating parity
> > > was boring 20 years ago as it involves one of the fastest operations
> > > in computing, namely xor. :)
> >
> > Here's the explanation I got: If you do a write on RAID 5 to something
> > that is not in the RAID controller's cache, it needs to do a read first
> > in order to properly recalculate the parity for the write.
>
> it's worse than that. if you need to read something that is not in
> the o/s cache, all the disks except for one need to be sent to a
> physical location in order to get the data.

Ummmm. No. Not in my experience.

If you need to read something that's significantly larger than your
stripe size, then yes, you'd need to do that. But with typical RAID 5
stripe sizes of 64k to 256k, you could read 8 to 32 PostgreSQL 8k blocks
from a single disk before having to move the heads on the next disk to
get the next part of the data.

A RAID 5 being read acts much like a RAID 0 with n-1 disks. It's the
writes that kill performance, since you've got to read two disks and
write two disks for every write, at a minimum. This is why small RAID 5
arrays bottleneck so quickly: a 4-disk RAID 5 with two writing threads
is likely already starting to thrash.

Or did you mean something else by that?
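To make the write penalty concrete, here's a small illustrative sketch (not from the thread, names are my own) of the RAID 5 "small write" read-modify-write cycle: updating one data block forces the controller to read the old data block and the old parity block, then write the new data and new parity — two reads plus two writes, which is exactly the "read two disks and write two disks" cost above. The XOR itself is trivially cheap; the extra disk operations are what hurt:

```python
def xor_blocks(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

def updated_parity(old_parity: bytes, old_data: bytes, new_data: bytes) -> bytes:
    # new_parity = old_parity XOR old_data XOR new_data
    # The two XORs are nearly free; the cost is the two disk reads
    # needed to fetch old_data and old_parity when they're not cached.
    return xor_blocks(xor_blocks(old_parity, old_data), new_data)

# Three data blocks plus one parity block, as in one stripe of a
# 4-disk RAID 5 (blocks shortened to 8 bytes for illustration).
d0, d1, d2 = b"\x01" * 8, b"\x02" * 8, b"\x03" * 8
parity = xor_blocks(xor_blocks(d0, d1), d2)

# Overwrite d1: read old d1 and old parity, write new d1 and new parity.
new_d1 = b"\xff" * 8
parity = updated_parity(parity, d1, new_d1)

# The updated parity still reconstructs any single lost block.
assert xor_blocks(xor_blocks(d0, new_d1), d2) == parity
```

On reads, none of this applies — the parity blocks are simply skipped, which is why a healthy RAID 5 reads roughly like a RAID 0 of n-1 disks.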