On Sun, Aug 30, 2009 at 4:40 PM, Merlin Moncure <mmoncure@xxxxxxxxx> wrote:
> For random writes, RAID 5 has to write a minimum of two drives: the
> data being written and parity. RAID 10 also has to write two drives
> minimum. A lot of people think parity is a big deal in terms of the
> RAID 5 performance penalty, but I don't -- relative to what's going on
> in the drive, XOR calculation costs (one of the fastest operations in
> computing) are basically zero, and offloaded if you have a hardware
> RAID controller.

The cost is that in order to calculate the parity block the RAID controller has to *read* either the old data block being overwritten plus the old parity block, or all the other data blocks that participate in that parity block. So every random write becomes not just two writes, but two reads plus two writes.

If you're always writing large sequential chunks at a time this is minimized, because the RAID controller can just calculate the new parity block for the whole new chunk. But if you often seek to random places in the file and overwrite 8k at a time, things go very, very poorly.

-- 
greg
http://mit.edu/~gsstark/resume.pdf
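To make the read-modify-write cost concrete, here is a minimal sketch (illustrative only, not any real controller's code; all names are made up) of the XOR identity a RAID 5 controller relies on for a small random write: P_new = P_old XOR D_old XOR D_new. It shows why an 8k overwrite needs two reads and two writes, while a full-stripe write needs no reads at all.

```python
def xor_blocks(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length blocks byte by byte."""
    return bytes(x ^ y for x, y in zip(a, b))

def small_write_new_parity(old_data: bytes, old_parity: bytes,
                           new_data: bytes) -> bytes:
    # Read-modify-write path: the controller must first READ the old data
    # block and the old parity block, then it can compute
    #   P_new = P_old XOR D_old XOR D_new
    # and WRITE the new data and new parity: 2 reads + 2 writes total.
    return xor_blocks(xor_blocks(old_parity, old_data), new_data)

def full_stripe_parity(data_blocks: list[bytes]) -> bytes:
    # Full-stripe write path: parity is computed from the new data alone,
    # so no reads are needed before writing.
    parity = bytes(len(data_blocks[0]))
    for blk in data_blocks:
        parity = xor_blocks(parity, blk)
    return parity
```

Replacing one block in a stripe and updating parity via the small-write identity yields the same parity you would get by recomputing it over the whole new stripe, which is exactly why the shortcut is safe and why the extra reads are unavoidable for random writes.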