On Thu, Dec 23, 2010 at 12:47 PM, Jeff Moyer <jmoyer@xxxxxxxxxx> wrote:
> Rogier Wolff <R.E.Wolff@xxxxxxxxxxxx> writes:
>
>> On Thu, Dec 23, 2010 at 09:40:54AM -0500, Jeff Moyer wrote:
>>> > In my performance calculations, 10ms average seek (should be around
>>> > 7), 4ms average rotational latency for a total of 14ms. This would
>>> > degrade for read-modify-write to 10+4+8 = 22ms. Still 10 times better
>>> > than what we observe: service times on the order of 200-300ms.
>>>
>>> I didn't say it would account for all of your degradation, just that it
>>> could affect performance. I'm sorry if I wasn't clear on that.
>>
>> We can live with a "2x performance degradation" due to stupid
>> configuration. But not with the 10x-30x that we're seeing now.
>
> Wow. I'm not willing to give up any performance due to
> misconfiguration!

I suspect a mail server on a RAID 5 with a large chunk size could be a
lot worse than 2x slower. But most of the blame is just RAID 5. That is,
to write 4K from userspace, the kernel must:

    read old primary data, wait for the data to actually arrive
    read old parity data, wait again
    modify both for the new data
    write primary data to the drive's queue
    write parity data to the drive's queue

Then userspace calls fsync, and the kernel forces the data from the
queues to the drives (which requires another wait).

I'm guessing RAID 1 or RAID 10 would be several times faster, and it is
at least as robust as RAID 5. For the same 4K write from userspace, the
kernel only has to:

    write 4K to the first mirror's queue
    write 4K to the second mirror's queue
    done

Then userspace calls fsync, and the kernel forces the data from the
queues to the drives (requires one wait, with no reads first).

Good Luck
Greg
--
To unsubscribe from this list: send the line "unsubscribe linux-ide" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
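The read-modify-write penalty above can be sketched as a toy latency
model. The numbers (10 ms seek, 4 ms rotational latency, 8 ms full
revolution, giving the 10+4+8 = 22 ms quoted in the thread) come from
the discussion itself; the function names and the assumption that mirror
writes fully overlap are mine, and real drives will overlap and reorder
these operations differently:

```python
# Back-of-the-envelope service-time model for a single small write,
# using the figures quoted in the thread. This is a rough sketch of the
# argument, not a measurement of any real array.

SEEK_MS = 10.0      # average seek time
ROT_MS = 4.0        # average rotational latency (half a revolution)
FULL_REV_MS = 8.0   # one full revolution before rewriting the sector

def raid1_write_ms():
    # Both mirrors are written in parallel, so the wait is roughly one
    # positioning time: seek + rotational latency.
    return SEEK_MS + ROT_MS

def raid5_rmw_write_ms():
    # Position once to read the old data and parity, then wait a full
    # revolution to come back around and rewrite the same sectors:
    # 10 + 4 + 8 = 22 ms, matching the estimate quoted above.
    return SEEK_MS + ROT_MS + FULL_REV_MS

if __name__ == "__main__":
    print(f"RAID 1 write:     {raid1_write_ms():.0f} ms")
    print(f"RAID 5 RMW write: {raid5_rmw_write_ms():.0f} ms")
    print(f"slowdown:         {raid5_rmw_write_ms() / raid1_write_ms():.2f}x")
```

On these assumptions the RMW path costs 22 ms versus 14 ms, i.e. under
2x per write, which is why the 10x-30x service times observed in the
thread point at something beyond the bare RAID 5 write penalty (queueing
and contention multiply these per-request numbers).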