On 12/26/05, David Lang <dlang@xxxxxxxxxxxx> wrote:
> raid5 writes n+1 blocks, not n+n/2 (unless n=2, for a 3-disk raid). you can
> have a 15+1 disk raid5 array, for example
>
> however, raid1 (and raid10) have to write 2*n blocks to disk. so if you are
> talking about pure I/O needed, raid5 wins hands down. (the same 16 drives
> would be an 8+8 array)
>
> what slows down raid5 is that to modify a block you have to read blocks
> from all your drives to re-calculate the parity. this interleaving of
> reads and writes when all you are logically doing is writes can really hurt.
> (this is why I asked the question that got us off on this tangent: when
> doing new writes to an array you don't have to read the blocks, as they are
> blank, assuming your caching is enough so that you can write blocksize*n
> before the system starts actually writing the data)

Not exactly true. Let's assume you have a 4+1 RAID5 (drives A, B, C, D and E), and you want to update a block on drive A. Let's also assume that for this particular stripe the parity lives on drive E.

One way to do it is: read B, C and D, combine them with the new A, then write the new A and the new parity E = newA xor B xor C xor D. (3 reads + 2 writes)

The other way is: read oldA and the old parity oldE, then write newA and the new parity E = oldE xor oldA xor newA -- since parity is just XOR, you compute the difference between the new and old A and apply it to the old parity. (2 reads + 2 writes)

The more drives you have, the smarter it is to use the second approach, unless of course B, C and D are already available in the cache, which is the nicest situation of all.

Regards,
  Dawid
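
P.S. A toy C sketch, in case it helps, showing that the two strategies really do compute the same parity (RAID5 parity being a bytewise XOR of the data blocks). Everything here -- drive count, block size, sample values -- is made up purely for illustration:

/* Toy 4+1 RAID5 stripe: d[0..3] are data blocks A..D, parity is E. */
#include <assert.h>
#include <stdio.h>

#define NDATA 4   /* data drives A..D */
#define BLK   8   /* toy block size in bytes */

int main(void)
{
    unsigned char d[NDATA][BLK] = {
        {1, 2, 3, 4, 5, 6, 7, 8},   /* A */
        {9, 8, 7, 6, 5, 4, 3, 2},   /* B */
        {0, 1, 0, 1, 0, 1, 0, 1},   /* C */
        {7, 7, 7, 7, 7, 7, 7, 7},   /* D */
    };
    unsigned char parity[BLK] = {0};  /* E */
    unsigned char new_a[BLK] = {42, 42, 42, 42, 42, 42, 42, 42};

    /* initial parity: E = A xor B xor C xor D */
    for (int i = 0; i < NDATA; i++)
        for (int j = 0; j < BLK; j++)
            parity[j] ^= d[i][j];

    /* strategy 1 (reconstruct-write): read B, C, D and combine
     * them with the new A */
    unsigned char e1[BLK];
    for (int j = 0; j < BLK; j++)
        e1[j] = new_a[j] ^ d[1][j] ^ d[2][j] ^ d[3][j];

    /* strategy 2 (read-modify-write): read only oldA and oldE,
     * apply the difference oldA xor newA to the old parity */
    unsigned char e2[BLK];
    for (int j = 0; j < BLK; j++)
        e2[j] = parity[j] ^ d[0][j] ^ new_a[j];

    for (int j = 0; j < BLK; j++)
        assert(e1[j] == e2[j]);  /* same parity either way */

    printf("both parity update strategies agree\n");
    return 0;
}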