On Fri Apr 03, 2009 at 10:42:20PM +0200, Goswin von Brederlow wrote:
> Richard Scobie <richard@xxxxxxxxxxx> writes:
>
> > Goswin von Brederlow wrote:
> >
> >> Now think about the same with a 6 disk raid5. Suddenly you have
> >> partial stripes, and the alignment on stripe boundaries is gone too.
> >> So now you need to read 384k (I think) of data, compute or delta
> >> (whichever requires fewer reads) the parity, and write back 384k in
> >> 4 out of 6 cases, and read 64k and write back 320k otherwise. So on
> >> average you read 277.33k and write 362.66k (= 640k combined). That
> >> is twice the previous bandwidth, not to mention the delay for the
> >> reads.
> >>
> >> So by adding a drive your throughput is suddenly halved. Reading in
> >> degraded mode suffers a slowdown too, and CPU usage goes up as well.
> >>
> >> The performance of a raid is so dependent on its access pattern
> >> that imho one cannot talk about a general case. But note that the
> >> more drives you have, the bigger a stripe becomes, and the larger
> >> the sequential writes need to be to avoid reads.
> >
> > I take your point, but don't filesystems like XFS and ext4 play nice
> > in this scenario by combining multiple sub-stripe writes into stripe
> > sized writes out to disk?
> >
> > Regards,
> >
> > Richard
>
> Some filesystems have a parameter to tune to the stripe size. Whether
> that actually helps or not I leave for you to test.
>
> But ask yourself: do you have a tool to retune after you've grown the
> raid?

Both XFS and ext2/3 (and presumably ext4 as well) allow you to alter the
stripe size after growing the raid (ext2/3 via tune2fs, XFS via mount
options; a rough sketch of both follows below). No idea about other
filesystems though.

Cheers,
    Robin
-- 
     ___        
    ( ' }     |   Robin Hill        <robin@xxxxxxxxxxxxxxx> |
   / / )      |   Little Jim says ....                      |
  // !!       |      "He fallen in de water !!"             |
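
For anyone wanting to try it, here is a rough sketch of what the
retuning can look like. The device names and geometry are illustrative
only (they assume a 6-disk RAID5 with 64k chunks and a 4k filesystem
block size, i.e. 5 data disks per stripe), so check your own array with
mdadm before copying anything:

    # ext2/3/4: update the RAID layout hints after a grow.
    # stride       = chunk size / fs block size = 64k / 4k = 16
    # stripe_width = stride * data disks        = 16 * 5   = 80
    tune2fs -E stride=16,stripe_width=80 /dev/md0

    # XFS: override the stripe geometry with mount options.
    # sunit/swidth are given in 512-byte sectors:
    # 64k chunk = 128 sectors, swidth = 128 * 5 data disks = 640
    mount -o sunit=128,swidth=640 /dev/md0 /mnt/raid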