> > The question I have for the list, given my large drive sizes, it
> > takes me a day to set up and build an mdraid/lvm configuration.
> > Has anybody found the "sweet spot" for how many partitions per
> > drive? I now have a script to generate the drive partitions, a
> > script for building the mdraid volumes, and a procedure for
> > unwinding from all of this and starting again.

I don't have a feeling for the sweet spot on the number of partitions, but if you put too many devices in a raid5/6 array, you virtually guarantee that all writes will be read-modify-write operations instead of full stripe writes.  The more data disks in the array, the wider a full stripe is (chunk size times the number of data disks), so the less likely any given write is to cover a whole stripe.

When it comes to keeping the parity on the array in sync, a full stripe write lets you simply write all the blocks in the stripe, calculate the parity as you go, and then write the parity out.  For a partial stripe write, you either have to read in the blocks you aren't writing and then treat it as a full stripe write and calculate the parity, or you have to read in the current contents of the blocks being written along with the current parity block, xor the blocks being overwritten out of the existing parity, xor the new blocks in, and then write the new blocks and the new parity out.

For that reason, I usually try to keep my arrays to no more than 7 or 8 members.  In streaming tests, arrays with really high numbers of drives will often seem to perform fine, but under real world workloads they may not do so well.

There are also several filesystems (xfs and ext4) that will optimize their metadata layout when put directly on an mdraid device, but I'm pretty sure that gets blocked when you put lvm between the filesystem and the mdraid device.

--
Doug Ledford <dledford@xxxxxxxxxx>
    GPG KeyID: B826A3330E572FDD
    Fingerprint = AE6B 1BDA 122B 23B4 265B 1274 B826 A333 0E57 2FDD
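P.S.  In case a concrete illustration helps, below is a rough sketch in plain C of the two parity paths described above (this is not md code; the chunk size, array width, and function names are made up for illustration).  It shows parity computed while writing a full stripe versus the read-modify-write update, which amounts to new_parity = old_parity ^ old_data ^ new_data for each chunk being rewritten.

    /* Rough sketch only: parity maintenance for one RAID-5 style stripe.
     * Assumes NDATA data chunks plus one parity chunk of CHUNK bytes each. */
    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    #define NDATA 4
    #define CHUNK 16

    /* Full stripe write: parity is just the XOR of every data chunk as it
     * is written, so no reads from the array are needed. */
    static void full_stripe_parity(const uint8_t data[NDATA][CHUNK],
                                   uint8_t parity[CHUNK])
    {
        memset(parity, 0, CHUNK);
        for (int d = 0; d < NDATA; d++)
            for (int i = 0; i < CHUNK; i++)
                parity[i] ^= data[d][i];
    }

    /* Read-modify-write: to rewrite one chunk, read the old chunk and the
     * old parity, xor the old contents out of the parity, xor the new
     * contents in, then write the new chunk and new parity back out. */
    static void rmw_parity(const uint8_t old_chunk[CHUNK],
                           const uint8_t new_chunk[CHUNK],
                           uint8_t parity[CHUNK])
    {
        for (int i = 0; i < CHUNK; i++)
            parity[i] ^= old_chunk[i] ^ new_chunk[i];
    }

    int main(void)
    {
        uint8_t data[NDATA][CHUNK] = {{0}}, parity[CHUNK];

        /* Fill the stripe and compute parity the cheap, full-stripe way. */
        for (int d = 0; d < NDATA; d++)
            memset(data[d], d + 1, CHUNK);
        full_stripe_parity(data, parity);

        /* Overwrite one chunk via the read-modify-write path. */
        uint8_t new_chunk[CHUNK];
        memset(new_chunk, 0x5a, CHUNK);
        rmw_parity(data[2], new_chunk, parity);
        memcpy(data[2], new_chunk, CHUNK);

        /* Cross-check: recomputing from scratch gives the same parity. */
        uint8_t check[CHUNK];
        full_stripe_parity(data, check);
        printf("parity %s\n", memcmp(parity, check, CHUNK) ? "MISMATCH" : "ok");
        return 0;
    }

Either way the partial write costs extra reads (and the seeks that go with them), which is the whole point of trying to get full stripe writes in the first place.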