On Sun, Mar 31, 2013 at 5:28 PM, Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx> wrote:
> On 3/31/2013 12:56 PM, Mark Knecht wrote:
>> On Sun, Mar 31, 2013 at 10:41 AM, Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx> wrote:
>>> On 3/31/2013 12:15 PM, Mark Knecht wrote:
>> <SNIP>
>>>>
>>>> Hopefully that gives you enough info to suggest a direction.
>>>
>>> These applications append small data slowly over a long period of time,
>>> which usually means fragmentation. Thus there's not much to optimize at
>>> the chunk/stripe level, other than keeping chunk size small to spread
>>> random reads over all platters. You currently have a 16KB chunk, IIRC,
>>> which is about as good as you'll get for this workload. Given your
>>> applications' low write throughput, chunk/stripe size really doesn't matter.
>>>
>>> --
>>> Stan
>>
>> OK, I cannot argue with your conclusions and will stick with 16K for now.
>>
>> Presumably if any improvement is to be made here, it's getting
>> everything onto a single partition instead of multiple RAIDs on the
>> same drives, which reduces the physical overhead (moving heads to
>> different partitions) and lets the md software do the heavy lifting?
>
> Your write IO rate appears to be so low that it really makes no
> difference. I'd guess you could run all of this from a single fast disk
> drive (10/15K or SSD) without skipping a beat.
>
> --
> Stan

So maybe the idea I had a while back about moving the VMs to the SSD - the
VMs are about 90GB, the SSD is 128GB - and then at the end of every day
just copying the VMs over to the RAID as a backup - would be a better way
to run?

There was a thread here some time ago about using an SSD as a cache for
RAID. I suppose that's a possibility, but it sounds like more complexity
than I need or want.

Thanks,
Mark