Hi Mark,

On 12/16/2015 07:24 PM, Mark Knecht wrote:
> On Wed, Dec 16, 2015 at 7:31 AM, Dallas Clement
> <dallas.a.clement@xxxxxxxxx> wrote:
>
> Phil, the 16k chunk size has really given a boost to my RAID 5
> sequential write performance measured with fio, bs=1408k.
>
> This is what I was getting with a 128k chunk size:
>
> iodepth=4  => 605 MB/s
> iodepth=8  => 589 MB/s
> iodepth=16 => 634 MB/s
> iodepth=32 => 635 MB/s
>
> But this is what I'm getting with a 16k chunk size:
>
> iodepth=4  => 825 MB/s
> iodepth=8  => 810 MB/s
> iodepth=16 => 851 MB/s
> iodepth=32 => 866 MB/s

Very interesting. Good to see hypotheses supported by results.

> Dallas,
> Hi. Just for kicks I tried Phil's idea (I think it was Phil) and :-)
> sampled stripe_cache_active by putting this command in a 1 second
> loop and running it today while I worked.
>
> cat /sys/block/md3/md/stripe_cache_active >> testCacheResults
>
> My workload is _very_ different from what you're working on. This is
> a high-end desktop machine (Intel 980i Extreme processor, 24GB DRAM,
> RAID6) running 2 Windows 7 VMs while I watch the stock market and
> program in MatLab. Nonetheless, I was somewhat surprised at the
> spread in the number of active lines. The test ran for about 10
> hours, with about 94% of the results being 0, but with numbers
> ranging from 1 line to 2098 lines active at a single time. Also
> interesting to me was that when that 2098 value hit, it was
> apparently all clear in less than 1 second, as the values
> immediately following were back to 0.

Yeah, latencies are pretty low. One-second samples will be fairly
random snapshots under most conditions. Consider sampling much faster,
but building one-minute histograms and recording those.

Phil
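P.S. For reference, a fio invocation along the lines Dallas describes
might look like the sketch below. Only bs and the iodepth values come
from his message; the device path, job name, and all other parameters
are illustrative assumptions. (1408k is presumably one full stripe:
11 data chunks x 128k on a 12-drive RAID 5.)

    # Hypothetical reconstruction of the sequential-write test.
    # /dev/md0 and everything except --bs and --iodepth are guesses.
    fio --name=seqwrite --filename=/dev/md0 --rw=write --bs=1408k \
        --ioengine=libaio --direct=1 --iodepth=16 \
        --runtime=60 --time_based --group_reporting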
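P.P.S. A rough sketch of the faster-sampling / one-minute-histogram
idea (untested; assumes bash 4+, a sleep that accepts fractional
seconds, and Mark's md3 -- adjust the device and intervals to taste):

    #!/bin/bash
    # Sample stripe_cache_active ~10x per second and print a
    # histogram of the observed values once per minute.
    f=/sys/block/md3/md/stripe_cache_active
    declare -A hist
    start=$SECONDS
    while true; do
        v=$(<"$f")
        hist[$v]=$(( ${hist[$v]:-0} + 1 ))
        if (( SECONDS - start >= 60 )); then
            echo "=== $(date '+%F %T') ==="
            for k in $(printf '%s\n' "${!hist[@]}" | sort -n); do
                printf '%8s %d\n' "$k" "${hist[$k]}"
            done
            hist=()          # reset for the next minute
            start=$SECONDS
        fi
        sleep 0.1
    done

Each one-minute block then shows how often every cache depth was
seen, so a rare spike like Mark's 2098 stands out without keeping ten
hours of raw one-second samples.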