On Fri, Apr 2, 2010 at 6:14 PM, Richard Scobie <richard@xxxxxxxxxxx> wrote:
> Mark Knecht wrote:
>
>> Once all of that is in place then possibly more cores will help, but I
>> suspect even then it is probably hard to use 4 billion CPU cycles/second
>> doing nothing but disk I/O. SATA controllers all do DMA, so CPU
>> overhead is relatively *very* low.
>
> There are the RAID5/6 parity calculations to be considered on writes, and
> these appear to be single-threaded. There is an experimental multicore
> kernel option, I believe, but recent discussion indicates there may be
> some problems with it.
>
> A very quick test on a box here: a Xeon E5440 (4 x 2.8GHz) with a
> SAS-attached 16 x 750GB SATA md RAID6. The array is 72% full and probably
> quite fragmented, and the system is currently idle.
>
> dd if=/dev/zero of=/mnt/storage/dump bs=1M count=20000
> 20000+0 records in
> 20000+0 records out
> 20971520000 bytes (21 GB) copied, 87.2374 s, 240 MB/s
>
> Looking at the output of vmstat 5 and mpstat -P ALL 5 during this, one
> core (probably doing parity generation) was around 7.56% idle and the
> other three were around 88.5, 67.5 and 51.8% idle.
>
> The same test, run when the system was commissioned and the array was
> empty, achieved 565 MB/s writes.
>
> Regards,
>
> Richard

Richard,

Good point. I was limited in my thinking to the sorts of arrays I might
use at home, no wider than 3, 4 or 5 disks. However, for an N-wide array,
as N approaches infinity so do the cycles required to run it. I don't
think that applies to the OP, but I don't know that.

Thanks for making the point.

Cheers,
Mark
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
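
P.S. For anyone curious why parity generation eats a core on writes: every
stripe written requires XORing all the data chunks together (the RAID5 "P"
parity; RAID6 adds a second Reed-Solomon "Q" syndrome on top). A minimal
Python sketch of the P-parity idea follows. This is illustrative only, not
the md driver's implementation (which is optimized C/SSE working on kernel
pages); the function names and the 2-byte chunks are made up for the example.

```python
from functools import reduce

def raid5_parity(chunks):
    """XOR all same-length data chunks byte-by-byte (RAID5 P parity).

    This touches every byte written, which is why wide arrays cost
    CPU cycles roughly in proportion to write bandwidth.
    """
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

def recover_missing(surviving_chunks, parity):
    """Rebuild one lost chunk: XOR the survivors with the parity.

    Works because x ^ x = 0, so all present chunks cancel out,
    leaving only the missing one.
    """
    return raid5_parity(list(surviving_chunks) + [parity])
```

E.g. with three data chunks d0, d1, d2 and p = raid5_parity([d0, d1, d2]),
losing d1 is recoverable as recover_missing([d0, d2], p).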