Hi,

>>>> That's a good argument for not using "whole disk" array members; a
>>>> partition can be started at a good offset and may perform better.
>>>> As for the speed, since it is reconstructing the array data (hope
>>>> the other drives are okay), every block written requires three
>>>> blocks read and a reconstruct in CPU and memory. You can use
>>>> "blockdev" to increase readahead, and set the devices to use the
>>>> deadline scheduler. That _may_ improve things somewhat, but you
>>>> have to read three blocks to write one, so it's not going to be
>>>> fast.
>>>>
>>>
>>> Read-ahead has absolutely no effect in this context.
>>>
>>> Read-ahead is a function of the page cache. When filling the page
>>> cache, read-ahead suggests how much more to read than has been
>>> asked for.
>>>
>>> resync/recovery does not use the page cache, so the readahead
>>> setting is irrelevant.
>>>
>>> IO scheduler choice may make a difference.
>>
>> It's already set to cfq. I assume that would be preferred over
>> deadline?
>>
>> I set it on the actual disk devices. Should I also set it on the
>> md0/1 devices as well? It is currently 'none'.
>>
>> /sys/devices/virtual/block/md0/queue/scheduler
>
> For what it's worth, my experience has been that deadline works
> better for writes to arrays. In arrays with only a few drives,
> sometimes markedly better.

Guys, I thought it would be worth following up to let you know that the
array eventually did rebuild successfully and is now fully functional.
It took about 4 full days to rebuild the 3.0T 4-disk RAID5 at about
4M/sec, sometimes much slower.

Thanks again,
Alex
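
P.S. For the archives, the timing roughly checks out: if 3.0T is the
usable capacity, each of the four members holds about 1.0T, and md
reports resync speed per member device, so a rebuild at a steady
4M/sec would take

    1.0 TB / 4 MB/s = 250,000 s, or about 2.9 days

which stretches to the observed 4 days once the "sometimes much
slower" periods are factored in.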
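
P.P.S. For reference, the knobs discussed above are set roughly like
this; /dev/sdb is a placeholder for a member disk, and per the
correction above the readahead change is not expected to help
resync/recovery:

    # readahead is measured in 512-byte sectors
    blockdev --getra /dev/sdb            # show current readahead
    blockdev --setra 8192 /dev/sdb       # raise it to 4 MiB

    # elevator on a member disk; the current choice is shown in [brackets]
    cat /sys/block/sdb/queue/scheduler
    echo deadline > /sys/block/sdb/queue/scheduler

    # md0 has no request queue of its own, so it reports 'none';
    # the scheduler only matters on the underlying member disks
    cat /sys/devices/virtual/block/md0/queue/scheduler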