On Thu, 30 Dec 2010, Spelic wrote:

> Also there is still the mystery of why times appear *randomly* related to the
> number of devices, chunk sizes, and stripe sizes! If the RMW cycle was the
> culprit, how come I see:
> raid5, 4 devices, 16384k chunk: 41 sec (4.9 MB/sec)
> raid5, 6 devices, 4096k chunk: 2 min 18 sec ?!?! (1.44 MB/sec!?)
> The first has a much larger stripe size of 49152K; the second has 20480K!

Ok, next theory.  Pvmove works by allocating a mirror for each contiguous
segment of the source LV: it updates the metadata (how many metadata copies
do you have?), syncs the mirror, updates the metadata again, then allocates
and syncs the next segment, and so on until finished.  Pvmove is fastest
when the source LV is a single contiguous segment.

If you restored the metadata after every test, then the variation by dest PV
would blow this theory.  But if not, then the slow pvmoves would be the ones
with fragmented source LVs.  The metadata updates between segments are
rather expensive (but necessary).

-- 
Stuart D. Gathman <stuart@bmsi.com>
Business Management Systems Inc.  Phone: 703 591-0911  Fax: 703 591-6154
"Confutatis maledictis, flammis acribus addictis" - background song for
a Microsoft sponsored "Where do you want to go from here?" commercial.

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
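
[Editor's note: the per-segment overhead theory above can be sketched with a
toy cost model.  This is not pvmove's actual accounting; the bandwidth and
metadata-commit numbers below are made-up parameters purely for illustration
of why a fragmented source LV would move so much slower.]

```python
# Toy cost model (illustrative only, not pvmove internals): each segment
# pays two metadata commits (before and after its sync) plus a mirror sync
# proportional to its size.  Parameter values are assumptions.

def pvmove_time(segment_sizes_mb, sync_mb_per_sec=100.0, metadata_update_sec=0.5):
    """Estimated wall time for moving the given list of segments."""
    total = 0.0
    for size in segment_sizes_mb:
        total += 2 * metadata_update_sec   # metadata commit before and after sync
        total += size / sync_mb_per_sec    # mirror sync for this segment
    return total

# Same 1000 MB of data: one contiguous segment vs. 200 fragments.
contiguous = pvmove_time([1000.0])       # 1 segment
fragmented = pvmove_time([5.0] * 200)    # 200 segments
print(contiguous)   # 11.0  (seconds)
print(fragmented)   # 210.0 (seconds)
```

With these assumed numbers the fixed per-segment cost dominates once the LV
is fragmented, which is the effect described above.  You can see how many
segments a source LV actually has with `lvs --segments`.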