On 12/29/2010 03:40 AM, Spelic wrote:
> Hello list,
> pvmove is painfully slow if the destination is on a 6-disk MD raid5:
> it runs at 200-500 KB/sec! (kernel 2.6.36.2)
> Same for lvconvert add mirror.
> If instead the destination is a 4-device MD raid10 with near copies,
> it runs at 60 MB/sec, which is much more reasonable (at least a
> 120-fold difference!)
> Same for lvconvert add mirror.
Sorry, yesterday I made a few mistakes computing the speeds.
Here are the times for moving a 200MB logical volume to various types
of MD arrays (with either pvmove or lvconvert add mirror; it doesn't
change much). It's the destination array that matters, not the source
array.
raid5,  8 devices, 1024k chunk:   36 sec        (5.5 MB/sec)
raid5,  6 devices, 4096k chunk:   2m18sec ?!?!  (1.44 MB/sec!?)
raid5,  5 devices, 1024k chunk:   25 sec        (8 MB/sec)
raid5,  4 devices, 16384k chunk:  41 sec        (4.9 MB/sec)
raid10, 4 devices, 1024k chunk, near copies: 5 sec!  (40 MB/sec)
raid1,  2 devices:                3.4 sec!      (59 MB/sec)
raid1,  2 devices (a second, identical array): 3.4 sec! (59 MB/sec)
I ran each test multiple times on every array with consistent results,
so I'm pretty sure these numbers are real.
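For anyone who wants to reproduce this, the test amounts to building an
MD array, adding it to the volume group as a PV, and timing the move of
the 200MB LV onto it. The sketch below is only the generic sequence;
the device names, VG name and LV name (/dev/sd[b-g]1, /dev/sdX1, vg0,
lvtest) are placeholders, not my actual setup:

  # build the destination array (example: raid5, 6 devices, 4096k chunk)
  mdadm --create /dev/md0 --level=5 --raid-devices=6 --chunk=4096 /dev/sd[b-g]1

  # make it a PV and add it to the volume group
  pvcreate /dev/md0
  vgextend vg0 /dev/md0

  # time the move of the LV's extents onto the new PV
  # (/dev/sdX1 stands for whichever PV currently holds the LV)
  time pvmove -n lvtest /dev/sdX1 /dev/md0

  # or, equivalently, add a mirror leg allocated on the new PV
  time lvconvert -m1 vg0/lvtest /dev/md0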
What's happening?
Apart from the striking difference between parity and non-parity RAID,
with parity RAID the speed seems to vary almost randomly with the
number of devices and the chunk size?
I tried various --regionsize settings for lvconvert add mirror but the
times didn't change much.
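To be clear about what I mean, I was varying invocations of this form;
the 1024k value and the vg0/lvtest names are only placeholders:

  time lvconvert -m1 --regionsize 1024k vg0/lvtest /dev/md0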
I even tried setting my SATA controller to ignore-FUA mode (it fakes
the FUA and returns immediately): no change.
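If it helps anyone look into this, a plain iostat (from sysstat) run
while the move is in progress shows the request sizes and per-disk
throughput; this is just a generic invocation, not output from my
tests:

  # extended per-device stats every second, in kB;
  # watch w/s and avgrq-sz on the md member disks while pvmove runs
  iostat -dxk 1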
Thanks for any info