pvmove questions

Good morning,

 

I’ve been trying to hunt down answers to a few questions about pvmove, to no avail. If I can get answers here, I’ll volunteer to update the wiki with them.

 

A little detail first.

 

I’m trying to move a volume used for user email from one back-end system to another. The volume is 400GB in size. It currently lives on a 9-spindle nSeries/NetApp filer, presented as a LUN over Fibre Channel. I am trying to move the extents to our new SAN node, an EVA4400 with an 8-disk 300GB 15k DP FC disk group, also presented as an FC LUN. Neither disk system has any I/O contention at the moment.

The server accesses the storage via a 2Gbit QLogic FC HBA through a QLogic SANbox 5602 switch stack.

There is plenty of spare headroom in the infrastructure at the moment.

 

The server is running RHEL 4 update 7 on dual 3.2GHz Xeons with 2GB of memory. No swap in use, plenty of free memory available. iostat shows the disk subsystem is idle a lot of the time; I’ll include a snippet down the bottom.

 

The 400GB volume is broken into 3200 extents, and it is taking 23 minutes on average to move 5 extents. That works out to (3200 ÷ 5) × 23 min ≈ 14,720 minutes, so my completion date is ~10.22 days away if I let the process run untouched.

 

I was originally going to let it run and then abort it during business hours so that performance was not impacted for users; aborting would let me roll back only to the previous checkpoint. From the man page:

 

       5.  A daemon repeatedly checks progress at the specified time interval. When it detects that the
       first temporary mirror is in-sync, it breaks that mirror so that only the new location for that
       data gets used and writes a checkpoint into the volume group metadata on disk. Then it activates
       the mirror for the next segment of the pvmove LV.

 

It turns out that after 12 hours and some 10% progress, no checkpoint had been written: my pvmove --abort rolled back the entire 10% of progress made so far.

 

I now have progress underway via a script that calls pvmove repeatedly, moving 5 extents at a time. Since each completed invocation commits on its own, this gives me a pausable solution; the core of it is sketched below.
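
The core of the script is roughly the following (a minimal sketch: the device names match my sda-to-sdd move, the extent total matches this volume, and the source ranges use pvmove’s PV:first-last physical-extent syntax):

    #!/bin/bash
    # Sketch only -- adjust SRC/DST/TOTAL for your own PVs.
    SRC=/dev/sda   # source PV being evacuated
    DST=/dev/sdd   # destination PV
    BATCH=5        # extents moved per pvmove invocation
    TOTAL=3200     # physical extents on the source
    pe=0
    while [ "$pe" -lt "$TOTAL" ]; do
        end=$(( pe + BATCH - 1 ))
        [ "$end" -ge "$TOTAL" ] && end=$(( TOTAL - 1 ))
        # Each completed pvmove commits to the VG metadata, so stopping
        # the loop between batches loses at most BATCH extents of work.
        # Note: pvmove exits non-zero if a range holds no allocated
        # extents, so a fuller version would handle that case.
        pvmove "${SRC}:${pe}-${end}" "$DST" || exit 1
        pe=$(( end + 1 ))
    done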

 

So, things I am now thoroughly confused about:

 

1. Checkpoint intervals. How often do they occur? Are they configurable? Can I see when one is set?

2. These “specified time intervals” that the daemon checks at appear to be different from the -i option to pvmove. Is there some way to specify the interval, or to work out what it is?

3. Is there any way to make pvmove go faster? It is using nowhere near the capability of the I/O subsystem; to be honest, I have no idea what is bottlenecking the process.

 

Any help would be deeply appreciated; I am happy to provide more information on request.

 

Thanks,

Dave

 

--

David Nillesen

UNIX Systems Administrator

University of New England

+61 2 6773 2112



 

 

Moving extents from sda to sdd:
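
(The snapshots below came from plain iostat sampling at a regular interval, i.e. something like "iostat 5"; the exact interval isn’t important. The first Device block shows cumulative since-boot totals; the rest are per-interval.)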

 

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn

sda             159.23       134.75       291.02 1943548784 4197564444

sdd               0.09         0.29         7.10    4136156  102479262

 

avg-cpu:  %user   %nice    %sys %iowait   %idle

          17.25    0.00    4.75   49.00   29.00

 

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn

sda             206.93      4479.21         1.98       4524          2

sdd               0.00         0.00         0.00          0          0

 

avg-cpu:  %user   %nice    %sys %iowait   %idle

           3.24    0.00    2.49   75.31   18.95

 

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn

sda             138.38      3793.94         0.00       3756          0

sdd              11.11        18.18         0.00         18          0

 

avg-cpu:  %user   %nice    %sys %iowait   %idle

           2.25    0.00    1.75   58.75   37.25

 

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn

sda             200.00      3214.14         0.00       3182          0

sdd              17.17        50.51         0.00         50          0

 

avg-cpu:  %user   %nice    %sys %iowait   %idle

           8.98    0.00    6.23   64.09   20.70

 

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn

sda             281.00      6126.00         0.00       6126          0

sdd               0.00        76.00         0.00         76          0

 

avg-cpu:  %user   %nice    %sys %iowait   %idle

           3.50    0.00    4.00   66.75   25.75

 

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn

sda             363.00      3200.00         0.00       3200          0

sdd             336.00       892.00         0.00        892          0

 

avg-cpu:  %user   %nice    %sys %iowait   %idle

          24.25    0.00   10.25   52.75   12.75

 

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn

sda             432.67      3027.72         0.00       3058          0

sdd               8.91        61.39         0.00         62          0

 

avg-cpu:  %user   %nice    %sys %iowait   %idle

           2.49    0.00    2.74   65.84   28.93

 

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn

sda             267.33      2948.51       221.78       2978        224

sdd              80.20       144.55         0.00        146          0

 

avg-cpu:  %user   %nice    %sys %iowait   %idle

           5.50    0.00    7.25   49.25   38.00

 

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn

sda             249.49      8167.68         2.02       8086          2

sdd             115.15       282.83      5688.89        280       5632

 

avg-cpu:  %user   %nice    %sys %iowait   %idle

          12.69    0.00   17.41   44.78   25.12

 

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn

sda             492.08      3356.44      7287.13       3390       7360

sdd               0.00        15.84         0.00         16          0

 

avg-cpu:  %user   %nice    %sys %iowait   %idle

          11.00    0.00   12.00   64.25   12.75

 

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn

sda             342.00      7464.00      3206.00       7464       3206

sdd              62.00        84.00      5120.00         84       5120

 

avg-cpu:  %user   %nice    %sys %iowait   %idle

          11.75    0.00   11.50   73.25    3.50

 

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn

sda             458.00      4112.00      1888.00       4112       1888

sdd               7.00        34.00      2048.00         34       2048

 

avg-cpu:  %user   %nice    %sys %iowait   %idle

           7.25    0.00    6.00   66.50   20.25

 

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn

sda             317.00     11898.00      1364.00      11898       1364

sdd              92.00       162.00      1600.00        162       1600

 

avg-cpu:  %user   %nice    %sys %iowait   %idle

          13.72    0.00    7.48   72.32    6.48

 

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn

sda             307.07     13167.68       436.36      13036        432

sdd             290.91      1022.22     10860.61       1012      10752

 

avg-cpu:  %user   %nice    %sys %iowait   %idle

           4.25    0.00    5.50   79.00   11.25

 

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn

sda             453.47      2904.95      6429.70       2934       6494

sdd               0.00        71.29        27.72         72         28

 

avg-cpu:  %user   %nice    %sys %iowait   %idle

           5.47    0.00    3.48   90.80    0.25

 

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn

sda             253.54      2367.68      1892.93       2344       1874

sdd              83.84       119.19         0.00        118          0

 
