RE: pvmove speed

>But I really have a hunch that it is just a lot of I/O wait time due to
>either metadata maintenance and checkpointing and/or I/O failures, which
>have very long timeouts before failure is recognized and *then*
>alternate block assignment and mapping is done.

One of the original arrays just needs to be rebuilt with more members; there are no errors, but I believe you are right that this is simple I/O wait time.
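
(For anyone wanting to double-check the "no errors" part: SATA timeouts and resets of the kind described above would normally show up in the kernel log, so something along these lines should catch them:)

# dmesg | grep -iE 'ata[0-9]|reset|timeout|error'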

Going from sdd to sde:

# iostat -d -m -x
Linux 2.6.18-53.1.6.el5 (host)  02/12/2008

Device:         rrqm/s   wrqm/s   r/s   w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sdd               0.74     0.00  1.52 42.72     0.11     1.75    86.41     0.50   11.40   5.75  25.43
sde               0.00     0.82  0.28  1.04     0.00     0.11   177.52     0.13   98.71  53.55   7.09
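
(Side note: the first report iostat prints is an average since boot, not the current transfer rate, so the numbers above may understate what the move is doing right now. Passing an interval gives live samples, e.g. one every 5 seconds:)

# iostat -d -m -x sdd sde 5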

Not very impressive :) Two different SATA II based arrays on an LSI controller, 5% complete in ~7 hours == a week to complete! I ran the command from an ssh session on my workstation (clearly a dumb move). Given what I have gleaned from reading about pvmove's robustness, if the session bails, how much time am I likely to lose by restarting? Are the checkpoints frequent?
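
For what it's worth, here is what I have gleaned so far (from the man page only; I have not tested any of it): pvmove checkpoints its progress in the LVM metadata, an interrupted move can be resumed by running pvmove with no arguments, and running it inside screen would sidestep the dropped-ssh problem entirely:

# screen -S pvmove
# pvmove /dev/sdd /dev/sde

If the move does get killed, this is supposed to pick up from the last checkpoint:

# pvmove

(And pvmove --abort abandons the move, leaving the extents wherever they currently sit.)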

Thanks!
jlc