Re: pvmove with a clustered VG

On 25 Sep 2008, at 19:19, Jeremy Lyon wrote:

I wasn't sure which list to send this to, so I chose both cluster and lvm.

My current configuration:
2-node RHEL 5.2 cluster with multiple GFS file systems on top of logical volumes in one volume group.

# rpm -q cman lvm2 lvm2-cluster kmod-gfs
cman-2.0.84-2.el5
lvm2-2.02.32-4.el5
lvm2-cluster-2.02.32-4.el5
kmod-gfs-0.1.23-5.el5

I need to move a PV out of this volume group, so I attempted to run pvmove /dev/sdk1, but it failed with errors about locking on the other node.  I assumed this was because of the multiple GFS file systems being used on both nodes (services were spread across the nodes).  So I relocated all services to one node and even stopped rgmanager, gfs, clvmd and cman on the idle node to make sure that no locks would remain open; a rough sketch of that sequence follows.
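
For reference, the sequence on the idle node was roughly this (the service name is a placeholder; use the names from your cluster.conf):

# clusvcadm -r <service> -m nodea
# service rgmanager stop
# service gfs stop
# service clvmd stop
# service cman stop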

I still had issues with running the pvmove.  I saw these messages:

Sep 24 17:56:48 nodea kernel: device-mapper: mirror log: Module for logging type "clustered-core" not found.
Sep 24 17:56:48 nodea kernel: device-mapper: table: 253:31: mirror: Error creating mirror dirty log
Sep 24 17:56:48 nodea kernel: device-mapper: ioctl: error adding target to table
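
As far as I know, the "clustered-core" log type is provided by the cluster mirror packages on RHEL 5, so the first message suggests they are missing or not loaded. Checking is something like this (the package names are my assumption for 5.2):

# rpm -q cmirror kmod-cmirror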

And after about 14% of the move had completed, there was another locking message and many processes went into an uninterruptible sleep state.  Load on the server shot up to around 80.

I finally had to reboot the node and run pvmove --abort to get everything back to working condition. 

Is it not possible to run pvmove on a clustered VG?  Any help would be appreciated.

-Jeremy

I have also encountered a similar problem under the current RHEL 5.2 with clvmd, pvmoving several LVs from one PV to another:

- on small volumes (8 GB), pvmove does its job without problems
- on bigger volumes, pvmove seems to hang at some point, the only solution being to reboot the node
- on volumes that have several segments, the node hangs at the end of each segment (the lvs sketch after this list shows how to check the layout)
- you always get the warning about creating the mirror dirty log, but I have found a bugzilla entry for this one (you may just ignore it)
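
To see how many segments an LV has (and hence how many passes the workaround below will take), something like this should work (<vgname> is a placeholder):

# lvs --segments -o +seg_start,seg_size,devices <vgname>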

I have also found a "feature" that doubles as a workaround:

- start the pvmove, then issue a ctrl-c after getting some output (x % done)
- the pvmove is supposed to stop at that point, but in fact it continues in the background!
- the pvmove continues up to the next segment boundary
- if the LV you are moving is multi-segmented, the first segment is moved successfully, but you have to issue a pvmove again (and a ctrl-c) to move the next segment (the full loop is sketched below)
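
Put together, one pass of the workaround looks roughly like this (/dev/sdk1 is taken from Jeremy's mail; adjust to your PV):

# pvmove /dev/sdk1          <- wait for a few "x.x% done" lines, then ctrl-c
# pvs --segments /dev/sdk1

Despite the interrupt, the move keeps running in the background up to the next segment boundary; repeat the pvmove + ctrl-c until pvs shows no allocated segments left on the PV. Note that pvmove without a destination moves the extents to any free space in the VG; add a target PV if you care where they land.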

I have succeeded in moving about 1 TB of data from one SAN to another with that feature!

I think there are two problems here:
- a problem with pvmove output and pvmove background operation. Note that I had to do a pvmove + ctrl-c, and that pvmove -b does not work. This is probably an LVM problem, as I have found other users with the same issue under Ubuntu (and without cman/clvmd)
- a problem with the clvmd locking done at the end of each segment: at that point the new pvmove mirror segment is made the currently allocated one, the old one is freed, and the other members of the cluster are informed (clvmd locking + segment marking + clvmd unlocking). This operation seems to fail with a locking error.
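
While a move is running, the temporary pvmove mirror shows up as a hidden LV (LVM names it pvmove0), so you can watch where the per-segment hand-over happens with something like this (the copy_percent field is my assumption for this lvm2 version):

# lvs -a -o name,copy_percent <vgname>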

Regards,

-- 

Alain RICHARD <mailto:alain.richard@xxxxxxxxxxx>

EQUATION SA <http://www.equation.fr/>

Tel : +33 477 79 48 00     Fax : +33 477 79 48 01

Client/server applications, network and Linux engineering


--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
