Re: problems with clvmd and lvms on rhel6.1

On 08/10/2012 11:07 AM, Poós Krisztián wrote:
Dear all,

I hope someone has run into this problem in the past and can help
me resolve this issue.

There is a 2-node RHEL cluster, with quorum.
There are clustered LVs, with the -c- (clustered) flag set.
When I start clvmd, all the clustered LVs come online.

After this, if I start rgmanager, it deactivates all the volumes and
is then unable to activate them again: the devices no longer exist
during service startup, so the service fails.
All LVs are left without the active flag.

I can bring the service up manually, but only if, after clvmd has
started, I first deactivate the LVs by hand with lvchange -an <lv>.
After that, rgmanager can bring the service online without problems.
However, I think rgmanager should do this deactivation itself. The
logs are full of the following:
rgmanager Making resilient: lvchange -an ....
rgmanager lv_exec_resilient failed
rgmanager lv_activate_resilient stop failed on ....
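
The manual workaround described above can be sketched as the following command sequence (the VG/LV names are placeholders for illustration; this assumes a RHEL 6 cluster with clvmd and rgmanager, and needs a live cluster to actually run):

```shell
# Start clvmd; on this cluster it activates all clustered LVs.
service clvmd start

# Workaround: deactivate the clustered LV by hand before starting
# rgmanager, so its lvm resource agent can activate it itself.
# (vg_cluster/lv_data is a hypothetical name, not from the post.)
lvchange -an vg_cluster/lv_data

# With the LV inactive, rgmanager can now start the service cleanly.
service rgmanager start
```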

Also, the lvs/clvmd commands themselves sometimes hang, and I have to
restart clvmd (sometimes kill it) to make them work again.

Does anyone have an idea what to check?

Thanks and regards,
Krisztian

Please paste your cluster.conf file with minimal edits.
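
For comparison, the HA-LVM portion of a cluster.conf usually looks something like the fragment below. All names here are hypothetical placeholders, not the poster's actual configuration; this is only a sketch of how an rgmanager service wraps an lvm resource:

```xml
<!-- Hypothetical fragment: an rgmanager service managing a clustered LV. -->
<rm>
  <resources>
    <lvm name="halvm" vg_name="vg_cluster" lv_name="lv_data"/>
    <fs name="data_fs" device="/dev/vg_cluster/lv_data"
        mountpoint="/data" fstype="ext4"/>
  </resources>
  <service autostart="1" name="data_svc">
    <lvm ref="halvm">
      <fs ref="data_fs"/>
    </lvm>
  </service>
</rm>
```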

--
Digimer
Papers and Projects: https://alteeve.com

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
