Re: Issue with clvmd - Is it really a bug?


 



Theophanis Kontogiannis wrote:
Hello,

I have a two-node cluster at home running CentOS 5 (64-bit, AMD X2) with DRBD:

2.6.18-92.1.6.el5.centos.plus

drbd82-8.2.6-1.el5.centos

lvm2-2.02.32-4.el5

lvm2-cluster-2.02.32-4.el5

system-config-lvm-1.1.3-2.0.el5

I do not know if my problem is directly related to http://kbase.redhat.com/faq/FAQ_51_10471.shtm and https://bugzilla.redhat.com/show_bug.cgi?id=138396

I do:

pvcreate --metadatacopies 2 /dev/drbd0 /dev/drbd1

vgcreate -v vg0 -c y /dev/drbd0 /dev/drbd1

lvcreate -v -L 348G -n data0 vg0

Then I reboot.

The LV never becomes available.

If I try

vgchange -a y

I get

Error locking on node tweety-1: Volume group for uuid not found: 7Z9ra5zee3ZK7pNpfsblvtMOWXhgkZVEiJrzRQshaaiN5JKtJtkPDkQWfFXYKVVa

  0 logical volume(s) in volume group "vg0" now active

If I do

clvmd -R

Then with

vgchange -a y vg0

the LV becomes available.

Is this really related to the above-mentioned bug?

How can I make the LV become available during boot up without any intervention?

Thank you all for your time,

As you're using drbd for the PV, I think it might be to do with startup ordering. If drbd is started AFTER clvmd, then clvmd won't see the devices, and you'll get exactly the symptoms you describe.

If you can, move drbd to start before clvmd, or clvmd to start after drbd. Failing that, put the extra commands you used above into their own startup script.
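A minimal sketch of that "own startup script" approach on CentOS 5 might look like the SysV init script below. The script name, the chkconfig priority 72 (assumed to fall after drbd's usual start priority of 70), and the VG name vg0 are assumptions based on this thread, not a tested recipe; adjust them to match your system.

```shell
#!/bin/sh
# /etc/init.d/activate-clvm-lvs  (hypothetical name)
# chkconfig: 345 72 08
# description: Re-scan clvmd's device list and activate clustered LVs
#              after DRBD has brought up /dev/drbd*.
# Assumes the drbd init script starts at priority 70; verify locally.

case "$1" in
  start)
    # Ask clvmd (on all nodes) to re-read the block device list,
    # so the /dev/drbd* PVs that appeared after clvmd started are seen.
    clvmd -R
    # Activation can now find the VG metadata on the DRBD devices.
    vgchange -a y vg0
    ;;
  stop)
    # Deactivate before DRBD goes away on shutdown.
    vgchange -a n vg0
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac
```

Enable it with `chkconfig --add activate-clvm-lvs`. The cleaner fix is still to reorder the existing init scripts so drbd starts before clvmd, which makes the extra script unnecessary.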
--

Chrissie

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
