On Tue, Jan 26, 2010 at 3:29 PM, Terry <td3201@xxxxxxxxx> wrote:
> I have a two-node cluster that was working fine, but now one of my
> nodes is not able to see all of my clustered volumes (clvmd):
>
> [root@omadvnfs01a ~]# vgscan
>   Reading all physical volumes. This may take a while...
>   Skipping clustered volume group vg_data01h
>   Skipping clustered volume group vg_data01e
>   Found volume group "VolGroup02" using metadata type lvm2
>   Skipping clustered volume group vg_data01b
>   Skipping clustered volume group vg_data01d
>   Skipping clustered volume group vg_data01a
>   Skipping clustered volume group vg_data01c
>   Skipping clustered volume group vg_data01i
>   Found volume group "VolGroup00" using metadata type lvm2
>
> The other node is fine:
>
> [root@omadvnfs01b ~]# vgscan
>   Reading all physical volumes. This may take a while...
>   Found volume group "vg_data01h" using metadata type lvm2
>   Found volume group "vg_data01e" using metadata type lvm2
>   Found volume group "vg_data01d" using metadata type lvm2
>   Found volume group "vg_data01b" using metadata type lvm2
>   Found volume group "vg_data01a" using metadata type lvm2
>   Found volume group "vg_data01c" using metadata type lvm2
>   Found volume group "VolGroup02" using metadata type lvm2
>   Found volume group "vg_data01i" using metadata type lvm2
>   Found volume group "VolGroup00" using metadata type lvm2
>
> I am not sure how to troubleshoot this. I see clvmd running on both
> nodes. The cluster appears to be fine other than this. Of course I
> have tried restarting the entire cluster. Any thoughts?

Well, I fail epically. Apparently locking_type got set to 1 somehow. I
can't imagine a patch would have done it, but that was the reason things
weren't working for me.
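For anyone who hits the same symptom: the setting lives in
/etc/lvm/lvm.conf, and every node that should see the clustered VGs needs
clustered locking enabled there. Roughly what the fixed file looks like
(a sketch, assuming a stock RHEL-style clvmd setup; your distro's
defaults and comments may differ):

    # /etc/lvm/lvm.conf -- keep this identical on both cluster nodes
    global {
        # locking_type = 1 is local, file-based locking; with it,
        # vgscan prints "Skipping clustered volume group ..." as above.
        # locking_type = 3 is the built-in clustered locking that talks
        # to clvmd, which is what a clvmd cluster needs.
        locking_type = 3
    }

If I remember right, the lvmconf script that ships with the lvm2-cluster
package makes the same edit for you; clvmd then needs a restart before
vgscan will pick the clustered VGs up again:

    lvmconf --enable-cluster
    service clvmd restart
    vgscan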