Re: clvmd does not create vgs

Hi,

Thanks for your reply. pvscan finds the newly created PV, but vgscan can't find any new volume group on node2. The locking type is set to 3 on both nodes. When running in debug mode with "clvmd -d" I get these messages:

CLVMD[7b44a820]: Mar  8 15:31:46 add_to_lvmqueue: cmd=0x1b7fe2a0. client=0x66eb20, msg=0x7fff2b3c709c, len=31, csid=0x7fff2b3c6fe4, xid=0
CLVMD[42acf940]: Mar  8 15:31:46 process_work_item: remote
CLVMD[42acf940]: Mar  8 15:31:46 process_remote_command LOCK_VG (0x33) for clientid 0x5000000 XID 164 on node 10.33.231.98
CLVMD[42acf940]: Mar  8 15:31:46 Dropping metadata for VG #orphans
CLVMD[42acf940]: Mar  8 15:31:46 LVM thread waiting for work
CLVMD[7b44a820]: Mar  8 15:31:46 add_to_lvmqueue: cmd=0x1b7fe2a0. client=0x66eb20, msg=0x7fff2b3c709c, len=38, csid=0x7fff2b3c6fe4, xid=0
CLVMD[42acf940]: Mar  8 15:31:46 process_work_item: remote
CLVMD[42acf940]: Mar  8 15:31:46 process_remote_command LOCK_VG (0x33) for clientid 0x5000000 XID 165 on node 10.33.231.98
CLVMD[42acf940]: Mar  8 15:31:46 Dropping metadata for VG storage_cluster
CLVMD[42acf940]: Mar  8 15:31:46 LVM thread waiting for work
CLVMD[7b44a820]: Mar  8 15:31:46 add_to_lvmqueue: cmd=0x1b7fe2a0. client=0x66eb20, msg=0x7fff2b3c709c, len=31, csid=0x7fff2b3c6fe4, xid=0
CLVMD[42acf940]: Mar  8 15:31:46 process_work_item: remote
CLVMD[42acf940]: Mar  8 15:31:46 process_remote_command LOCK_VG (0x33) for clientid 0x5000000 XID 166 on node 10.33.231.98
CLVMD[42acf940]: Mar  8 15:31:46 Dropping metadata for VG #orphans
CLVMD[42acf940]: Mar  8 15:31:46 LVM thread waiting for work
CLVMD[7b44a820]: Mar  8 15:31:46 add_to_lvmqueue: cmd=0x1b7fe2a0. client=0x66eb20, msg=0x7fff2b3c709c, len=36, csid=0x7fff2b3c6fe4, xid=0
CLVMD[42acf940]: Mar  8 15:31:46 process_work_item: remote
CLVMD[42acf940]: Mar  8 15:31:46 process_remote_command VG_BACKUP (0x2b) for clientid 0x5000000 XID 168 on node 10.33.231.98
CLVMD[42acf940]: Mar  8 15:31:46 Triggering backup of VG metadata for storage_cluster. suspended=0
  Error backing up metadata, can't find VG for group storage_cluster
CLVMD[42acf940]: Mar  8 15:31:46 LVM thread waiting for work
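
(For reference, one way to double-check the locking setup on each node; just a sketch, assuming the stock lvm.conf location and the standard clvmd init script:)

grep -E '^[[:space:]]*locking_type' /etc/lvm/lvm.conf   # should print: locking_type = 3
service clvmd status                                    # clvmd should be running on both nodes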

Best regards,

Michael



----- Original Message ----
From: brem belguebli <brem.belguebli@xxxxxxxxx>
To: linux clustering <linux-cluster@xxxxxxxxxx>
Sent: Monday, 8 March 2010, 13:49:44
Subject: Re: clvmd does not create vgs

Have you tried running pvscan/vgscan on node2 after the creation on node1?
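
For example, something like this on node2 (just a minimal sequence):

pvscan    # rescan all block devices for PV labels
vgscan    # rescan for volume groups
vgs       # check whether the clustered VG now shows up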


2010/3/8 Michael <st0rm2oo3@xxxxxxxx>:
> No suggestions?
>
> Thanks
>
>
>
> ----- Original Message ----
> From: Michael <st0rm2oo3@xxxxxxxx>
> To: linux-cluster@xxxxxxxxxx
> Sent: Thursday, 4 March 2010, 11:12:51
> Subject: clvmd does not create vgs
>
> Hi,
>
> We are currently using cman, clvmd, and DRBD (a quick check of the stack is sketched after the package list below). The problem occurs when we want to create volume groups. DRBD shows that both nodes are UpToDate. So first I try to create a Physical Volume:
>
> [root@node1 log]# pvcreate /dev/drbd/by-res/repdata
>  Physical volume "/dev/drbd/by-res/repdata" successfully created
>
> [root@node2 tmp]# pvs
>  PV                      VG         Fmt  Attr PSize   PFree
>  /dev/VolGroup00/storage            lvm2 --     1,14T 1,14T
>  /dev/sda2               VolGroup00 lvm2 a-   557,62G    0
>  /dev/sdb1               VolGroup00 lvm2 a-     1,09T    0
>
> Well, /dev/VolGroup00/storage shows up as a PV on both nodes. Fine. Next I want to create a Volume Group:
>
> [root@node1 log]# vgcreate storage_cluster /dev/drbd/by-res/repdata
> vgcreate storage_cluster /dev/drbd0
>  Clustered volume group "storage_cluster" successfully created
>
> [root@node1 log]# vgs
>  VG              #PV #LV #SN Attr   VSize VFree
>  VolGroup00        2   3   0 wz--n- 1,63T    0
>  storage_cluster   1   0   0 wz--nc 1,14T 1,14T
>
>
> [root@node2 tmp]# vgs
>  VG         #PV #LV #SN Attr   VSize VFree
>  VolGroup00   2   3   0 wz--n- 1,63T    0
>
> Hmmm... there is no VG on node2. I restarted the whole server, but still no VG is visible. What could be wrong?
>
> We're using CentOS 5.4 with all patches installed.
>
> cman-2.0.115-1.el5_4.9
> lvm2-2.02.46-8.el5_4.2
> lvm2-cluster-2.02.46-8.el5_4.1
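>
> (For completeness, a quick way to confirm the cman/clvmd/DRBD stack on each node; just a sketch, assuming the standard CentOS init scripts and the /proc/drbd interface:)
>
> service cman status     # is the cluster manager running?
> service clvmd status    # is the clustered LVM daemon running?
> cat /proc/drbd          # connection state and disk states (UpToDate/UpToDate)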
>
> Thanks in advance and best regards,
>
> Michael
>





--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster

