Re: lvm2-cluster not syncing correctly?


> Hi,

> Are you sure the clustered bit is set on the VG?
> http://sources.redhat.com/cluster/wiki/FAQ/CLVM#clvmd_clustered

> Bob Peterson
> Red Hat File Systems

Yes, the volume group was created with vgcreate -cy, and the output of vgs shows the "c" flag: -

  VG            #PV #LV #SN Attr   VSize   VFree
  vg00            1  13   0 wz--n-  49.88G 27.16G
  vgGWPOCSHARED   1   8   0 wz--nc 199.98G 29.98G
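
Just to be thorough - I believe the clustered flag can also be reported as a named field, and set on an existing VG with vgchange, along these lines (field name quoted from memory, so treat it as a sketch): -

  # report the clustered flag explicitly
  vgs -o vg_name,vg_attr,vg_clustered vgGWPOCSHARED

  # set (or clear) the clustered bit on an existing VG
  vgchange -cy vgGWPOCSHARED
  vgchange -cn vgGWPOCSHARED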

You can see below an example using vgs and lvs to show the issue: -

Initial state: -

[root@ybsxlx89 ~]# vgs ; lvs
  VG            #PV #LV #SN Attr   VSize   VFree
  vg00            1  13   0 wz--n-  49.88G 27.16G
  vgGWPOCSHARED   1  10   0 wz--nc 199.98G 27.98G
  LV              VG            Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  esmlv           vg00          -wi-ao 480.00M
  lvol1           vg00          -wi-ao   1.00G
  lvol2           vg00          -wi-ao   4.00G
  lvol3           vg00          -wi-ao   3.91G
  lvol4           vg00          -wi-ao   1.00G
  lvol5           vg00          -wi-ao   1.00G
  lvol6           vg00          -wi-ao 256.00M
  netbackuplv     vg00          -wi-ao 512.00M
  oraclelv        vg00          -wi-ao   5.00G
  tivolilv        vg00          -wi-ao  64.00M
  u001lv          vg00          -wi-ao   5.00G
  u003lv          vg00          -wi-ao 512.00M
  ybslv           vg00          -wi-ao  32.00M
  aserver         vgGWPOCSHARED -wi-a-  10.00G
  fmw1            vgGWPOCSHARED -wi-a-  50.00G
  fmw2            vgGWPOCSHARED -wi-a-  50.00G
  gwpoc_cluster   vgGWPOCSHARED -wi-a-  20.00G
  gwpoc_instance1 vgGWPOCSHARED -wi-a-  10.00G
  gwpoc_instance2 vgGWPOCSHARED -wi-a-  10.00G
  mserver1        vgGWPOCSHARED -wi-a-  10.00G
  mserver2        vgGWPOCSHARED -wi-a-  10.00G
  test            vgGWPOCSHARED -wi-a-   1.00G
  test2           vgGWPOCSHARED -wi-a-   1.00G


I next run "lvcreate -n test3 -L 1G vgGWPOCSHARED" on the other node, and then run the vgs and lvs commands on this node again: -


[root@ybsxlx89 ~]# vgs ; lvs
  VG            #PV #LV #SN Attr   VSize   VFree
  vg00            1  13   0 wz--n-  49.88G 27.16G
  vgGWPOCSHARED   1  10   0 wz--nc 199.98G 27.98G
  LV              VG            Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  esmlv           vg00          -wi-ao 480.00M
  lvol1           vg00          -wi-ao   1.00G
  lvol2           vg00          -wi-ao   4.00G
  lvol3           vg00          -wi-ao   3.91G
  lvol4           vg00          -wi-ao   1.00G
  lvol5           vg00          -wi-ao   1.00G
  lvol6           vg00          -wi-ao 256.00M
  netbackuplv     vg00          -wi-ao 512.00M
  oraclelv        vg00          -wi-ao   5.00G
  tivolilv        vg00          -wi-ao  64.00M
  u001lv          vg00          -wi-ao   5.00G
  u003lv          vg00          -wi-ao 512.00M
  ybslv           vg00          -wi-ao  32.00M
  aserver         vgGWPOCSHARED -wi-a-  10.00G
  fmw1            vgGWPOCSHARED -wi-a-  50.00G
  fmw2            vgGWPOCSHARED -wi-a-  50.00G
  gwpoc_cluster   vgGWPOCSHARED -wi-a-  20.00G
  gwpoc_instance1 vgGWPOCSHARED -wi-a-  10.00G
  gwpoc_instance2 vgGWPOCSHARED -wi-a-  10.00G
  mserver1        vgGWPOCSHARED -wi-a-  10.00G
  mserver2        vgGWPOCSHARED -wi-a-  10.00G
  test            vgGWPOCSHARED -wi-a-   1.00G
  test2           vgGWPOCSHARED -wi-a-   1.00G

test3 has not appeared, despite being visible on the other node where it was created.  So I stop and start clvmd (no gfs2 filesystems mounted): -

[root@ybsxlx89 ~]# service clvmd stop
Deactivating clustered VG(s):   0 logical volume(s) in volume group "vgGWPOCSHARED" now active
                                                           [  OK  ]
Signaling clvmd to exit                                    [  OK  ]
clvmd terminated                                           [  OK  ]
[root@ybsxlx89 ~]# service clvmd start
Starting clvmd:
Activating VG(s):   11 logical volume(s) in volume group "vgGWPOCSHARED" now active
  13 logical volume(s) in volume group "vg00" now active
                                                           [  OK  ]

Now the test3 LV appears: -

[root@ybsxlx89 ~]# vgs ; lvs
  VG            #PV #LV #SN Attr   VSize   VFree
  vg00            1  13   0 wz--n-  49.88G 27.16G
  vgGWPOCSHARED   1  11   0 wz--nc 199.98G 26.98G
  LV              VG            Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  esmlv           vg00          -wi-ao 480.00M
  lvol1           vg00          -wi-ao   1.00G
  lvol2           vg00          -wi-ao   4.00G
  lvol3           vg00          -wi-ao   3.91G
  lvol4           vg00          -wi-ao   1.00G
  lvol5           vg00          -wi-ao   1.00G
  lvol6           vg00          -wi-ao 256.00M
  netbackuplv     vg00          -wi-ao 512.00M
  oraclelv        vg00          -wi-ao   5.00G
  tivolilv        vg00          -wi-ao  64.00M
  u001lv          vg00          -wi-ao   5.00G
  u003lv          vg00          -wi-ao 512.00M
  ybslv           vg00          -wi-ao  32.00M
  aserver         vgGWPOCSHARED -wi-a-  10.00G
  fmw1            vgGWPOCSHARED -wi-a-  50.00G
  fmw2            vgGWPOCSHARED -wi-a-  50.00G
  gwpoc_cluster   vgGWPOCSHARED -wi-a-  20.00G
  gwpoc_instance1 vgGWPOCSHARED -wi-a-  10.00G
  gwpoc_instance2 vgGWPOCSHARED -wi-a-  10.00G
  mserver1        vgGWPOCSHARED -wi-a-  10.00G
  mserver2        vgGWPOCSHARED -wi-a-  10.00G
  test            vgGWPOCSHARED -wi-a-   1.00G
  test2           vgGWPOCSHARED -wi-a-   1.00G
  test3           vgGWPOCSHARED -wi-a-   1.00G
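
Restarting clvmd clears it every time, but as I understand it clvmd also has a refresh option that tells all the running clvmd daemons in the cluster to re-read their device cache, which ought to achieve the same thing without a full stop/start (I haven't yet confirmed it helps in this particular case): -

  # ask every clvmd in the cluster to refresh its cached metadata
  clvmd -R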

It is interesting that if I do an lvcreate on one node and then run "ls -l /dev/mapper/vgGWPOCSHARED*" on the other node, the device-mapper entry has already been created there, so I can still use the new LV for filesystems etc.  The problem seems restricted to the LVM-specific commands.
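
For example, immediately after the lvcreate on the other node, a comparison along these lines on this node shows the mismatch (a sketch only - the device-mapper side sees the new LV, the LVM reporting side does not): -

  # device-mapper already knows about the new LV on this node...
  dmsetup ls | grep vgGWPOCSHARED
  ls -l /dev/mapper/vgGWPOCSHARED*

  # ...whereas the LVM reporting commands still show the stale metadata
  lvs vgGWPOCSHARED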


Simon





