Patrick Caulfield wrote:
D Canfield wrote:
I'm trying to build my first GFS cluster (2-node on a SAN) on RHEL4. I
can get things up and running manually, but I'm having some trouble
getting the process to automate smoothly.
The first issue is that after I install the lvm2-cluster RPM, I can no
longer boot the machine cleanly, because my /var/log partition is on a
separate LVM volume group (it's still a standard ext3 partition; I just
keep all my logs on a RAID10 array in a different area of the SAN for
performance) and the presence of the clvm locking library seems to
prevent vgchange from running at boot time, since clvmd isn't yet
running. Here I'm assuming I'm just missing something obvious, but I
have no idea what.
You need to mark cluster VGs as clustered (vgchange -cy) and non-clustered VGs
as non-clustered (vgchange -cn). You can't have non-clustered LVs in a
clustered VG (though it doesn't look like you're doing that).
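For reference, the flag changes described above would look something like this (a sketch using the VG names from the listing further down; setting -cy needs the cluster infrastructure up):

```shell
# Mark the shared SAN volume group as clustered, so its locking is
# handled by clvmd:
vgchange -cy VolGroupMailGFS

# Mark the local-only volume groups as non-clustered, so they can be
# activated at boot before clvmd is running:
vgchange -cn VolGroup00
vgchange -cn VolGroup01
```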
The activation for local VGs should then have the --ignorelockingfailure flag
passed to the LVM commands (which should also only be activating the local
VGs), so activation will carry on even if the cluster locking attempt fails.
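The local activation mentioned above amounts to roughly the following (a sketch of what the rc.sysinit initscript runs, not a replacement for it):

```shell
# Activate logical volumes even if cluster locking is unavailable,
# so the local (non-clustered) VGs come up before clvmd has started:
vgchange -a y --ignorelockingfailure
```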
I see that the ignorelockingfailure flag was already in the RHEL4
initscripts, and a bit more testing turned up some additional
information. If I have lvm2-cluster installed, the boot process errors
out to the maintenance shell when it tries to fsck my /var/log
partition. If I look in /dev/mapper, VolGroup01 has not been activated
(though higher up in the boot log, vgscan did see it). But from the
maintenance shell, I can go ahead and run vgchange -a y
--ignorelockingfailure (just as rc.sysinit does 2-3 times by the point
it reaches the fsck), and VolGroup01 activates just fine.
If I remove the lvm2-cluster RPM, the machine boots up fine. Also, if I
leave the lvm2-cluster RPM installed but change the fstab options from
"defaults 0 2" to "defaults 0 0", it will skip the fsck, and by the time
the machine is booted, the /var/log partition has indeed been mounted (I
think it gets mounted after clvmd starts).
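The fstab change described above is just the sixth field (fs_passno); a hypothetical entry for this /var/log LV:

```
# /etc/fstab -- "defaults 0 2" fscks at boot (which fails before clvmd
# is running); "defaults 0 0" skips the boot-time fsck entirely:
/dev/VolGroup01/LogVolLogs  /var/log  ext3  defaults  0 0
```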
I've checked that the clustered flag is off (-c n) on this local volume
group, but that doesn't seem to make a difference. I've listed a few
outputs below.
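One quick way to double-check the clustered flag on each VG is the sixth character of the vg_attr column from `vgs`; a sketch parsing sample output (the attr strings here are assumed for illustration, not taken from the listing below):

```shell
# Hypothetical output of: vgs --noheadings -o vg_name,vg_attr
# The 6th attribute character is 'c' when a VG is marked clustered.
vgs_output='  VolGroupMailGFS   wz--nc
  VolGroup00        wz--n-
  VolGroup01        wz--n-'

# Print only the VGs whose 6th attr character is "c" (clustered)
clustered=$(echo "$vgs_output" | awk 'substr($2,6,1)=="c" {print $1}')
echo "$clustered"
```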
Any other thoughts? Thanks much.
# vgdisplay
  --- Volume group ---
  VG Name               VolGroupMailGFS
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  Clustered             yes
  Shared                no
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               341.62 GB
  PE Size               16.00 MB
  Total PE              21864
  Alloc PE / Size       21864 / 341.62 GB
  Free PE / Size        0 / 0
  VG UUID               ehOhtR-cYE8-xjls-Qle0-eT71-DmZO-p5ur6v
  --- Volume group ---
  VG Name               VolGroup01
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               4.98 GB
  PE Size               16.00 MB
  Total PE              319
  Alloc PE / Size       318 / 4.97 GB
  Free PE / Size        1 / 16.00 MB
  VG UUID               3Xuzas-tiX2-DgPG-71JH-dB2O-U1qH-SCdgGD
  --- Volume group ---
  VG Name               VolGroup00
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               7.89 GB
  PE Size               16.00 MB
  Total PE              505
  Alloc PE / Size       504 / 7.88 GB
  Free PE / Size        1 / 16.00 MB
  VG UUID               cYiUzS-QlnZ-PF50-0kAO-kYL0-V3Yw-dXwBIe
# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sdc
  VG Name               VolGroupMailGFS
  PV Size               341.62 GB / not usable 0
  Allocatable           yes (but full)
  PE Size (KByte)       16384
  Total PE              21864
  Free PE               0
  Allocated PE          21864
  PV UUID               NYWZVb-yKBl-o7dR-Xq9s-0z3A-VFS0-wxzwc1
  --- Physical volume ---
  PV Name               /dev/sdb1
  VG Name               VolGroup01
  PV Size               4.98 GB / not usable 0
  Allocatable           yes
  PE Size (KByte)       16384
  Total PE              319
  Free PE               1
  Allocated PE          318
  PV UUID               EFIqWw-SvP6-OWGV-u350-mwyx-5lJQ-29ksqz
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               VolGroup00
  PV Size               7.89 GB / not usable 0
  Allocatable           yes
  PE Size (KByte)       16384
  Total PE              505
  Free PE               1
  Allocated PE          504
  PV UUID               qR2QxR-KuPF-Wsvc-w0yv-d7rK-3NlY-wLRREb
# lvdisplay
  --- Logical volume ---
  LV Name                /dev/VolGroupMailGFS/LogVolHome
  VG Name                VolGroupMailGFS
  LV UUID                7bE2Zt-27A2-OHga-qFDI-QnNc-m21r-LUaXEm
  LV Write Access        read/write
  LV Status              NOT available
  LV Size                341.62 GB
  Current LE             21864
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  --- Logical volume ---
  LV Name                /dev/VolGroup01/LogVolLogs
  VG Name                VolGroup01
  LV UUID                01rj7U-809c-jHmg-n6y7-md6Z-yYlF-NYMxCi
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                4.97 GB
  Current LE             318
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:2
  --- Logical volume ---
  LV Name                /dev/VolGroup00/LogVolRoot
  VG Name                VolGroup00
  LV UUID                YFunW2-SKSz-T6pZ-7Agf-AFvO-W411-bfX3Q1
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                6.88 GB
  Current LE             440
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:0
  --- Logical volume ---
  LV Name                /dev/VolGroup00/LogVolSwap
  VG Name                VolGroup00
  LV UUID                uvuww5-PzDY-79pc-hxtk-33Rl-L2tI-Kp9IDb
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1.00 GB
  Current LE             64
  Segments               1
  Allocation             inherit
  Read ahead sectors     0
  Block device           253:1
--
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster