D Canfield wrote:
> I'm trying to build my first GFS cluster (2-node on a SAN) on RHEL4,
> and I can get things up and running manually, but I'm having some
> trouble getting the process to automate smoothly.
>
> The first issue is that after I install the lvm2-cluster RPM, I can
> no longer boot the machine cleanly because my /var/log partition is
> on a separate LVM volume group (it's still a standard ext3 partition,
> I just keep all my logs on a RAID10 array in a different area of the
> SAN for performance), and the presence of the clvm library seems to
> prevent vgchange from running at boot time since clvmd isn't yet
> running. On this part I'm assuming I'm just missing something
> obvious, but I have no idea what.

You need to mark cluster VGs as clustered (vgchange -cy) and
non-clustered VGs as non-clustered (vgchange -cn). You can't have
non-clustered LVs in a clustered VG (though it doesn't look like
you're doing that). The boot-time activation of the local VGs should
then pass the --ignorelockingfailure flag to the LVM commands (and
should only be activating the local VGs), so it will carry on even if
the cluster locking attempt fails.

> The second issue is that GFS doesn't seem to allow an automatic way
> to actually mount the GFS partitions once clvmd is started. This is
> a bit of an issue since the partition I am going to want to mount in
> most cases is /home, and even if I put a mount line in /etc/rc.local,
> that means services like imap (on this cluster) or samba (on the
> next one) will be up and trying to serve items out of the home
> directories before the directories exist.
>
> Sorry if I'm being brain dead on this; the fact that I couldn't find
> any reference to it anywhere else suggests I probably am. Can anyone
> offer any hints?

-- 
patrick
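
For reference, a minimal sketch of the marking and boot-time
activation described above; the VG names vg_cluster and vg_local are
placeholders, not from the original post:

  # Mark the shared VG clustered and the local VG non-clustered;
  # the clustered flag is what makes activation require clvmd.
  vgchange -cy vg_cluster
  vgchange -cn vg_local

  # Early-boot activation of the local VG only; --ignorelockingfailure
  # lets LVM carry on even though clvmd isn't running yet.
  vgchange -ay --ignorelockingfailure vg_local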