On Tue, 2005-11-15 at 07:50 -0800, Jeff Dinisco wrote:
> I'm trying to implement gfs on FC4. I'm following the redhat docs. I
> do not have the gui.
>
> I'm confused about the fencing piece. It implies that fencing is
> required to allow a service to run on only a single node in a cluster
> which ensures data integrity. Makes sense if your data resides on
> ext3 or an equivalent. But it seems to defeat the purpose of GFS.

This is actually incorrect. If a node holds a lock on GFS metadata and
live-hangs long enough for the rest of the cluster to decide it is dead,
it will eventually wake up still believing it holds that lock. If it then
alters the metadata, thinking it is safe to do so, it will corrupt the
file system. Fencing prevents the live-hung node from waking up and
corrupting the GFS file system.

> In fact, I really only want to utilize gfs and don't want my app to be
> a service or eligible for failover. I basically want 3 nodes to serve
> data from the same filesystem. If an app/service/node crashes or
> fails it's fine, I'll still have 2 serving the same data via the same
> apps.

You need fencing to preserve both metadata and data integrity in GFS
clusters, even without failover.

> So my questions are, how should I configure fencing?

If you don't mind doing manual intervention after every failure, you can
use manual fencing. However, I *strongly* recommend against it. IIRC,
there are several example configurations for different power switches in
the archives of this mailing list.

> Are there aspects of the cluster I should/can leave out of the mix
> since I really only want gfs functionality, not ha functionality.
> Thanks

You do not need rgmanager if you don't need the failover part. GFS and
rgmanager talk to the same cluster infrastructure; both need
CMAN/DLM/ccsd/fenced.

-- Lon

--
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
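P.S. For illustration only, here is a minimal /etc/cluster/cluster.conf
sketch for a 3-node GFS-only cluster using the fence_manual agent, in the
style of the RHCS-era cluster suite that shipped around FC4. The cluster
name and node hostnames are hypothetical, and note again that manual
fencing is strongly discouraged for anything but testing:

```xml
<?xml version="1.0"?>
<cluster name="gfs_cluster" config_version="1">
  <clusternodes>
    <!-- Each node gets one vote; quorum is 2 of 3 -->
    <clusternode name="node1" votes="1">
      <fence>
        <method name="1">
          <!-- fence_manual requires an admin to confirm the node is
               really down (fence_ack_manual) before recovery proceeds -->
          <device name="human" nodename="node1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node2" votes="1">
      <fence>
        <method name="1">
          <device name="human" nodename="node2"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node3" votes="1">
      <fence>
        <method name="1">
          <device name="human" nodename="node3"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <!-- Replace fence_manual with a real power-switch agent
         (e.g. fence_apc) for production use -->
    <fencedevice name="human" agent="fence_manual"/>
  </fencedevices>
  <!-- No <rm> (rgmanager) section: GFS only, no HA services -->
</cluster>
```

There is deliberately no rgmanager (&lt;rm&gt;) section here, matching the
advice above: CMAN, DLM, ccsd, and fenced are still required for GFS.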