http://sources.redhat.com/cluster/ has a link to
http://sources.redhat.com/ml/cluster-cvs/, which gives a 404.  On the same
page there is also a link to usage.txt from CVS, which references
ftp://sources.redhat.com/pub/cluster/, and that directory doesn't seem to
exist either.

It would be easier for people to test this stuff if there were pre-built
kernel RPMs for their favourite distribution.  I can look into doing this
for Fedora Core 2 if no one else is doing that yet.

It is generally unclear to me how CLVM works and what kind of shared
storage gfs needs.  I found links in many places to the GFS HOWTO at
http://www.sistina.com/gfs/Pages/howto.html, but that just redirects me to
RH's GFS sales pitch.

There are still various references to sistina in the tree (see the small
sketch at the end of this mail for how the ccsd.c call sites seem to fit
together):

./ccs/daemon/ccsd.c:#define DEFAULT_CCSD_LOCKFILE "/var/run/sistina/ccsd.pid"
./ccs/daemon/ccsd.c: if(!strncmp(lockfile, "/var/run/sistina/", 17)){
./ccs/daemon/ccsd.c: if(stat("/var/run/sistina", &stat_buf)){
./ccs/daemon/ccsd.c: if(mkdir("/var/run/sistina", S_IRWXU)){
./ccs/daemon/ccsd.c: log_err("/var/run/sistina is not a directory.\n"
./cman/tests/qwait.c: (c) 2002 Sistina Software Inc.
./fence/agents/baytech/Makefile: ${top_srcdir}/scripts/define2var ${top_srcdir}/config/copyright.cf perl SISTINA_COPYRIGHT >> $(TARGET)
./fence/agents/baytech/fence_baytech.pl:$SISTINA_COPYRIGHT="";
./fence/agents/baytech/fence_baytech.pl: print "$SISTINA_COPYRIGHT\n" if ( $SISTINA_COPYRIGHT );
./gfs/man/gfs_grow.8:'\" Steven Whitehouse <steve@xxxxxxxxxxx>
./gfs/man/gfs_jadd.8:'\" Steven Whitehouse <steve@xxxxxxxxxxx>
./gulm/man/lock_gulmd.8:\fB/var/run/sistina/lock_gulmd_core.pid\fP
./gulm/man/lock_gulmd.8:\fB/var/run/sistina/lock_gulmd_LTPX.pid\fP
./gulm/man/lock_gulmd.8:\fB/var/run/sistina/lock_gulmd_LT000.pid\fP
./gulm/man/lock_gulmd.8:\fBlock_gulmd\fP does not create the \fIsistina\fR directory in the
./gulm/src/config_ccs.c: "/var/run/sistina") );
./gulm/src/config_main.c: gf->lock_file = strdup("/var/run/sistina");

I'm happy to see a generic cluster manager in your package (looking into
it now).  I really hope there will eventually be a standard cluster
framework out there that everybody will use.
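
For what it's worth, here is a rough sketch of how the ccsd.c fragments
above appear to fit together.  This is only my reading of the grep output,
not the actual code; RUN_DIR, ensure_run_dir and the main() wrapper are
names I made up for illustration.  Hoisting the path into a single define
like this would make an eventual rename away from /var/run/sistina a
one-line change:

#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>

/* Hypothetical names; the real ccsd.c hardcodes the string in several places. */
#define RUN_DIR               "/var/run/sistina"
#define DEFAULT_CCSD_LOCKFILE RUN_DIR "/ccsd.pid"

/* Make sure the run directory exists and is actually a directory,
 * following the stat()/mkdir(..., S_IRWXU) pattern visible in the grep hits. */
static int ensure_run_dir(void)
{
        struct stat stat_buf;

        if (stat(RUN_DIR, &stat_buf)) {
                /* Directory missing: create it owner-only. */
                if (mkdir(RUN_DIR, S_IRWXU)) {
                        perror("mkdir " RUN_DIR);
                        return -1;
                }
        } else if (!S_ISDIR(stat_buf.st_mode)) {
                fprintf(stderr, RUN_DIR " is not a directory.\n");
                return -1;
        }
        return 0;
}

int main(void)
{
        if (ensure_run_dir())
                return 1;
        printf("lockfile would go at %s\n", DEFAULT_CCSD_LOCKFILE);
        return 0;
}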