Fedele Stabile wrote:
> Dear readers,
>
> I have this equipment:
> - 33 PCs for student use
> - 2 servers and 1 SAN connected to the servers via SCSI
> Only the SAN and the servers are powered by a UPS.
>
> The whole setup is on a private network; only the servers are also
> connected to a public network.
>
> I would like to export the SAN disks as a common filesystem for all
> the computers.
>
> I'm thinking it would be a good idea to use the Cluster Suite.
>
> What is your opinion?
>
> I installed CentOS + Cluster Suite and configured the cluster so that
> the servers have enough votes (30 each) to avoid quorum problems when
> student PCs are occasionally switched off.
> The SAN disks carry GFS filesystems and are exported via GNBD.
>
> The cluster works fine even if I power off some of the student PCs,
> but if I power off all 33 student PCs the two servers hang.

I don't know off-hand why this should be, but I would investigate
fencing first. It might be that there is a huge rush to fence all the
nodes at once. Or are you saying that the cluster doesn't keep quorum
even though it should?

Some more information would be helpful here, such as syslog output.

As Bob mentions, it might be best to have the student machines access
the files over NFS rather than GFS, for other reasons as well.

> Remember that
> every student PC has 1 vote and
> every server has 30 votes
> (I can verify this with cman_tool nodes),
>
> so I have (as I can see with cman_tool status):
>
> Nodes: 35
> Total_votes: 93
> Quorum: 47
>
> Also, if all nodes boot simultaneously they hang, waiting
> indefinitely for clvmd to start.

Annoyingly, this doesn't surprise me. We've had a few reports of
clusters with more than 32 nodes behaving oddly when they are all
started up together. Unfortunately we haven't been able to reproduce
this in our labs, so the current advice is "don't do that"!

-- 
patrick

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
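
The vote weighting described in the thread (30 votes per server, 1 per
student PC) is normally declared in /etc/cluster/cluster.conf. A minimal
sketch of the relevant section follows; the hostnames and node IDs are
invented for illustration, since the original post does not give them:

    <clusternodes>
      <clusternode name="server1" nodeid="1" votes="30"/>
      <clusternode name="server2" nodeid="2" votes="30"/>
      <clusternode name="student01" nodeid="3" votes="1"/>
      <!-- ... one entry per student PC, 33 in total, 1 vote each ... -->
    </clusternodes>

A real cluster.conf would also carry a fencing method inside each
<clusternode> entry, omitted here, which is exactly the part Patrick
suggests looking at first.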
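
The cman_tool status figures quoted above are consistent with the usual
majority calculation (assuming cman's standard quorum formula of
floor(total_votes/2) + 1):

    2 servers x 30 votes = 60
    33 PCs    x  1 vote  = 33
    Total_votes          = 93
    Quorum = floor(93 / 2) + 1 = 47

With all 33 student PCs powered off, the two servers alone still hold
60 votes, comfortably above the quorum of 47, so loss of quorum by
itself would not explain the hang; that points back at Patrick's
fencing question rather than at the vote count.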
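
The GNBD export mentioned in the post is typically set up with the
gnbd_export/gnbd_import tools; the device path, export name, mount
point, and hostname below are placeholders, not values from the
original message:

    # on a server (the gnbd_serv daemon must already be running)
    gnbd_export -d /dev/sdb1 -e gfs_disk

    # on each client
    gnbd_import -i server1
    mount -t gfs /dev/gnbd/gfs_disk /mnt/gfs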
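
If the student machines were moved to NFS as suggested, the servers
could keep the GFS mount locally and export it with an ordinary
/etc/exports entry; the mount point and subnet here are assumptions:

    # /etc/exports on a server
    /mnt/gfs   192.168.1.0/24(rw,sync)

    # on a student PC
    mount -t nfs server1:/mnt/gfs /mnt/gfs

That would also take the 33 student PCs out of the cluster entirely,
sidestepping both the quorum weighting and the more-than-32-node
startup problem Patrick mentions.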