----- Original Message -----
> Hello everyone,
>
> I'm new to Linux clustering. I have built a two-node cluster (without
> qdisk) that includes:
>
> Red Hat 6.4
> cman
> pacemaker
> gfs2
>
> My cluster can fail over (back and forth) between the two nodes for these
> three resources: ClusterIP, WebFS (a Filesystem GFS2 resource mounting
> /dev/sdc on /mnt/gfs2_storage), and WebSite (the apache service).
>
> My problem occurs when I stop/start the nodes in the following order
> (starting with both nodes up):
>
> 1. Stop node1 (shutdown) -> all resources fail over to node2 -> all
>    resources keep working on node2
> 2. Stop node2 (stop service: pacemaker, then cman) -> all resources stop
>    (of course)
> 3. Start node1 (start service: cman, then pacemaker) -> only ClusterIP
>    starts; WebFS fails, WebSite does not start
(snip)
> I don't have any clues for tracking this case down; I just guess the
> problem comes from file system locking. Please give me some advice.

Hi,

Some thoughts on your problem:

(1) If this is truly Red Hat 6.4, and you have a support contract with Red
    Hat, you should call the support number for Global Support Services and
    file a ticket. They'll be able to help.

(2) You didn't explain what your symptoms were. In what way does it fail?

(3) Why do you suspect "this problem comes from file system locking"? Do
    you mean from GFS2? What is the symptom that causes you to think it
    might be the file system? Were there messages on the console or in
    dmesg to indicate a kernel issue?

(4) I thought RHEL 6.4 has cman/rgmanager, not pacemaker.

Regards,

Bob Peterson
Red Hat File Systems

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
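
To gather the evidence points (2) and (3) ask about, a minimal diagnostic
sketch for the cman + pacemaker stack described above might look like the
following (run on node1 right after step 3 fails; the resource name WebFS
is taken from the original post):

    # Membership and quorum as cman sees them (two-node mode, no qdisk)
    cman_tool status
    cman_tool nodes

    # One-shot Pacemaker status, including inactive resources (-r) and
    # per-resource fail counts (-f)
    crm_mon -1 -r -f

    # Kernel-side evidence of GFS2/DLM trouble, e.g. a mount blocked on
    # fencing or lock recovery
    dmesg | egrep -i 'gfs2|dlm'

    # Once the underlying cause is found and fixed, clear WebFS's failed
    # state so Pacemaker will try to start it again
    crm_resource --cleanup --resource WebFS

One detail worth checking with this output: a GFS2 mount blocks until DLM
lock recovery completes, and DLM recovery in turn waits for any failed node
to be fenced, so the cman status and dmesg lines usually show whether the
mount is stuck waiting on fencing or on quorum.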