Hi,

> Is this possible? I would think that if a node has properly/cleanly left
> the cluster, locks that were held by that node would be released. Is there
> a way to display locks that may be still existing for that node that is
> down? And lastly, is there a way to force the release of those locks with
> out the reboot of the cluster? I've been searching the linux-cluster
> archives with little success.

The best thing is to fix the initial problem, but as a workaround you may
try to run fence_node against that node from one of the other machines in
the cluster, even if it has left cleanly - this should clean up the locks
held by that node.

For seeing the locks, you may use "gfs(2)_tool lockdump <mount_point>"
(gfs_tool for GFS, gfs2_tool for GFS2), or read them via debugfs by
mounting it somewhere.

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
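A rough sketch of the workaround, run from a surviving cluster member; the node name "node2" and the mount point /mnt/gfs2 are hypothetical placeholders, and the exact debugfs layout depends on your kernel version:

```shell
# 1. Force-fence the departed node to make the cluster clean up any
#    locks it still holds (run from another member of the cluster):
fence_node node2

# 2. Dump the lock state for the mounted GFS2 filesystem
#    (use gfs_tool instead of gfs2_tool on GFS1):
gfs2_tool lockdump /mnt/gfs2

# 3. Alternatively, inspect glock state via debugfs: mount it if it is
#    not already mounted, then read the per-filesystem glocks file.
mount -t debugfs none /sys/kernel/debug
cat /sys/kernel/debug/gfs2/*/glocks
```

Fencing a node that has already left cleanly is normally harmless, but double-check your fence device configuration first so you do not power-cycle the wrong machine.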