>> You might check that /etc/lvm/lvm.conf has locking_type = 3 as described >> in FAQ #22 here. It defaults to local locking. I'm still using type 2 which was how it was working before I changed the storage. But again, nothing was changed, only the drives. >> Also, I've found that cluster suite components use a variety of IP ports >> for communication, and if you've selected a standard firewall I don't use any firewall's inside the private network so it's not that either. >> configuration you may be blocking them. For testing, I'd try shutting >> down the CS services (service cman stop, etc), then run "iptables -F" on I don't have iptables running on the internal machines, only the load balanced front end machines but this is not that network. >> doing a netstat -na before and after this process and compare the ports Yup, I've posted my findings on this also and it seems that there is something weird there. Dlm complains about the port not being available yet it's open and listening. > also try clustat command > how about disabling selinux? I odn't use selinux on any of the internal stuff but good call there too. > (ducking under the table) have you tried system-config-lvm gui tool > there is also a setting in lvm.conf which allows/disallows certain devices > that can/cannot appear as storage checkout the settings All good ideas, can't thank you enough. I've posted my latest findings. At this point, I took the storage off of the nodes and am looking at it from one node only using single node functions. Even with just the one node, I'm still seeing similar problems. I'm going to re-create the storage and see what happens but I doubt it's going to help. Mike -- Linux-cluster mailing list Linux-cluster@xxxxxxxxxx https://www.redhat.com/mailman/listinfo/linux-cluster