On 06/04/2013 10:29 PM, Matthew Nicholson wrote:
> So it sees something is holding the lock, and rejects it.
>
> If I look up that uuid:
>
> [root@ox60-gstore10 ~]# gluster peer status | grep 0edce15e-0de2-4496-a520-58c65dbbc7da --context=3
> Number of Peers: 20
>
> Hostname: ox60-gstore10
> Uuid: 0edce15e-0de2-4496-a520-58c65dbbc7da
> State: Peer in Cluster (Connected)

This looks like a case of a server being a peer of itself, which is not
required. The following steps, performed on ox60-gstore10, might help:

a) Take a backup of /var/lib/glusterd.
b) Stop glusterd.
c) Remove the file named 0edce15e-0de2-4496-a520-58c65dbbc7da from
   /var/lib/glusterd/peers/.
d) Restart glusterd.

(A shell sketch of these steps is in the P.S. below.)

After this, ox60-gstore10 should no longer appear in the output of
"gluster peer status" run on ox60-gstore10 itself, but it should still
appear in the output on the other nodes of the cluster. Once that state
is reached, volume operations should proceed.

How did the setup get into this state? Was a self probe attempted, or was
/var/lib/glusterd cloned from one of its peers?

-Vijay
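
P.S. A minimal command-level sketch of steps (a)-(d). This assumes
glusterd is managed as a SysV-style service (on systemd-based systems,
substitute "systemctl stop glusterd" / "systemctl start glusterd"); the
backup file name is just an example:

    # On ox60-gstore10, as root:
    cp -a /var/lib/glusterd /var/lib/glusterd.bak                    # a) back up glusterd state
    service glusterd stop                                            # b) stop glusterd
    rm /var/lib/glusterd/peers/0edce15e-0de2-4496-a520-58c65dbbc7da  # c) remove the self-peer entry
    service glusterd start                                           # d) restart glusterd
    gluster peer status    # verify: the node should no longer list itself as a peer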