The lock is an in-memory structure that isn't persisted, so restarting glusterd should reset it. You could possibly reset the lock by attaching gdb to the glusterd process. Since this is happening to you consistently, something else is wrong. Could you please give more details on your cluster, and the glusterd logs of the misbehaving peer (if possible, of all the peers)? That would help in tracking it down.

On Tue, Mar 18, 2014 at 12:24 PM, Franco Broi <franco.broi@xxxxxxxxxx> wrote:
>
> Restarted the glusterd daemons on all 4 servers, still the same.
>
> It only and always fails on the same server, and it always works on the
> other servers.
>
> I had to reboot the server in question this morning; perhaps it's got
> itself into a funny state.
>
> Is the lock something that can be examined? And removed?
>
> On Tue, 2014-03-18 at 12:08 +0530, Kaushal M wrote:
>> This mostly occurs when you run two gluster commands simultaneously.
>> Gluster uses a lock on each peer to synchronize commands. Any command
>> that needs to operate on multiple peers first acquires this lock and
>> releases it after the operation completes. If a command cannot acquire
>> the lock because another command holds it, it fails with the above
>> error message.
>>
>> It sometimes happens that a command fails to release the lock on some
>> peers. When this happens, all further commands that need the lock will
>> fail with the same error. In that case your only option is to restart
>> glusterd on the peers holding the stale lock. This will not cause any
>> downtime, as the brick processes are not affected by restarting
>> glusterd.
>>
>> In your case, since you can run commands on other nodes, most likely
>> you are running commands simultaneously, or at least starting a new
>> command before an old one finishes.
>>
>> ~kaushal
>>
>> On Tue, Mar 18, 2014 at 11:24 AM, Franco Broi <franco.broi@xxxxxxxxxx> wrote:
>> >
>> > What causes this error? And how do I get rid of it?
>> >
>> > [root@nas4 ~]# gluster vol status
>> > Another transaction could be in progress. Please try again after sometime.
>> >
>> > Looks normal on any other server.
>> >
>> > _______________________________________________
>> > Gluster-users mailing list
>> > Gluster-users@xxxxxxxxxxx
>> > http://supercolony.gluster.org/mailman/listinfo/gluster-users
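[The cluster-lock behaviour described above can be illustrated with a toy model. This is a hypothetical Python sketch of the semantics only, not glusterd's actual implementation, which is written in C and coordinates the lock across peers over RPC: each peer holds a non-blocking lock, a command must acquire it on every peer before proceeding, and a lock left held by a previous command makes later commands fail with the familiar error.]

```python
import threading

class PeerLock:
    """Toy model of glusterd's per-peer cluster lock (illustrative only)."""

    def __init__(self):
        self._lock = threading.Lock()

    def acquire(self):
        # Like glusterd, fail immediately rather than wait if the lock is held.
        if not self._lock.acquire(blocking=False):
            raise RuntimeError("Another transaction could be in progress. "
                               "Please try again after sometime.")

    def release(self):
        self._lock.release()

def run_command(peers, op):
    """Acquire the lock on every peer, run the operation, then release.

    If any peer's lock is already held (e.g. left stale by a failed
    command), the whole command fails with the error above.
    """
    locked = []
    try:
        for peer in peers:
            peer.acquire()
            locked.append(peer)
        return op()
    finally:
        # Release only the locks we actually took.
        for peer in locked:
            peer.release()
```

[In this model, a "stale lock" is simply a `PeerLock` that was acquired and never released; every subsequent `run_command` then fails on that peer, while peers without the stale lock keep working, matching the behaviour Franco describes. Restarting glusterd maps to recreating the in-memory lock object.]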