Tomasz,

Glusterd version 3.2.6 doesn't handle concurrently issued volume commands gracefully and is known to end up in situations like the one you describe below. This was fixed early in the development of what we informally refer to as 3.3.0.
[Ref: https://bugzilla.redhat.com/show_bug.cgi?id=GLUSTER-3320]

Having said that, the gluster CLI's command semantics permit only one volume command (create, start, stop, etc.) to run on the cluster (storage pool) at a time. Even with the fix for the bug referred to above (in master), when two gluster commands are issued in parallel, both of them _may_ fail. The fix only ensures that you don't end up with any breakage: it is as though the commands 'collided' and both aborted themselves.

Hope that helps,
krish

----- Original Message -----
From: "Tomasz Chmielewski" <mangoo at wpkg.org>
To: "Gluster General Discussion List" <gluster-users at gluster.org>
Sent: Friday, May 25, 2012 11:16:09 PM
Subject: "gluster volume replace-brick ... status" breaks when executed on multiple nodes

I executed "gluster volume replace-brick ... status" on multiple peers at the same time, which resulted in quite an interesting breakage.

It's no longer possible to pause/abort/status/start the replace-brick operation. Please advise. I'm running glusterfs 3.2.6.

root at ca2:~# gluster volume replace-brick sites ca1-int:/data/glusterfs ca2-int:/data/ca1 status
replace-brick status unknown

root at ca2:~# gluster volume replace-brick sites ca1-int:/data/glusterfs ca2-int:/data/ca1 pause
replace-brick pause failed

root at ca2:~# gluster volume replace-brick sites ca1-int:/data/glusterfs ca2-int:/data/ca1 abort
replace-brick abort failed

root at ca2:~# gluster volume replace-brick sites ca1-int:/data/glusterfs ca2-int:/data/ca1 start
replace-brick failed to start

--
Tomasz Chmielewski
http://www.ptraveler.com

_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
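
[Editor's note: since the thread's advice is that only one volume command may run on the pool at a time, a practical workaround is to serialize your own gluster invocations. Below is a minimal sketch using flock(1); the lock file path /var/run/gluster-cli.lock and the wrapper name are illustrative assumptions, not anything from the thread.]

    #!/bin/sh
    # gluster-locked.sh (hypothetical wrapper): run a gluster volume command
    # under an exclusive lock so that two commands issued from this node can
    # never overlap. flock(1) blocks until the lock is free, then runs the
    # wrapped command and releases the lock when it exits.
    # Lock file path is an arbitrary choice for illustration.
    exec flock /var/run/gluster-cli.lock gluster "$@"

    # Usage, with the commands from the report above:
    #   ./gluster-locked.sh volume replace-brick sites \
    #       ca1-int:/data/glusterfs ca2-int:/data/ca1 status

Note that flock only serializes commands issued from a single node; commands run from different peers at the same time (as in the report) would still need to be coordinated operationally, for example by issuing all volume commands from one designated node.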