Thanks Tom and Joe,
for the fast response!

Before I started my upgrade I stopped all clients using the volume and shut down all VMs with VHDs on the volume, but (and this may be the missing piece for reproducing this in a lab) I did not detach the NFS shared-storage mount from the XenServer pool to this volume, since that is an extremely risky business. I also did not stop the volume. That was probably a bit careless, but since I had done upgrades this way in the past without any issues I skipped this step (a really bad habit). I'll make amends and file a proper bug report :-).

I agree with you, Joe, this should never happen, even when someone ignores the advice to stop the volume. If it were also necessary to detach shared-storage NFS connections to a volume, then frankly GlusterFS would be unusable in a private cloud. No one can afford downtime of the whole infrastructure just for a GlusterFS upgrade. Ideally a replicated Gluster volume should even be able to remain online and in use during (at least a minor-version) upgrade.

I don't know whether a heal happened to be busy when I started the upgrade; I forgot to check. I did check the CPU activity on the Gluster nodes, which was very low (in the 0.0X range via top), so I doubt it. I will add this to the bug report as a suggestion, should they not be able to reproduce it with an open NFS connection.

By the way, is it sufficient to do:

  service glusterd stop
  service glusterfsd stop

and then:

  ps aux | grep gluster

to see whether everything has stopped, and kill any leftovers should that be necessary? (I have put a sketch of what I mean below my signature.)

For the fix, do you agree that if I run e.g.:

  find /export/* -type f -size 0 -perm 1000 -exec /bin/rm {} \;

on every node (with /export being the location of all my bricks), also in a replicated set-up, this will be safe? No needed 0-byte files will be deleted, e.g. in the .glusterfs directory of each brick? (A sketch of the dry run I have in mind is also below.)

Thanks for your support!

Cheers,
Olav
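P.S. This is roughly the shutdown check I had in mind; just a sketch, and the pkill part is my own guess at how to clean up leftovers rather than anything from the upgrade notes:

  # stop the management daemon and the brick processes
  service glusterd stop
  service glusterfsd stop

  # list anything gluster-related that is still running
  # (the [g] trick keeps the grep process itself out of the output)
  ps aux | grep '[g]luster'

  # if brick, self-heal or NFS-server processes are still hanging
  # around, kill them explicitly, e.g.:
  pkill glusterfs
  pkill glusterfsd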
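And this is the dry run I plan to do before actually deleting anything; the exclusion of .glusterfs is my own assumption (to stay away from the gfid hard links), so please correct me if it is unnecessary or wrong:

  # dry run: only list the 0-byte sticky-bit files (DHT link files),
  # skipping the .glusterfs metadata directory of each brick
  find /export/* -type f -size 0 -perm 1000 \
      -not -path '*/.glusterfs/*' -print

  # only after reviewing that list, remove them for real:
  # find /export/* -type f -size 0 -perm 1000 \
  #     -not -path '*/.glusterfs/*' -exec /bin/rm -v {} \;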
On 18/02/15 20:51, Joe Julian wrote: