Hi all, for the upgrade I followed this procedure: on each server, after every update, I ran 'gluster --version' to confirm the upgrade, and at the end I ran 'gluster volume set all cluster.op-version 30800'.
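For reference, these are the commands, roughly as I ran them (the glusterd.info grep was just my own sanity check, not part of the documented procedure):

    # on each node, confirm the newly installed version
    gluster --version

    # once every node was on 3.8.12, bump the cluster-wide op-version
    gluster volume set all cluster.op-version 30800

    # sanity check: the operating version recorded by glusterd
    grep operating-version /var/lib/glusterd/glusterd.info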
Today I tried to manually kill a brick process on a non-critical volume; after that, the brick log shows:

[2017-06-29 07:03:50.074388] I [MSGID: 100030] [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfsd: Started running /usr/sbin/glusterfsd version 3.8.12 (args: /usr/sbin/glusterfsd -s virtnode-0-1-gluster --volfile-id iso-images-repo.virtnode-0-1-gluster.data-glusterfs-brick1b-iso-images-repo -p /var/lib/glusterd/vols/iso-images-repo/run/virtnode-0-1-gluster-data-glusterfs-brick1b-iso-images-repo.pid -S /var/run/gluster/c779852c21e2a91eaabbdda3b9127262.socket --brick-name /data/glusterfs/brick1b/iso-images-repo -l /var/log/glusterfs/bricks/data-glusterfs-brick1b-iso-images-repo.log --xlator-option *-posix.glusterd-uuid=e93ebee7-5d95-4100-a9df-4a3e60134b73 --brick-port 49163 --xlator-option iso-images-repo-server.listen-port=49163)

I checked after this restart and the 'entry-changes' directory is indeed now created. But why did stopping the glusterd service not also stop the brick processes? And how can I recover from this issue now? Is restarting all the brick processes enough?
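For what it's worth, this is what I was planning to try unless there is a better way: use 'gluster volume status' to spot the brick PIDs that still belong to processes started before the upgrade, then force-start each affected volume so that glusterd respawns those bricks from the new binary:

    # list bricks with their PIDs and ports
    gluster volume status

    # respawn any brick of VOLNAME that is not running (after killing the old one)
    gluster volume start VOLNAME force

My understanding is that 'volume start ... force' only spawns bricks that are down and leaves the running ones untouched, but please correct me if that is not safe here.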
Greetings,

Paolo Margara

On 28/06/2017 18:41, Pranith Kumar Karampuri wrote:
--
LABINF - HPC@POLITO
DAUIN - Politecnico di Torino
Corso Castelfidardo, 34D - 10129 Torino (TO)
phone: +39 011 090 7051
site: http://www.labinf.polito.it/
site: http://hpc.polito.it/