Having "expanding volume corruption" issue fixed only in 3.13 brunch you better off recreating the thing use the trick mentioned here http://lists.gluster.org/pipermail/gluster-users/2018-January/033352.html kill volume, reset attributes, delete .glusterfs, add new and run stat seems that whoever wrote heal did a good job On Wed, Jan 24, 2018 at 8:50 AM, Hoggins! <fuckspam@xxxxxxxxxxx> wrote: > Hello, > > The subject says it all. I have a replica 3 cluster : > > gluster> volume info thedude > > Volume Name: thedude > Type: Replicate > Volume ID: bc68dfd3-94e2-4126-b04d-77b51ec6f27e > Status: Started > Snapshot Count: 0 > Number of Bricks: 1 x 3 = 3 > Transport-type: tcp > Bricks: > Brick1: ngluster-1.network.hoggins.fr:/export/brick/thedude > Brick2: ngluster-2.network.hoggins.fr:/export/brick/thedude > Brick3: ngluster-3.network.hoggins.fr:/export/brick/thedude > Options Reconfigured: > cluster.server-quorum-type: server > transport.address-family: inet > nfs.disable: on > performance.readdir-ahead: on > client.event-threads: 8 > server.event-threads: 15 > > > ... and I would like to replace, say ngluster-2 with an arbiter-only > node, without any data. Is that possible ? How ? > > Thanks ! > > Hoggins! > > > _______________________________________________ > Gluster-users mailing list > Gluster-users@xxxxxxxxxxx > http://lists.gluster.org/mailman/listinfo/gluster-users _______________________________________________ Gluster-users mailing list Gluster-users@xxxxxxxxxxx http://lists.gluster.org/mailman/listinfo/gluster-users