Hi,

I set up a 4+2 dispersed volume and it has worked well so far.

    gluster volume info

    Volume Name: disperseVol
    Type: Disperse
    Volume ID: 35386b55-829c-4bac-bdba-609427269cf4
    Status: Started
    Snapshot Count: 0
    Number of Bricks: 1 x (4 + 2) = 6
    Transport-type: tcp
    Bricks:
    Brick1: 192.168.129.227:/mnt/gluster/disperseVol
    Brick2: 192.168.130.4:/mnt/gluster/disperseVol
    Brick3: 192.168.130.2:/mnt/gluster/disperseVol
    Brick4: 192.168.129.2:/mnt/gluster/disperseVol
    Brick5: 192.168.130.3:/mnt/gluster/disperseVol
    Brick6: 192.168.129.218:/mnt/gluster/disperseVol
    Options Reconfigured:
    nfs.disable: on
    transport.address-family: inet
    storage.fips-mode-rchecksum: on
    features.bitrot: on
    features.scrub: Active

Now two hosts (.130.4, .130.3) have burned down and their two bricks are gone. The volume still works fine, but I'm unable to replace the vanished bricks to regain redundancy.

I followed the Gluster docs, added a new peer, and tried:

    gluster volume replace-brick disperseVol \
        192.168.130.4:/mnt/gluster/disperseVol \
        192.168.130.6:/glusterPool/disperseVol \
        commit force

but this fails with:

    volume replace-brick: failed: Pre Validation failed on 192.168.130.6. \
    brick: 192.168.130.4:/mnt/gluster/disperseVol does not exist in volume: disperseVol

So I have no idea how to continue (except: shred it all, start from scratch, and restore from backup, but there must be a better solution).

Thanks in advance

________

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk

Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users