You have replica 2, so you can't really take 50% of your cluster down
without turning off quorum (and risking split brain). So detaching the
rebuilding peer really isn't an option.
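If you want to double-check what quorum is actually being enforced
before planning any downtime, something like this should show it (the
volume name 'gv0' below is just a placeholder for yours):

  # client-side quorum -- this is what rejects writes when too many
  # replicas are down
  gluster volume get gv0 cluster.quorum-type
  # server-side (glusterd) quorum
  gluster volume get gv0 cluster.server-quorum-type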
If you had replica 3 or an arbiter, you COULD detach or isolate the
problem peer. I've done things like change the Gluster network IP on
the 'bad' peer to help speed up a RAID6 rebuild that wasn't happy with
the gluster heal process running at the same time.
Your data would still be available and fully functional on the
remaining peer (though you'd lose redundancy while it's isolated).
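For what it's worth, the isolation doesn't have to be a peer detach;
blocking the gluster ports on the storage network gets you roughly the
same effect as the IP change I did. Rough, untested sketch, assuming
the gluster traffic comes in on eth1 and the bricks sit on the usual
49152+ port range (run these ON the bad peer):

  # keep glusterd (24007) and the bricks unreachable while the RAID
  # rebuild runs; check 'gluster volume status' for your actual
  # brick ports and adjust the range
  iptables -A INPUT -i eth1 -p tcp --dport 24007 -j REJECT
  iptables -A INPUT -i eth1 -p tcp --dport 49152:49251 -j REJECT
  # swap -A for -D to remove the rules when you're ready to heal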
Then once the RAID rebuild had caught up, you could return the peer to
the cluster and do a final 'heal'.
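That last step is just the normal heal commands, e.g. (same
placeholder volume name as above):

  gluster volume heal gv0          # kick off an index heal
  gluster volume heal gv0 info     # watch the pending-heal list drain
  gluster volume heal gv0 full     # only if you think the index missed files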
-bill
On 10/9/2017 2:32 AM, ML wrote:
Hi everyone,
I've been using gluster for a few months now, on a simple 2-peer
replicated infrastructure, 22TB each.
One of the peers was offline for 10 hours last week (RAID resync after
a disk crash), and while my gluster server was healing bricks, I could
see some write errors on my gluster clients.
I couldn't find a way to isolate my healing peer in the documentation
or anywhere else.
Is there a way to avoid that? Detach the peer while healing? Some
tuning on the client side maybe?
I'm using Gluster 3.9 on Debian 8.
Thank you for your help.
Quentin
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users