Replica 3 cluster, file being healed on all 3 nodes

I have been testing failure modes by killing the gluster processes on a node (killall glusterd glusterfsd).

I was pleasantly surprised by how much smoother gluster has become at this since 3.5; the heal takes a while, but it no longer kills cluster performance. It probably helps that I'm using replica 3 now rather than replica 2.
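
(For reference, heal progress can be watched from any surviving node with the standard heal commands; <VOLNAME> below is a placeholder for the actual volume name:)

    # entries still pending heal, listed per brick
    gluster volume heal <VOLNAME> info

    # rough progress indicator: count of pending entries per brick
    gluster volume heal <VOLNAME> statistics heal-count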

However, I managed to create a state where a file was being healed on all three nodes (probably by live migrating a VM while it was being healed). I didn't think that was possible without creating a split-brain problem, but it eventually got all the way to being healed.
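
(In case it helps anyone looking at this: whether a file is genuinely in split-brain, rather than just queued for heal on more than one brick, can be checked with the commands below. <VOLNAME> and the brick path/file name are placeholders; the trusted.afr.* xattrs are AFR's pending-changelog counters for the other bricks.)

    # files gluster itself considers split-brain
    gluster volume heal <VOLNAME> info split-brain

    # inspect the AFR changelog xattrs for the file directly on one brick
    getfattr -d -m . -e hex /path/to/brick/images/vm-disk.qcow2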


Is this the diff algorithm in play? Is it comparing blocks on all three nodes and using the ones that are identical across two or three of them? If it encountered a block that was different on all three nodes, would that result in split-brain?
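
(The data heal algorithm can at least be pinned explicitly; my understanding is that "diff" checksums the file in chunks and only copies the mismatching blocks from the chosen source, while "full" copies the entire file. <VOLNAME> is a placeholder again:)

    # show the current setting ("volume get" needs a reasonably recent gluster)
    gluster volume get <VOLNAME> cluster.data-self-heal-algorithm

    # force the checksum/block-based heal
    gluster volume set <VOLNAME> cluster.data-self-heal-algorithm diff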



--
Lindsay
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
