You may be misunderstanding some of the details of how gluster works here, but you've got the right idea overall. Since gluster is maintaining 3 copies of your data, you can lose a drive or a whole system and things will keep going without interruption (well, mostly: if a host node was mounting the volume from the server that just died, it may pause briefly before reconnecting to one that is still running, via the backup-server mount option or your DNS configs; I've put a few example commands in a P.S. below). While the system is still going with one node down, that node is falling behind on new disk writes, and the remaining ones are keeping track of what's changing.

Once you repair/recover/reboot the down node, it will rejoin the cluster. Now the recovered system has to catch up, and it does this by having the other two nodes send it the changes. In the meantime, gluster serves any reads for that data from one of the up-to-date nodes, even if you ask the one you just restarted. In order to do this healing, it has to lock the files to ensure no changes are made while it copies a chunk of them over to the recovered node. When it locks them, your hypervisor notices they have gone read-only and, especially if it has a pending write for one of those files, may pause the VM, because this looks like a storage failure to it. Once a file gets unlocked, it can be written again, and your hypervisor notices and will generally reactivate your VM. You may see delays too, especially if you only have 1G networking between your host nodes while everything is getting copied around. And your files could be locked, updated, unlocked, then locked again a few seconds or minutes later, and so on.

That's where sharding comes into play: once a file is broken up into shards, gluster can get away with locking only the particular shard it needs to heal, leaving the rest of the disk image writable. You may still catch a brief pause if you try to write the specific segment of the file gluster is healing at that moment, but the heal is also going to be much faster, because a small chunk of the file copies quickly.

Also, check out https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/server-quorum/; you probably want to set cluster.server-quorum-ratio to 50 for a replica-3 setup to avoid the possibility of split-brain. Your cluster will go read-only if it loses two nodes, though, but you can always change the server-quorum-ratio later if you need to keep it running temporarily.

Hope that makes sense of what's going on for you,

-Darrell
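P.S. A few example commands, in case they help; the volume name "gv0" and the host names are placeholders, so adjust for your setup. For the backup-server setting I mentioned, on reasonably recent gluster versions a fuse mount can be given fallback servers to fetch the volume config from if the first one is down:

    mount -t glusterfs -o backup-volfile-servers=gluster2:gluster3 gluster1:/gv0 /mnt/gv0

or the fstab equivalent:

    gluster1:/gv0  /mnt/gv0  glusterfs  defaults,_netdev,backup-volfile-servers=gluster2:gluster3  0 0

That list is only used to fetch the volume layout at mount time; once mounted, the client talks to all the bricks directly, which is why a node dying usually means a brief pause rather than an outage.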
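While a recovered node is catching up, you can watch what is still pending with:

    gluster volume heal gv0 info

and check which bricks are up with:

    gluster volume status gv0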
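Sharding itself is just a couple of volume options, roughly:

    gluster volume set gv0 features.shard on
    gluster volume set gv0 features.shard-block-size 64MB

64MB is the default shard size, and people running VM images often go larger. One caveat: enable it before you put disk images on the volume, since existing files are not re-sharded, and never switch it off again on a volume that already holds sharded files.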
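And the quorum bits from that doc link; note that server-quorum-ratio is cluster-wide, so it gets set on "all" rather than on one volume:

    gluster volume set gv0 cluster.server-quorum-type server
    gluster volume set all cluster.server-quorum-ratio 50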