On Thu, 2020-08-13 at 05:49 -0400, Ashish Pandey wrote:
> With 4 nodes, yes, it is possible to use a disperse volume.
> Redundancy count 2 is not the best, but it is the one most often used,
> as far as my interaction with users goes.
> A disperse volume with 4 bricks is also possible, but it might not be
> the best configuration.
> I would suggest having 6 bricks in a 4+2 configuration,
> where 4 are data bricks and 2 are redundant bricks; in other words, the
> maximum number of bricks that can go bad while you can still use the
> disperse volume.
>
> If you have a number of disks on the 4 nodes, you can create the 4+2
> disperse volume in a different way while maintaining the requirement
> of EC (disperse volume).
Thank you for your reply. I finally received my 4th disk and I started to experiment with different modes.
But it seems I can't do much with 4 bricks (while using them all). My idea was to have a 3+1 setup, so that one node (brick) can fail and everything still works without losing the minimum quorum of 3.
But disperse with redundancy doesn't really accommodate this. At least one brick must be set aside for redundancy, but then the RMW (Read-Modify-Write) cycle is not efficient: 512 * (4 - 1) = 1536 bytes. And setting 2 disks aside for redundancy is not recommended in terms of split-brain scenarios; an uneven number needs to be configured, i.e. not 2 or 4.
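To make the arithmetic concrete, here is a small sketch (the formula 512 * (#bricks - redundancy) is from the docs; the helper function name is my own) comparing the RMW stripe sizes of a few candidate layouts:

```shell
# RMW stripe size = chunk size (512 bytes) * number of data bricks,
# i.e. 512 * (#bricks - redundancy). Powers of 2 line up with typical
# application block sizes; 1536 does not.
rmw_stripe() {
    local bricks=$1 redundancy=$2
    echo $(( 512 * (bricks - redundancy) ))
}

rmw_stripe 4 1   # 3+1 -> 1536 (not a power of 2)
rmw_stripe 4 2   # 2+2 -> 1024
rmw_stripe 6 2   # 4+2 -> 2048
```

This is also why the 4+2 layout (2048 bytes) keeps coming up as the suggested sweet spot: the stripe stays a power of 2 while two bricks may fail.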
A replica count of 4 is also not allowed, since there has to be a majority in the quorum. So an uneven number is required, which 4 is not. Using arbiters makes no difference in this context (of course).
How would I best achieve a 3+1 setup? To maintain a running system without split-brain, I need at least 3 nodes; with 4, one should be able to fail. But the modes I've explored here do not seem to support that. So maybe there is an option to keep a disk on standby?
Performance and disk efficiency are of course always nice too. But I'm wondering now if 4 disks is even possible at all.
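For what it's worth, a 4+2 disperse volume can be laid out on 4 nodes by giving two of the nodes two bricks each, which I believe is what Ashish is hinting at with "a different way". A hypothetical sketch (the volume name, hostnames, and brick paths are made up):

```shell
# 6 bricks on 4 nodes: pi1 and pi2 carry two bricks each.
# If a node with two bricks fails, it takes out two fragments at once,
# which consumes the full redundancy of 2: the volume stays readable,
# but no further failure can be tolerated until it heals.
gluster volume create gv0 disperse 6 redundancy 2 \
    pi1:/data/brick1 pi1:/data/brick2 \
    pi2:/data/brick1 pi2:/data/brick2 \
    pi3:/data/brick1 \
    pi4:/data/brick1 \
    force   # needed because some bricks share a server
```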
On Thu, Aug 13, 2020 at 05:49, Ashish Pandey <aspandey@xxxxxxxxxx> wrote:
From: "K. de Jong" <kees.dejong+lst@xxxxxxxxxx>
To: gluster-users@xxxxxxxxxxx
Sent: Thursday, August 13, 2020 11:43:03 AM
Subject: 4 node cluster (best performance + redundancy setup?)

I posted something in the subreddit [1], but I saw the suggestion elsewhere that the mailing list is more active.

I've been reading the docs. From this [2] overview, distributed replicated [3] and dispersed + redundancy [4] sound the most interesting.

Each node (Raspberry Pi 4, 2x 8GB and 2x 4GB version) has a 4TB HD disk attached via a docking station. I'm still waiting for the 4th Raspberry Pi, so I can't really experiment with the intended setup. But the setup of 2 replicas and 1 arbiter was quite disappointing: I got between 6 MB/s and 60 MB/s, depending on the test (I did a broad range of tests with bonnie++ and simply dd). Without GlusterFS, a simple dd of a 1GB file is about 100+ MB/s. 100 MB/s is okay for this cluster.

My goal is the following:
* Run a HA environment with Pacemaker (services like Nextcloud, Dovecot, Apache).
* One node should be able to fail without downtime.
* Performance and storage efficiency should be reasonable with the given hardware. By that I mean: when everything is a replica, storage is stuck at 4TB. I would prefer to have some more than that limitation, but with redundancy.

However, when reading the docs about disperse, I see some interesting points. A big pro is "providing space-efficient protection against disk or server failures". But the following is interesting as well: "The total number of bricks must be greater than 2 * redundancy". So, I want the cluster to be available when one node fails, and to be able to recreate the data on a new disk on that fourth node. I also read about the RMW efficiency; I guess 2 sets of 2 is the only thing that will work with that performance and disk efficiency in mind, because 1 redundancy would mess up the RMW cycle.

My questions:

* With 4 nodes: is it possible to use disperse and redundancy? And is a redundancy count of 2 the best (and only) choice when dealing with 4 disks?

With 4 nodes, yes, it is possible to use a disperse volume.
Redundancy count 2 is not the best, but it is the one most often used, as far as my interaction with users goes.
A disperse volume with 4 bricks is also possible, but it might not be the best configuration.
I would suggest having 6 bricks in a 4+2 configuration, where 4 are data bricks and 2 are redundant bricks; in other words, the maximum number of bricks that can go bad while you can still use the disperse volume.
If you have a number of disks on the 4 nodes, you can create the 4+2 disperse volume in a different way while maintaining the requirement of EC (disperse volume).

* The example does show a 4 node disperse command, but it has as output `There isn't an optimal redundancy value for this configuration. Do you want to create the volume with redundancy 1 ? (y/n)`. I'm not sure if it's okay to simply select 'y' as an answer. The output is a bit vague; it says it's not optimal, so it will just be slow but will work, I guess?

It will not be optimal from the point of view of the calculation which we make.
You want to have a configuration where you can have maximum redundancy (failure tolerance) and also maximum storage capacity.
In that regard, it will not be an optimal solution. Performance can also be a factor.

* The RMW (Read-Modify-Write) cycle is probably what's meant. 512 * (#Bricks - redundancy) would in this case be 512 * (4 - 1) = 1536 bytes, which doesn't seem optimal, because it's a weird number; it's not a power of 2 (512, 1024, 2048, etc.). Choosing a redundancy of 2 would translate to 1024, which would seem more "okay". But I don't know for sure.

Yes, you are right.

* Or am I better off simply creating 2 pairs of replicas (so no disperse)? In that sense I would have 8TB available, and one node can fail. This would provide some read performance benefits.

* What would be a good way to integrate this with Pacemaker? By that I mean: should I manage the gluster resource with Pacemaker? Or simply try to mount the glusterfs; if it's not available, then depending resources can't start anyway. In other words, let glusterfs handle failover itself.

Gluster can handle failover on the replica or disperse level as per its implementation.
Even if you want to go for replica, replica 2 does not look like the best option; you should go for replica 3 or an arbiter volume to have the best fault tolerance.
However, that will cost you a lot of storage capacity.

Any advice/tips?

[1]
[2]
[3]
[4]
________

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users