Re: Replica 3 scale out and ZFS bricks

It is not usual to add a single node, as it triggers a lot of rebalancing/healing, which takes a long time with large bricks.
Usually Red Hat recommends building a brick like this:
- 12 disks (2-3 TB) in RAID6
- 10 disks in RAID10

I see many users using 10 TB+ disks, but this leads to very long healing times, so keep that in mind.

You can check https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/chap-configuring_red_hat_storage_for_enhancing_performance for more details.
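For illustration only, here is a rough sketch of what such an expansion looks like on the CLI (the volume name 'gvol0', the hostnames and the brick paths below are just placeholders): on a replica 3 volume, bricks are added in sets of three to form a new replica subvolume, and the rebalance afterwards is the long-running part.

    # Add one new replica set (three bricks); with only a single new node the
    # remaining bricks of the set have to live on existing nodes (placeholder names):
    gluster volume add-brick gvol0 newnode:/bricks/b1 server1:/bricks/b2 server2:/bricks/b3

    # Spread existing data onto the new bricks - this is the slow step with large bricks:
    gluster volume rebalance gvol0 start
    gluster volume rebalance gvol0 status

    # Keep an eye on pending heals:
    gluster volume heal gvol0 info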

Best Regards,
Strahil Nikolov






On Friday, September 18, 2020 at 17:01:30 GMT+3, Alexander Iliev <ailiev+gluster@xxxxxxxxx> wrote:





On 9/17/20 4:47 PM, Strahil Nikolov wrote:

>   I guess I misunderstood you - if I decode the diagram correctly, it should be OK: you will always have at least 2 bricks available after a node goes down.
> 
> It would be way simpler if you added a 5th node (probably a VM) as an arbiter and switched to 'replica 3 arbiter 1'.


Yep, I would add an arbiter node in this case.
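For reference, the documented CLI form for bringing in an arbiter brick (this is the replica 2 to 'replica 3 arbiter 1' conversion from the admin guide; the volume name and brick path are placeholders, and the exact steps would of course depend on our current brick layout):

    # Add one arbiter brick per replica subvolume (placeholder host/path):
    gluster volume add-brick gvol0 replica 3 arbiter 1 arbiternode:/bricks/arb1

    # Verify that the arbiter brick shows up in the volume layout:
    gluster volume info gvol0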

What I wanted to confirm is that my understanding of how GlusterFS scales
is correct, specifically that a volume can be expanded by adding one
storage node to the current setup.

Thanks, Strahil.

Best regards,
--
alexander iliev

________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users



