Re: Fwd: New GlusterFS deployment, doubts on 1 brick per host vs 1 brick per drive.

On Thu, 10 Sep 2020 at 21:53, Gionatan Danti <g.danti@xxxxxxxxxx> wrote:
On 2020-09-09 15:30, Miguel Mascarenhas Filipe wrote:
> I'm setting up GlusterFS on 2 hosts w/ the same configuration, 8 HDDs. This
> deployment will grow later on.

Hi, I really suggest avoiding a replica 2 cluster unless it is for
testing only. Be sure to add an arbiter at least (using a replica 2
arbiter 1 cluster).
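
A rough sketch of such a layout, in case it helps (host names and brick paths
below are placeholders; recent Gluster releases express the 2 data + 1 arbiter
layout as "replica 3 arbiter 1"):

    # replica 2 + arbiter: the third brick stores only metadata,
    # so it can live on a much smaller machine
    gluster volume create gv0 replica 3 arbiter 1 \
        host1:/bricks/brick1/data \
        host2:/bricks/brick1/data \
        host3:/bricks/arbiter1/data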

> I'm undecided between these different configurations and am seeking
> comments or advice from more experienced users of GlusterFS.
>
> Here is the summary of 3 options:
> 1. 1 brick per host, Gluster "distributed" volumes, internal
> redundancy at brick level

I strongly advise against it: any server reboot will cause trouble for
mounted clients. You will end up with *lower* uptime than a single server.

> 2. 1 brick per drive, Gluster "distributed replicated" volumes, no
> internal redundancy

This would increase Gluster performance via multiple bricks; however, a
single failed disk will put the entire node out of service. Moreover, a
Gluster heal is a much slower process than a simple RAID1/ZFS mirror
resync.

Can you explain in more detail how a single failed disk would bring a whole node out of service?

From your comments this one sounds the best, but having node outages from single-disk failures doesn't sound acceptable.
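
For context, this is roughly what I had in mind for option 2 (host names and
mount points are placeholders; consecutive bricks form the replica pairs, so
each drive on host1 is mirrored by the matching drive on host2):

    gluster volume create gv0 replica 2 \
        host1:/bricks/disk1/data host2:/bricks/disk1/data \
        host1:/bricks/disk2/data host2:/bricks/disk2/data \
        host1:/bricks/disk3/data host2:/bricks/disk3/data \
        host1:/bricks/disk4/data host2:/bricks/disk4/data \
        host1:/bricks/disk5/data host2:/bricks/disk5/data \
        host1:/bricks/disk6/data host2:/bricks/disk6/data \
        host1:/bricks/disk7/data host2:/bricks/disk7/data \
        host1:/bricks/disk8/data host2:/bricks/disk8/data

(an arbiter host could of course be added per your earlier suggestion)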




> 3. 1 brick per host, Gluster "distributed replicated" volumes, no
> internal redundancy

Again, I suggest against it: a single failed disk will put the entire
node out of service *and* will cause a massive heal, as all data needs to
be copied from the surviving node, which is a long and stressful event for
the other node (and for the sysadmin).

In short, I would not use Gluster without *both* internal (RAID/ZFS) and
Gluster-level (replica) redundancy. For a simple setup, I suggest option #1
but in a replica setup (rather than distributed). You can increase the
number of bricks (mountpoints) via multiple ZFS datasets, if needed.
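
A minimal sketch of that layout, assuming a raidz2 pool over the 8 disks on
each host (pool, dataset and host names are placeholders):

    # on each host: one pool with internal redundancy,
    # one ZFS dataset per brick
    zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh
    zfs create -o mountpoint=/bricks/brick1 tank/brick1
    zfs create -o mountpoint=/bricks/brick2 tank/brick2

    # replicated volume using the datasets as bricks
    # (two bricks per host, arbiter bricks on a third machine)
    gluster volume create gv0 replica 3 arbiter 1 \
        host1:/bricks/brick1/data host2:/bricks/brick1/data host3:/bricks/arb1/data \
        host1:/bricks/brick2/data host2:/bricks/brick2/data host3:/bricks/arb2/data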



Regards.

--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@xxxxxxxxxx - info@xxxxxxxxxx
GPG public key ID: FF5F32A8
--
Miguel Mascarenhas Filipe
________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
