Re: Advice for running out of space on a replicated 4-brick gluster

On February 18, 2020 1:16:19 AM GMT+02:00, Artem Russakovskii <archon810@xxxxxxxxx> wrote:
>Hi all,
>
>We currently have an 8TB 4-brick replicated volume on our 4 servers,
>and
>are at 80% capacity. The max disk size on our host is 10TB. I'm
>starting to
>think about what happens closer to 100% and see 2 options.
>
>Either we go with another new 4-brick replicated volume and start
>dealing
>with symlinks in our webapp to make sure it knows which volumes the
>data is
>on, which is a bit of a pain (but not too much) on the sysops side of
>things. Right now the whole volume mount is symlinked to a single
>location
>in the webapps (an uploads/ directory) and life is good. After such a
>split, I'd have to split uploads into yeardir symlinks, make sure
>future
>yeardir symlinks are created ahead of time and point to the right
>volume,
>etc.
>
>The other direction would be converting the replicated volume to a
>distributed replicated one
>https://docs.gluster.org/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/#creating-distributed-replicated-volumes,
>but I'm a bit scared to do it with production data (even after testing,
>of
>course), and having never dealt with a distributed replicated volume.
>
>1. Is it possible to convert our existing volume on the fly by adding 4
>   bricks but keeping the replica count at 4?
>2. What happens if bricks 5-8, which contain the replicated volume #2,
>   go down for whatever reason or can't meet their quorum, but the
>   replicated volume #1 is still up? Does the whole main combined volume
>   become unavailable, or only a portion of it which has data residing
>   on replicated volume #2?
>3. Any other gotchas?
>
>Thank you very much in advance.
>
>Sincerely,
>Artem
>
>--
>Founder, Android Police <http://www.androidpolice.com>, APK Mirror
><http://www.apkmirror.com/>, Illogical Robot LLC
>beerpla.net | @ArtemR
><http://twitter.com/ArtemR>

Distributed replicated sounds more reasonable.

Out of curiosity, why did you decide to have an even number of bricks in the replica set? It can still suffer from split-brain.
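
For context, a minimal sketch of the client-quorum knobs that matter with an even replica count (the volume name "myvol" is a placeholder, not from this thread):

    # With an even replica count, client quorum can be pinned to a strict
    # majority so writes fail cleanly instead of risking split-brain:
    gluster volume set myvol cluster.quorum-type fixed
    gluster volume set myvol cluster.quorum-count 3   # require 3 of 4 replicas up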

1. It should be OK, but I have never done it. Test on some VMs before proceeding.
Rebalance might take some time, so keep that in mind.
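
As a rough sketch, the conversion in question 1 would look something like this (host names and brick paths are placeholders, not your actual layout):

    # Bricks must be added in a multiple of the replica count (4 here);
    # this turns the 1 x 4 replicated volume into a 2 x 4
    # distributed-replicated one:
    gluster volume add-brick myvol \
        server5:/data/brick1 server6:/data/brick1 \
        server7:/data/brick1 server8:/data/brick1

    # Spread the existing files across both replica sets, then watch
    # progress until it completes:
    gluster volume rebalance myvol start
    gluster volume rebalance myvol status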

2. All files residing on the bricks 5-8 replica set will be unavailable until you recover that set of bricks; the rest of the volume (the files on bricks 1-4) stays accessible.
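
If you ever need to check which replica set a particular file landed on, the pathinfo virtual xattr on a FUSE mount lists the backing bricks (the mount path below is hypothetical):

    # Lists the bricks holding this file's data:
    getfattr -n trusted.glusterfs.pathinfo /mnt/myvol/uploads/some-file.png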

Best Regards,
Strahil Nikolov

________

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/441850968

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users


