Re: design of gluster cluster

Hi,

I don't have any more hosts available.

I am a bit lost here: why replica 3 arbiter 1, i.e. not replica 2 arbiter 1? Also there is no distributed part; is the distributed flag automatically assumed? And with replica 3 there is already a quorum (2 of 3), so no arbiter is needed? I have this running already like this, so I am assuming it's robust?

I am still struggling to understand the syntax; I wish the docs / examples were better.

So on each gluster node I have an unused 120 GB data1 partition left over from the OS install; the arbiter bricks could go there? (Rough prep sketch after the command below.)

In which case, something like:

gluster volume create my-volume replica 2 arbiter 1 \
    host1:/path/to/brick host2:/path/to/brick (arb-)host3:/path/to/brick2 \
    host4:/path/to/brick host5:/path/to/brick (arb-)host6:/path/to/brick2 \
    host3:/path/to/brick host6:/path/to/brick (arb-)host1:/path/to/brick2

Is this a sane command?
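
For the data1 partition I guess I'd first have to format and mount it as a brick; a rough sketch, assuming the partition is /dev/sdb2 (device name and paths are just my guesses):

    mkfs.xfs -i size=512 /dev/sdb2    # 512-byte inodes, commonly recommended for gluster bricks
    mkdir -p /data1
    mount /dev/sdb2 /data1
    mkdir -p /data1/brick2            # brick directory under the mountpoint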

Otherwise maybe I am better off doing 3 x 2 TB separate volumes. Rather interesting trying to understand this stuff!


 

On 12 June 2018 at 23:10, Dave Sherohman <dave@xxxxxxxxxxxxx> wrote:
On Tue, Jun 12, 2018 at 03:04:14PM +1200, Thing wrote:
> What I would like to do I think is a,
>
> *Distributed-Replicated volume*
>
> a) have 1 and 2 as raid1
> b) have 4 and 5 as raid1
> c) have 3 and 6 as a raid1
> d) join this as concatenation 2+2+2tb

You probably don't actually want to do that because quorum is handled
separately for each subvolume (bricks 1/2, 4/5, or 3/6), not a single
quorum for the volume as a whole.  (Consider if bricks 1 and 2 both went
down.  You'd still have 4 of 6 bricks running, so whole-volume quorum
would still be met, but the volume can't continue to run normally since
the first subvolume is completely missing.)
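
To picture it with the layout above:

    subvol A: bricks 1+2   <- both down => subvol A offline
    subvol B: bricks 4+5   <- up
    subvol C: bricks 3+6   <- up
    # 4 of 6 bricks still running, but every file that lives on
    # subvol A is unreachable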

In the specific case of replica 2, gluster treats the first brick in
each subvolume as "slightly more than one", so you'd be able to continue
normally if brick 2, 5, or 6 went down, but, if brick 1, 4, or 3 went
down, all files on that subvolume would become read-only.
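
For reference, this client-side quorum behaviour is governed by the
cluster.quorum-type option; assuming your volume is named my-volume, you
can inspect or change it with:

    gluster volume get my-volume cluster.quorum-type
    gluster volume set my-volume cluster.quorum-type auto
    # 'auto': writes are allowed while more than half of the replica set's
    # bricks are up, or exactly half if that half includes the first brick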

> I tried to do this and failed as it kept asking for an arbiter, which the
> docs simply don't mention how to do.

https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/

> So say we have,
>
> a) have 1 and 2 as raid1 with 3 as the arbiter?
> b) have 4 and 5 as raid 1 with 6 as the arbiter
> c) 3 and 6 as a raid 1 with 5 as the arbiter
> d) join this as concatenation 2+2+2tb

I would recommend finding one or more other servers with small amounts
of unused space and allocating the arbiter bricks there, or carving a
gig or two out of your current bricks for that purpose.  Arbiters only
need about 4k of disk space per file in the subvolume, regardless of the
actual file size (the arbiter only stores metadata), so TB-sized
arbiters would be a huge waste of space, especially if you're only
putting a few very large files (such as VM disk images) on the volume.
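
Back-of-envelope, at ~4 KiB per file:

    #   10,000 files ->  10000 * 4 KiB ~  40 MiB of arbiter space
    #  250,000 files -> 250000 * 4 KiB ~   1 GiB
    # i.e. a 10-20 GB arbiter brick comfortably covers millions of files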

As a real-world data point, I'm using basically the setup you're aiming
for - six data bricks plus three arbiters, used to store VM disk images.
My data bricks are 11T each, while my arbiters are 98G.  Disk usage for
the volume is currently at 19%, but all arbiters are under 1% usage (the
largest has 370M used).  Assuming my usage patterns don't change, I
could completely fill my 11T subvolumes and only need about 1.5G in the
corresponding arbiters.
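
If you want to check the equivalent numbers on your own cluster (volume
name assumed to be my-volume):

    gluster volume status my-volume detail   # per-brick disk and inode usage, arbiters included
    df -h /path/to/brick                     # or plain df on each brick mountpoint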

> if so what is the command used to build this?

# gluster volume create my-volume replica 3 arbiter 1 \
      host1:/path/to/brick host2:/path/to/brick arb-host1:/path/to/brick \
      host4:/path/to/brick host5:/path/to/brick arb-host2:/path/to/brick \
      host3:/path/to/brick host6:/path/to/brick arb-host3:/path/to/brick
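
Bricks are grouped in the order given: each consecutive triplet becomes one
replica set with its third brick as the arbiter, and the three sets are
distributed automatically (no separate distribute flag is needed). You can
sanity-check the layout afterwards:

    # expected grouping:
    #   subvol 1: host1 + host2, arbiter on arb-host1
    #   subvol 2: host4 + host5, arbiter on arb-host2
    #   subvol 3: host3 + host6, arbiter on arb-host3
    gluster volume info my-volume   # should report something like: Number of Bricks: 3 x (2 + 1) = 9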

--
Dave Sherohman

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users
