Re: Gluster infrastructure question


Hi guys,

Thanks for all these reports. I think I'll change my RAID level to 6,
let the RAID controller handle building and rebuilding the array, and
replicate again with GlusterFS. That gives me more capacity, but I
need to check whether the write throughput is acceptable.
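
For the throughput check I'll probably start with something simple
like a sequential dd run on the mounted volume (the mount point and
file name below are just example values):

    # rough sequential write test against the GlusterFS mount
    dd if=/dev/zero of=/mnt/glustervol/ddtest bs=1M count=1024 conv=fdatasync
    rm /mnt/glustervol/ddtest

That only gives a rough sequential number rather than a realistic
workload, but it should show whether the RAID 6 write penalty hurts.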

I don't think I can take advantage of GlusterFS with a lot of bricks,
because in my case I've found more cons than pros.
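
(For reference, the brick ordering Joe describes below is what defines
the replica pairs; spelled out with the example hostnames from his
mail, it would look roughly like this:

    # replica 2: bricks pair up in the order they are listed
    gluster volume create myvol replica 2 \
        server1:/data/brick1 server2:/data/brick1 \
        server3:/data/brick1 server4:/data/brick1
    # -> server1/server2 are one replica pair, server3/server4 the other

and any bricks added to such a volume later also have to come in
pairs.)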

@Ben: thanks for this very detailed document!


Cheers and Thanks
Heiko


On 10.12.2013 00:38, Dan Mons wrote:
> On 10 December 2013 08:09, Joe Julian <joe@xxxxxxxxxxxxxxxx>
> wrote:
>> Replicas are defined in the order bricks are listed in the volume
>> create command. So gluster volume create myvol replica 2
>> server1:/data/brick1 server2:/data/brick1 server3:/data/brick1
>> server4:/data/brick1 will replicate between server1 and server2
>> and replicate between server3 and server4.
>> 
>> Bricks added to a replica 2 volume after it's been created will
>> require pairs of bricks.
>> 
>> The best way to "force" replication to happen on another server
>> is to just define it that way.
> 
> Yup, that's understood.  The problem is when (for argument's sake):
> 
> * We've defined 4 hosts with 10 disks each
> * Each individual disk is a brick
> * Replication is defined correctly when creating the volume initially
> * I'm on holidays, my employer buys a single node, configures it
>   brick-per-disk, and the IT junior adds it to the cluster
> 
> All good up until that final point, and then I've got that fifth
> node at the end replicating to itself.  Node goes down some months
> later, chaos ensues.
> 
> Not a GlusterFS/technology problem, but a problem with what
> frequently happens at a human level.  As a sysadmin, these are also
> things I need to work around, even if it means deviating from best
> practices. :)
> 
> -Dan
> 

-- 
Anynines.com

Avarteq GmbH
B.Sc. Computer Science
Heiko Krämer
CIO
Twitter: @anynines

----
Managing Directors: Alexander Faißt, Dipl.-Inf.(FH) Julian Fischer
Commercial register: AG Saarbrücken HRB 17413, VAT ID: DE262633168
Registered office: Saarbrücken

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users
