Re: Gluster infrastructure question

Hi Heiko,

Some years ago I had to deliver reliable storage that would be easy to grow in size over time.
For that I was in close contact with
PrestoPRIME, which produced a lot of interesting research results that are accessible to the public.
What struck me was the general concern about how, when, and in which patterns hard drives fail,
and the rebuild time when a "big" (i.e. 2TB+) drive fails. (One of the PrestoPRIME papers deals with that in detail.)
With that background, my approach was to build relatively small RAID6 bricks (9 x 2 TB + 1 hot spare)
and connect them together with a distributed GlusterFS volume.
I never experienced any problems with that and felt quite comfortable about it.
That was for just a lot of big-file data exported via Samba.
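
For illustration, such a distribute-only volume built from the per-node RAID6 arrays would be created roughly like this (hostnames and mount paths are placeholders, not my actual setup):

    # one RAID6 array per node, each mounted and used as a single brick
    gluster volume create archive \
        node1:/export/raid6/brick node2:/export/raid6/brick \
        node3:/export/raid6/brick node4:/export/raid6/brick
    gluster volume start archive

With no replica count given, Gluster creates a pure distribute volume, so redundancy comes only from the RAID6 underneath each brick.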
At the same time I used another, mirrored, GlusterFS volume as a storage backend for
my VM images; same there, no problems, and much less hassle and headache than the DRBD and OCFS2 setup
I run on another system.
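
A minimal sketch of such a mirrored (replica 2) volume, again with placeholder names:

    gluster volume create vmstore replica 2 \
        node1:/export/vm/brick node2:/export/vm/brick
    gluster volume start vmstore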
hth
best 

Bernhard


Ecologic Institute | Bernhard Glomm
IT Administration

Phone: +49 (30) 86880 134
Fax: +49 (30) 86880 100
Skype: bernhard.glomm.ecologic
Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | Germany
Managing Director: R. Andreas Kraemer | Registered: AG Charlottenburg HRB 57947 | VAT ID: DE811963464
Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH


On Dec 9, 2013, at 2:18 PM, Heiko Krämer <hkraemer@xxxxxxxxxxx> wrote:

Heyho guys,

I've been running GlusterFS for years in a small environment without big
problems.

Now I'm going to use GlusterFS for a bigger cluster, but I have some
questions :)

Environment:
* 4 servers
* 20 x 2TB HDDs each
* RAID controller
* RAID 10
* 4 bricks => replicated, distributed volume
* Gluster 3.4

1)
I'm wondering whether I can drop the RAID10 on each server and create
a separate brick for each HDD.
In that case the volume would have 80 bricks (4 servers x 20 HDDs). Is there
any experience with write throughput in a production system with
this many bricks? In addition, I'd get double the usable HDD
capacity.
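
To illustrate what I mean (hostnames and brick paths are made up), the volume would then be built from all the single-disk bricks, with replica pairs formed in the order the bricks are listed:

    # only the first two disks per server are shown; the real command
    # would continue through disk20 on each server (80 bricks in total)
    gluster volume create bigvol replica 2 \
        srv1:/bricks/disk01 srv2:/bricks/disk01 \
        srv3:/bricks/disk01 srv4:/bricks/disk01 \
        srv1:/bricks/disk02 srv2:/bricks/disk02 \
        srv3:/bricks/disk02 srv4:/bricks/disk02
    gluster volume start bigvol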

2)
I've heard a talk about GlusterFS and scaling out. The main point was
that with more bricks in use, the scale-out process takes a long
time. The problem was/is the hash algorithm. So I'm wondering: what is
the difference between one very big brick (RAID10, 20TB, on each server) and
many smaller bricks? Which is faster, and are there any issues?
Does anyone have experience with this?
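
For context, by "scale-out process" I mean adding bricks and then rebalancing the volume, roughly like this (volume and host names are only examples):

    # add one new replica pair, then move existing data onto it
    gluster volume add-brick bigvol srv5:/bricks/disk01 srv6:/bricks/disk01
    gluster volume rebalance bigvol start
    gluster volume rebalance bigvol status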

3)
Failover of an HDD is not a big deal for a RAID controller with a
hot-spare HDD. GlusterFS will rebuild automatically if a brick fails and
comes back with no data present; this will generate a lot of network traffic
between the mirror bricks, but it will handle it just like the RAID
controller would, right?
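
For reference, I assume the rebuild on the Gluster side would be monitored and, if needed, triggered with the self-heal commands (volume name again just an example):

    gluster volume heal bigvol info    # list entries still pending heal
    gluster volume heal bigvol full    # trigger a full self-heal of the volume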



Thanks and cheers
Heiko



--
Anynines.com

Avarteq GmbH
B.Sc. Computer Science
Heiko Krämer
CIO
Twitter: @anynines

----
Managing Directors: Alexander Faißt, Dipl.-Inf. (FH) Julian Fischer
Commercial register: AG Saarbrücken HRB 17413, VAT ID: DE262633168
Registered office: Saarbrücken


_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users
