Any update pls : Req details for "config as like striping with fail over" for many servers

On 03/28/2011 08:16 AM, s.varadha rajan wrote:
> I would like to implement GlusterFS 3.1 in my organization. We have around 10
> servers, all hosting different applications such as web servers (Apache,
> Tomcat), VMware, and a DNS server. The servers have different disk capacities,
> such as 1 TB and 2 TB, and all of them run Ubuntu 10.04.
> 
> My requirements:
> 
> 1. Connect all the servers through GlusterFS 3.1.x.
> 2. If I configure replication, I lose disk space, so I would like something
> like striping, but GlusterFS doesn't provide failover for that.
> 3. For example, with server1:/data (120 GB), server2:/data (120 GB),
> server3:/home/g1 (2 TB), server4:/home/g2 (2 TB), and so on, I want to
> connect all the servers so I get one big storage space. If I go with striping
> or distribute, and one server fails, I can't access the volume and get the
> error "Transport endpoint is not connected".
> 
> Please let me know a solution and a configuration approach for this. I have
> been searching Google for the past 10 days with no proper result.

My recommendation would be to take advantage of the fact that you can
serve multiple bricks from one physical machine.  First, divide the
space you'll be using on each machine into multiple bricks so that all
bricks in the cluster will be of approximately equal size.  Then, make
replica pairs containing bricks *on different nodes* and then distribute
across those.  Mostly that's a matter of carefully controlling the order
in which you specify those bricks in your "gluster volume create"
command.  The result should give you pretty good space utilization while
still protecting against a single machine failure.
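To make the brick-ordering point concrete, here is a sketch of what such a create command could look like. The hostnames and paths are hypothetical; the key detail is that with "replica 2", gluster pairs up bricks in the order they are listed, so each adjacent pair must come from two different machines:

```shell
# Hypothetical example: the two large servers each contribute two bricks
# so all bricks are roughly the same size, and every adjacent pair of
# bricks (a replica pair) lives on two different nodes.
gluster volume create bigvol replica 2 \
    server1:/data/brick1     server2:/data/brick1 \
    server3:/home/g1/brick1  server4:/home/g2/brick1 \
    server3:/home/g1/brick2  server4:/home/g2/brick2
```

If you instead listed two bricks from the same server next to each other, that replica pair would not survive the loss of that machine, which defeats the purpose.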

Dividing up the space on a machine into several bricks can be a bit
tricky.  If there are several physical disks, that's great.  If you use
partitions, you will probably get poor performance as an even
distribution of work among bricks (something DHT tries to achieve) will
result in the disk heads thrashing between partitions.  If you use plain
old subdirectories you won't have that problem so much, but the
free-space reporting will be a bit inaccurate and could cause problems
when disks which are shared among several bricks become nearly full.
It's probably the best option overall, though, since it's easy to do and
will perform/behave pretty well the rest of the time.
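As an illustration of the subdirectory approach and its free-space caveat, here is a hypothetical layout for one 2 TB filesystem split into two bricks:

```shell
# Hypothetical: one large filesystem mounted at /home/g1, divided into
# two bricks via plain subdirectories.
mkdir -p /home/g1/brick1 /home/g1/brick2

# The caveat mentioned above: both bricks sit on the same underlying
# filesystem, so each reports that filesystem's free space, and naive
# accounting double-counts it as the disk fills up.
df -h /home/g1/brick1 /home/g1/brick2
```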

