best practices? 500+ win/mac computers, gluster, samba, new SAN, new hardware

On Wed, Feb 01, 2012 at 12:21:17PM -0600, D. Dante Lorenso wrote:
>> >Gluster does give you the option of a "distributed replicated" volume, so
>> >you can get both the "RAID 0" and "RAID 1" functionality.
>> 
>> If you have 8 drives connected to a single machine, how do you
>> introduce those drives to Gluster?  I was thinking I'd combine them
>> into a single volume using RAID 0 and mount that volume on a box and
>> turn it into a brick.  Otherwise you have to add 8 separate bricks,
>> right?  That's not better, is it?

>I'm in the process of building a pair of test systems (in my case 12 disks
>per server), and haven't quite got to building the Gluster layer, but yes: 8
>separate filesystems and 8 separate bricks per server is what I'm suggesting
>you consider.
>
>Then you create a distributed replicated volume using 16 bricks across 2
>servers, added in the correct order so that they pair up and down
>(serverA:brick1 serverB:brick1 serverA:brick2 serverB:brick2, etc.) - or
>across 4 servers or however many you're building.
>
>The advantage is that if you lose one disk, 7/8 of the data is still usable
>on both disks, and 1/8 is still available on one disk.  If you lose a second
>disk, there is a 1 in 15 chance that it's the mirror of the other failed
>one, but a 14 in 15 chance that you won't lose any data.  Furthermore,
>replacing the failed disk only requires synchronising (healing) one disk's
>worth of data.
>
>Now, if you decide to make RAID0 sets instead, then losing one disk will
>destroy the whole filesystem.  If you lose any disk in the second server, you
>will have lost everything.  And when you replace the one failed disk, you
>will need to make a new filesystem across the whole RAID0 array and resync
>all 8 disks' worth of data.
>
>I think it only makes sense to build an array brick if you are using RAID1
>or higher.  RAID1 or RAID10 is fast but presumably you don't want to store 4
>copies of your data, 2 on each server.  The write performance of RAID5 and
>RAID6 is terrible.  An expensive RAID card with a battery-backed write-back
>cache will make it slightly less terrible, but still terrible.
>
>Regards,
>
>Brian.
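
For anyone reading this in the archives: the brick ordering Brian describes
above translates into a create command along these lines (the hostnames and
brick paths here are made up, so adjust them, and check the syntax against
your Gluster release):

    gluster volume create myvol replica 2 \
        serverA:/export/brick1 serverB:/export/brick1 \
        serverA:/export/brick2 serverB:/export/brick2 \
        serverA:/export/brick3 serverB:/export/brick3 \
        serverA:/export/brick4 serverB:/export/brick4 \
        serverA:/export/brick5 serverB:/export/brick5 \
        serverA:/export/brick6 serverB:/export/brick6 \
        serverA:/export/brick7 serverB:/export/brick7 \
        serverA:/export/brick8 serverB:/export/brick8
    gluster volume start myvol

With "replica 2", each consecutive pair of bricks on the command line forms
one mirrored set, which is why the serverA/serverB alternation matters.  And
as Brian notes, replacing a failed disk only resyncs that one brick; in 3.3
and later you can kick off the self-heal with "gluster volume heal myvol
full".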

I would like to second Brian's suggestions. I have almost exactly this setup
and it has worked perfectly for well over a year. The added benefit is that
you get exactly 50% of the raw storage as usable space. If you replicate
across RAID5/6 arrays instead, you get significantly less than that (RAID5
costs you one disk and RAID6 costs you two disks per array, on top of the
mirroring).
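
To put rough numbers on that (the 2 TB disk size is just for illustration),
with 2 servers x 8 x 2 TB disks, i.e. 32 TB raw:

    replica 2 on raw-disk bricks:          16 TB usable (50%)
    replica 2 over 8-disk RAID5 bricks:    14 TB usable (~44%)
    replica 2 over 8-disk RAID6 bricks:    12 TB usable (37.5%)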

Larry Bates
vitalEsafe, Inc.



