Bricks suggestions

I'm doing software RAID-0 with a Gluster volume at replica 2 across 2 nodes (essentially getting RAID 10, I hope). The OS will monitor the software RAID and email root when it becomes degraded. Then I'll take the whole NODE out of the volume, fix the software RAID, and bring it back in. That's the plan.
Haven't tested it yet.
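A rough sketch of what that plan might look like on the command line. The array name `md0`, brick paths, volume name `gv0`, and disk device names are all illustrative assumptions, not details from this thread, and `gluster volume heal` requires GlusterFS 3.3 or later:

```shell
# Have mdadm email root on array events (MAILADDR is read from mdadm.conf).
echo "MAILADDR root" >> /etc/mdadm/mdadm.conf
mdadm --monitor --scan --daemonise

# One replica-2 volume across the two nodes, each brick backed by a RAID-0 array.
gluster volume create gv0 replica 2 \
    node1:/bricks/md0 node2:/bricks/md0

# If a disk in node2's RAID-0 array dies, the whole array is lost (RAID-0 has
# no redundancy), so the "fix" is: stop glusterd, recreate the array and
# filesystem, restart, and let Gluster heal everything from the replica.
# (On node2 -- device list is an assumption)
service glusterd stop
mdadm --create /dev/md0 --level=0 --raid-devices=8 /dev/sd[b-i]
mkfs.xfs /dev/md0
mount /dev/md0 /bricks/md0
service glusterd start
# (On node1)
gluster volume heal gv0 full
```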

On Apr 29, 2012, at 4:18 PM, Brian Candler wrote:

> On Sat, Apr 28, 2012 at 11:25:30PM +0200, Gandalf Corvotempesta wrote:
>>   I'm also considering no raid at all.
>> 
>>   For example, with 2 servers and 8 SATA disks each, I can create a single
>>   XFS filesystem on every disk and then create a replicated brick for
>>   each.
>> 
>>   For example:
>> 
>>   server1:brick1 => server2:brick1
>> 
>>   server1:brick2 => server2:brick2
>> 
>>   and so on.
>> 
>>   After that, I can use these bricks to create a distributed volume.
>> 
>>   In case of a disk failure, I have to heal only one disk at a time and not
>>   the whole volume, right?
> 
> Yes. I considered that too. What you have to weigh that against is the
> management overhead:
> 
> - recognising a failed disk
> - replacing a failed disk (which involves creating a new XFS filesystem
>  and mounting it at the right place)
> - forcing a self-heal
> 
> Whereas detecting a failed RAID disk is straightforward, and so is swapping
> it out.
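The three steps above might look like this for a single failed brick disk. The device name, brick path, and volume name are illustrative assumptions, and the heal command assumes GlusterFS 3.3 or later (earlier releases triggered self-heal by stat'ing files through the mount):

```shell
# 1. Recognise the failed disk (smartd/mdadm alerts, XFS errors in dmesg),
#    then create a new XFS filesystem on the replacement disk.
mkfs.xfs /dev/sdc            # replacement disk; device name is an assumption

# 2. Mount it at the same place the dead brick occupied
#    (and add the matching /etc/fstab entry so it survives a reboot).
mount /dev/sdc /bricks/brick2

# 3. Force a self-heal so the surviving replica repopulates the empty brick.
gluster volume heal gv0 full
```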
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
