Re: What's the typical RAID10 setup?

the main problem is you can only lose one disk
if you lose a disk, you should replace it right away
and raid1 allows you to do that online (without shutting down your server)
that's the main use of raid1 (replica/mirror/redundancy)


2011/1/31 Roberto Spadim <roberto@xxxxxxxxxxxxx>:
> the only way to make it safer is to put more devices in the raid1
> for example:
> disks=6 (this was wrong in my last email)
> raid0= 1-2(a) 3-4(b) 5-6(c)
> raid1= a,b,c
>
> or
> raid1= 1-2-3(a) 4-5-6(b)
> raid0= a,b
> now you can lose up to two disks from each mirror (any two failures
> are always safe)
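The two 6-disk layouts above can be sanity-checked by brute force. This is just a sketch in pure Python; the disk numbers and a/b/c group labels follow the email, and the function names are made up for illustration:

```python
from itertools import combinations

DISKS = range(1, 7)

def alive_mirror_of_stripes(failed):
    # raid0 pairs a=(1,2) b=(3,4) c=(5,6), then raid1 over a,b,c:
    # the array survives while at least one stripe is untouched.
    stripes = [{1, 2}, {3, 4}, {5, 6}]
    return any(not (s & set(failed)) for s in stripes)

def alive_stripe_of_mirrors(failed):
    # raid1 triples a=(1,2,3) b=(4,5,6), then raid0 over a,b:
    # the array survives while every mirror keeps at least one disk.
    mirrors = [{1, 2, 3}, {4, 5, 6}]
    return all(not (m <= set(failed)) for m in mirrors)

def survivors(k):
    """How many k-disk failure sets each layout survives, out of C(6,k)."""
    combos = list(combinations(DISKS, k))
    return (sum(map(alive_mirror_of_stripes, combos)),
            sum(map(alive_stripe_of_mirrors, combos)),
            len(combos))

for k in (2, 3, 4):
    print(k, survivors(k))
```

Counting this way, both layouts survive every possible 2-disk failure; with 3 failed disks the stripe-of-mirrors layout survives 18 of the 20 cases (it only dies when all three copies of one mirror are lost), while the mirror-of-stripes layout survives 12 of 20.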
>
> 2011/1/31 Roberto Spadim <roberto@xxxxxxxxxxxxx>:
>> rewriting..
>> using raid10 or raid01 you can have problems if you lose 2 drives too...
>> if you lose both disks of a raid1 pair, you lose the raid1...
>> see:
>>
>> disks=6
>> RAID 1+0
>> raid1= 1-2(A) ; 3-4(B) ; 5-6(C)
>> raid0= A-B-C
>> if you lose all of A, B or C your raid0 stops
>>
>> RAID 0+1
>> raid0= 1-2-3(A) ; 4-5-6(B)
>> raid1= A-B
>> if you lose (1,4 or 1,5 or 1,6 or 2,4 or 2,5 or 2,6 or 3,4 or 3,5 or
>> 3,6) your raid1 stops
>>
>> so with raid1+0 or raid0+1 you can't always survive losing two disks...
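The fatal two-disk combinations for both 6-disk layouts can be enumerated with a short script (a sketch only; disk numbers and A/B/C labels follow the email):

```python
from itertools import combinations

def fatal_pairs_raid10():
    # RAID 1+0: mirrors A=(1,2) B=(3,4) C=(5,6) striped together;
    # a two-disk failure is fatal only when it wipes out a whole mirror.
    mirrors = [{1, 2}, {3, 4}, {5, 6}]
    return [p for p in combinations(range(1, 7), 2)
            if any(m <= set(p) for m in mirrors)]

def fatal_pairs_raid01():
    # RAID 0+1: stripes A=(1,2,3) B=(4,5,6) mirrored; a two-disk failure
    # is fatal when each stripe has lost at least one member.
    stripes = [{1, 2, 3}, {4, 5, 6}]
    return [p for p in combinations(range(1, 7), 2)
            if all(s & set(p) for s in stripes)]

print(fatal_pairs_raid10())  # [(1, 2), (3, 4), (5, 6)]
print(fatal_pairs_raid01())  # the nine pairs with one disk from each stripe
```

So out of the 15 possible two-disk failures, 3 kill the RAID 1+0 layout and 9 kill the RAID 0+1 layout.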
>>
>>
>>
>> 2011/1/31 Roberto Spadim <roberto@xxxxxxxxxxxxx>:
>>> which one gives you a faster array, raid0+1 or raid1+0?
>>>
>>> 2011/1/31 Roberto Spadim <roberto@xxxxxxxxxxxxx>:
>>>> hmm, that's right,
>>>> but it's not an 'increase' in general (only when comparing raid0+1
>>>> against raid1+0): doing raid1 first and raid0 on top has FEWER
>>>> points of failure than raid0 first and raid1 on top, since the
>>>> number of failure points is proportional to the number of raid1
>>>> devices.
>>>>
>>>> 2011/1/31 Robin Hill <robin@xxxxxxxxxxxxxxx>:
>>>>> On Mon Jan 31, 2011 at 01:00:13PM -0200, Roberto Spadim wrote:
>>>>>
>>>>>> i think make two very big raid 0
>>>>>> and after raid1
>>>>>> is better
>>>>>>
>>>>> Not really - you increase the failure risk doing this.  With this setup,
>>>>> a single drive failure from each RAID0 array will lose you the entire
>>>>> array.  With the reverse (RAID0 over RAID1) you require both drives
>>>>> in a RAID1 pair to fail in order to lose the array.  With a 4 drive
>>>>> array and 2 drive failures the risk is 67% for RAID1 over RAID0
>>>>> versus 33% for RAID0 over RAID1, and with a 6 drive array it changes
>>>>> to 60% versus 20%.
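These percentages can be checked by counting which of the C(n,2) two-drive failure pairs are fatal for each nesting order. A brute-force sketch (the helper name and group sets are made up for illustration):

```python
from itertools import combinations

def pct_fatal(groups, kind):
    """Percent of two-drive failures that kill the array.

    kind='mirror_of_stripes' -> RAID1 over RAID0 (groups are the stripes);
    fatal when every stripe has at least one failed member.
    kind='stripe_of_mirrors' -> RAID0 over RAID1 (groups are the mirrors);
    fatal when some mirror has lost all of its members.
    """
    disks = sorted(set().union(*groups))
    pairs = list(combinations(disks, 2))
    if kind == 'mirror_of_stripes':
        dead = sum(all(g & set(p) for g in groups) for p in pairs)
    else:
        dead = sum(any(g <= set(p) for g in groups) for p in pairs)
    return round(100 * dead / len(pairs))

# 6 drives: two 3-disk stripes mirrored vs three mirror pairs striped
print(pct_fatal([{1, 2, 3}, {4, 5, 6}], 'mirror_of_stripes'))   # 60
print(pct_fatal([{1, 2}, {3, 4}, {5, 6}], 'stripe_of_mirrors')) # 20

# 4 drives: two 2-disk stripes mirrored vs two mirror pairs striped
print(pct_fatal([{1, 2}, {3, 4}], 'mirror_of_stripes'))         # 67
print(pct_fatal([{1, 2}, {3, 4}], 'stripe_of_mirrors'))         # 33
```

The counting confirms that putting the mirrors at the bottom (RAID0 over RAID1, i.e. RAID10) is the safer nesting at both array sizes.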
>>>>>
>>>>> Cheers,
>>>>>    Robin
>>>>> --
>>>>> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
>>>>> the body of a message to majordomo@xxxxxxxxxxxxxxx
>>>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Roberto Spadim
>>>> Spadim Technology / SPAEmpresarial
>>>>
>>>
>>>
>>>
>>> --
>>> Roberto Spadim
>>> Spadim Technology / SPAEmpresarial
>>>
>>
>>
>>
>> --
>> Roberto Spadim
>> Spadim Technology / SPAEmpresarial
>>
>
>
>
> --
> Roberto Spadim
> Spadim Technology / SPAEmpresarial
>



-- 
Roberto Spadim
Spadim Technology / SPAEmpresarial

