Re: What RAID type and why?

On Sun, Mar 7, 2010 at 12:22 AM, Keld Simonsen <keld@xxxxxxxxxx> wrote:
> On Sun, Mar 07, 2010 at 03:10:18AM -0500, Guy Watkins wrote:
>> } -----Original Message-----
>> } From: Keld Simonsen [mailto:keld@xxxxxxxxxx]
>> } Sent: Sunday, March 07, 2010 3:07 AM
>> } To: Neil Brown
>> } Cc: Guy Watkins; 'Greg Freemyer'; 'Mark Knecht'; 'Linux-RAID'
>> } Subject: Re: What RAID type and why?
>> }
>> } On Sun, Mar 07, 2010 at 01:21:13PM +1100, Neil Brown wrote:
>> } > On Sat, 06 Mar 2010 18:17:44 -0500
>> } > "Guy Watkins" <linux-raid@xxxxxxxxxxxxxxxx> wrote:
>> } >
>> } > > }
>> } > > } At a minimum I would build a 3-disk raid 6.  raid 6 does a lot of
>> } i/o
>> } > > } which may be a problem.
>> } > >
>> } > > If he only needs 3 drives I would recommend RAID1.  Can still lose 2
>> } drives
>> } > > and you don't have the RAID6 I/O overhead.
>> } > >
>> } >
>> } > and as md/raid6 requires at least 4 drives, RAID1 is not just the best
>> } > solution to survive two failures on a 3-device array, it is the only
>> } solution.
>> }
>> } Raid10 can also do it.
>> }
>> } raid1 is in many ways obsolete and you should rather use raid10,
>> } which in my eyes is just another way of doing the same conceptual thing
>> } as raid1.
>> }
>> } Best regards
>> } keld
>>
>> Are you sure RAID10 can lose 2 of 3 drives?  I did not think it worked that
>> way.  I thought RAID10 maintained 2 copies, not 3.  But I have never used
>> RAID10.
>
> If you ask mdadm to do it, yes. Example:
>
> mdadm --create /dev/md3 --chunk=256 -R -l 10 -n 3 -p f3 /dev/sd[abc]1
>
> the "-p f3" is the one that asks to have 3 copies.
>
> best regards
> keld

Yes, that approach would work, though with three devices md has to use a
more complicated layout to distribute the copies across the drives.  Since
your application seems to be read-heavy, I agree with using the 'far'
layout.
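
For anyone trying that, a quick way to confirm the level, layout and copy
count after creating the array (assuming the /dev/md3 device from the
command above) is:

mdadm --detail /dev/md3 | grep -E 'Level|Layout|Devices'
cat /proc/mdstat

With "-p f3" you should see something like "Layout : far=3" in the
--detail output, and /proc/mdstat reports the copy count as well.  Keep in
mind that with 3 copies on 3 drives the usable capacity is that of a
single drive.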

However, the disadvantages of mdadm raid10 compared to raid1 have been
two-fold (until kernel 2.6.33+):
1) Fixed in 2.6.33: striped md arrays did not previously support
write barriers (which filesystems rely on for journal/atomic writes).
2) Still unsupported(?): reshaping raid10 arrays.  (See the sketch below.)
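
For what it's worth, here is a rough way to check both points on a given
box (device and mount point names are just examples; the exact kernel
messages vary by filesystem and kernel version):

# 1) Barriers: mount with barriers requested and watch the kernel log.
#    On pre-2.6.33 kernels with striped md, the filesystem typically
#    reports that it is disabling barriers.
mount -o barrier=1 /dev/md3 /mnt/test
dmesg | tail

# 2) Reshape: try to grow the array onto a fourth disk.  mdadm will
#    refuse the --grow for raid10 if the kernel/mdadm combination does
#    not support raid10 reshape.
mdadm --add /dev/md3 /dev/sdd1
mdadm --grow /dev/md3 --raid-devices=4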
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
