Re: RAIDs and JBOD?




James Bensley wrote:
> Let's say I have three drives "knocking" around which are all 1TB SATA
> II drives, but each made by a different manufacturer. I am going to
> guess that these couldn't be used in a RAID 5? Or could they?

RAID is a manufacturer-independent concept, though depending on who
you ask it is generally considered good practice to use the same drive
make and model number throughout the array, mainly so the members have
very similar if not identical performance characteristics. If some
drives are faster than others, performance won't be consistent.
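As a rough illustration of why that matters (a minimal sketch with
made-up throughput numbers, not a benchmark): in a striped or parity
array a full-stripe operation touches every member, so the slowest
drive tends to set the pace for the whole set.

# Hypothetical sustained throughput, in MB/s, for three mismatched 1TB drives.
drive_mb_s = {"vendor_a": 120, "vendor_b": 110, "vendor_c": 90}

n = len(drive_mb_s)
matched = max(drive_mb_s.values()) * n   # if every drive kept up with the fastest
mixed = min(drive_mb_s.values()) * n     # slowest member sets the pace per stripe

print(f"matched set: ~{matched} MB/s streaming, mixed set: ~{mixed} MB/s")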


> However, could a similar result of 2TB of data with redundancy be
> achieved with JBOD?

There are a couple of ways of interpreting JBOD. In my experience the
most common is a shelf of dumb disks, often Fibre Channel attached;
here is an example of such a system:

http://www.infortrend.com/main/2_product/es_f16f-r2j2_s2j2.asp

Another way of interpreting it is presenting a bunch of disks to the
OS without any sort of RAID protection, either individually or as a
concatenated group (set up by the host controller).
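To make the capacity trade-off concrete (a minimal sketch using the
three hypothetical 1TB drives from the question): a plain
concatenation gives you all 3TB but no redundancy, while RAID 5 gives
you the 2TB-with-redundancy you're describing.

# The three hypothetical 1TB drives from the question.
drives_tb = [1.0, 1.0, 1.0]

# JBOD / concatenation: all capacity is usable, but there is no
# redundancy -- lose a drive and you lose whatever lived on it.
jbod_usable = sum(drives_tb)

# RAID 5: one drive's worth of capacity holds parity, and the array
# survives a single drive failure.
raid5_usable = sum(drives_tb) - max(drives_tb)

print(f"JBOD:   {jbod_usable:.1f} TB usable, tolerates 0 failures")
print(f"RAID 5: {raid5_usable:.1f} TB usable, tolerates 1 failure")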

> Also regarding RAID 5, three drives of data to one for parity is the
> max ratio I believe? I.e. to expand this by adding another data drive,
> the original parity drive would no longer cover this and another would
> be required, is this correct?

It depends on the implementation. I can't speak for Linux software
RAID, but it is not too uncommon to have 5 data and 1 parity (5+1), or
8+1, and some go as high as 12+1 or even higher (shudder). You don't
need an extra parity drive as the array grows; RAID 5 always spends
one drive's worth of capacity on parity, regardless of how many data
drives are in the set. Generally, the higher the ratio the lower the
performance, especially on writes, and disk rebuilds take far longer
with bigger ratios, leaving a better chance of a double disk failure
during the rebuild.
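Some back-of-the-envelope numbers (a minimal sketch assuming 1TB
drives; real rebuild behaviour depends heavily on the controller and
workload) showing how usable capacity and rebuild exposure scale with
the ratio:

drive_tb = 1.0

for data_drives in (3, 5, 8, 12):
    total_drives = data_drives + 1        # N data + 1 parity
    usable_tb = data_drives * drive_tb    # one drive's worth goes to parity
    # After a failure, every surviving drive must be read in full to
    # rebuild the replacement, and a second failure among them is fatal.
    surviving = total_drives - 1
    read_tb = surviving * drive_tb
    print(f"{data_drives}+1: {usable_tb:.0f} TB usable on {total_drives} drives; "
          f"rebuild reads {read_tb:.0f} TB from {surviving} drives")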

> hardware RAID). With a software RAID is this still achievable?

If the hardware supports it, yes. Some controllers don't handle hot
swap well, especially older ones, and if you yank a drive while the
system is running it could crash the system, reboot the box, or hang
the I/O. But it certainly is possible; just be sure to test it out
before putting it into production.
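One way to exercise the failure/rebuild path before trusting it (a
hedged sketch assuming a Linux software RAID array at /dev/md0 with a
hypothetical member /dev/sdb1, on a test box you can afford to break)
is to mark a member failed with mdadm rather than physically pulling
it:

import subprocess

MD_DEV = "/dev/md0"     # hypothetical test array
MEMBER = "/dev/sdb1"    # hypothetical member to "fail"

def run(*cmd):
    # Echo each command before running it; stop if anything fails.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run("mdadm", "--manage", MD_DEV, "--fail", MEMBER)    # simulate a dead drive
run("mdadm", "--manage", MD_DEV, "--remove", MEMBER)  # drop it from the array
run("mdadm", "--manage", MD_DEV, "--add", MEMBER)     # re-add and rebuild
run("cat", "/proc/mdstat")                            # check rebuild progress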

If it were me I would go for a 3Ware RAID card and do it right. The
only time I might use software RAID these days is for RAID 0, which I
haven't done since probably 2001. I was considering it for some new
web servers, since it doesn't matter there if a disk dies and we lose
the whole box, and performance was the most important thing, but we
ended up going with hardware RAID 1+0 anyway.

nate



