Centos 4.2 and Boot/Root on RAID?




[ I really dislike these discussions because they are often
driven by opinions based on limited viewpoints.  I've used a
lot of software and hardware approaches across many different
platforms and many different systems, and what I repeatedly
see are absolutes applied where they do not hold for many
vendors. ]

Benjamin Smith <lists@xxxxxxxxxxxxxxxxxx> wrote:
> I've not yet tried Software RAID 1 with Centos 4.x but I've
> done so with Fedora Core 1 / X86-32 so I'd assume that my
> comments would apply. 

Just be wary of changes in MD and/or LVM/LVM2.

> I tend to prefer software RAID simply because then I'm not
> locked to a specific vendor/controller.

With RAID-1 (as opposed to block-striped RAID-0 or 10),
several vendors don't "lock you in."  Not only can you
typically read the disk label on the "raw" disk, but there is
also support for reading volumes created by other vendors'
controllers.

In fact, this is how LVM2+DM (DeviceMapper) is adding support
for FRAID (firmware/"fake" RAID) in kernel 2.6.
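
For example, the dmraid tool rides on that same device-mapper
layer.  A sketch (which metadata formats it detects depends
entirely on your chipset):

    dmraid -r     # list disks carrying recognized FRAID metadata
    dmraid -ay    # activate all discovered RAID sets via DM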

> If a hardware failure occurs that takes out the controller
> but leaves at least one of the HDDs ok, I can take one
> software RAID HDD, stick it into another controller, and
> have a working system in very short order.

So can I, and I have done so when I didn't have a 3Ware
Escalade or equivalent FRAID card around.

> Hardware RAID frequently does not have this advantage. 

That is an absolutely _false_ technical statement with
regard to _several_ vendors.  Please stop "blanket covering"
all "Hardware RAID" with such absolutes.

> When I've set up RAID, I did so with the RH installer, and
> have always picked  RAID1.

I'm a huge fan of RAID-1 and RAID-10.

> (RAID5 is a joke for SW RAID)

Agreed.  Every small RAID-5 write is a read-modify-write:
read the old data and old parity, compute, then write both
back.  The newer Opteron systems help as long as they have an
excellent I/O design, but that still loads much of the
interconnect with I/O operations just for the writes (let
alone during rebuilds) -- load that could be servicing data
instead.

> I've set up a number of RAID installs with "boot/root" and
> extensions using the Software RAID howto. (google it) 

And I have as well.  Unfortunately, the main concern is
headless/remote recovery when a disk fails.  The issue is
installing the MBR and bootstrap on every disk, so the system
can boot from another device while the BIOS still sees the
original, now-failed, disk.
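
For reference, here's roughly what that bootstrap install
looks like with the GRUB 0.9x shell CentOS 4 ships (a sketch;
the device names are assumptions for illustration):

    grub
    grub> device (hd0) /dev/sdb   # treat 2nd disk as BIOS disk 0
    grub> root (hd0,0)            # where its /boot partition lives
    grub> setup (hd0)             # write stage1 into its MBR
    grub> quit

That makes the second disk bootable on its own, but only if
the BIOS actually hands control to it, which is exactly the
problem.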

Until the LVM2+DM work supports more FRAID chips/cards to
overcome the BIOS mapping issue (not likely until the FRAID
vendors recognize and support the DM work), I still prefer a
$100 3Ware Escalade.

> Experimentally, I've set up a RAID array, removed one
> drive, booted, shutdown, and then replaced it with the
> other.

As have I, on non-x86/non-Linux architectures as well as
Linux.  But if you have a headless/remote system, and the
first drive fails, that doesn't solve the issue of the BIOS
mapping.
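
If you want to repeat that experiment without pulling cables,
md can fail a member in software.  A sketch (device names are
assumptions, and note it exercises the array, not the BIOS
boot path):

    mdadm /dev/md0 --fail /dev/sdb1     # mark the member faulty
    mdadm /dev/md0 --remove /dev/sdb1   # drop it from the array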

> Both drives booted fine, so there doesn't appear to be
> any particular issue with grub.

As long as you have physical access to the system.

> When done, I had to resync the drives (again, see the
> Software RAID howto) 

I prefer autonomous operation.  It's worth $100 IMHO.
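
For reference, the manual resync the HOWTO walks you through
boils down to something like this (a sketch; device names are
assumptions):

    sfdisk -d /dev/sda | sfdisk /dev/sdb   # clone partition table
    mdadm /dev/md0 --add /dev/sdb1         # kick off the rebuild
    cat /proc/mdstat                       # watch resync progress

A hardware card does all of that on its own the moment you
swap the drive.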

> The only time I ran into trouble is that when you set up a
> RAID array, you have to have all the partitions installed
> on the machine at setup time.

_Not_ true with even software RAID!

If you aren't using LVM, then yes, you have to pre-partition.
But even then, you can define new MD slices.
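
E.g., a sketch of turning two spare partitions into a new
RAID-1 MD slice on a live box (names are assumptions for
illustration):

    mdadm --create /dev/md2 --level=1 --raid-devices=2 \
        /dev/sda5 /dev/sdb5
    mkfs.ext3 /dev/md2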

But if you are using LVM/LVM2 (whether LVM/LVM2 sits atop an
MD setup, or you create MD slices in LVM/LVM2 extents), you
can dynamically create slices, filesystems, etc., without
bringing down the box.
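
E.g., a sketch of carving out a new filesystem on a running
system (the volume group and mount point names are
assumptions):

    lvcreate -L 10G -n data VolGroup00
    mkfs.ext3 /dev/VolGroup00/data
    mkdir -p /srv/data
    mount /dev/VolGroup00/data /srv/data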

> It seems you can't add active partitions after the fact.

I think you're mixing up the difficulty of "resizing" MD
slices with adding "active" partitions.  Those are
limitations of the legacy BIOS/DOS disk label more than of
Linux MD, and LVM/LVM2 solves them nicely.

[ Just as LDM disk labels solve this for Windows NT5+ (2000+) ]
 
> Other than that, in 5 cases, it's been basically perfect
> for me, and I plan to deploy Centos 4.x/Software
> RAID/Boot-root again sometime next month. 

As have I.  But at the same time, I find that putting in a
$100 3Ware card has saved my butt.

Like the time the first disk failed 1,000 miles away, and the
BIOS was still mapping the failed primary disk, which it
could not boot from.

Since then, I have refused to put in a co-located box without
a 3Ware Escalade 700x-2 or 800x-2 card.  The system has to be
able to boot without local modification.


-- 
Bryan J. Smith                | Sent from Yahoo Mail
mailto:b.j.smith@xxxxxxxx     |  (please excuse any
http://thebs413.blogspot.com/ |   missing headers)
