Re: Bootable Raid-1

On Wed, Feb 2, 2011 at 4:53 AM, Leslie Rhorer <lrhorer@xxxxxxxxxxx> wrote:
>  I recall reading very recently (it might have even been today) that Linux RAID Autodetect partitions can cause problems.  I have mine set to simply "Linux".

I haven't come across this at all, and have never had any problem
booting even older Linux kernels from RAID1 arrays using grub2.

>> >> I am using grub as bootloader.

You didn't specify grub2, and if you're using grub legacy, that's a
real pain: not so much to get working as to keep working if you have
to reconfigure later on. I recommend going with grub2; IMO it's very
much ready for production use now. Keep in mind it's "just a
bootloader", and you can always use the Super Grub2 live CD to get
things going in a pinch.

I've found that many OS installation routines mess up the
partitioning/RAID creation, so I'll often set things up ahead of time
with a sysadmin Live CD (see below) so the block devices I want to use
are all available before I start the target OS installation itself.

You're not actually partitioning the mdX block device, are you? I've
always set up my component partitions, created the array from those
partitions, and then created the filesystem on the mdX device.
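
As a rough sketch (device names like /dev/sda1, /dev/sdb1 and md0 are
just examples, not a prescription for your hardware), the live-CD
pre-setup looks something like this:

    # Component partitions were created beforehand with fdisk/parted
    # on each drive; the array is built from those partitions:
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
          /dev/sda1 /dev/sdb1

    # The filesystem goes on the md device itself, not on a
    # partition inside it:
    mkfs.ext4 /dev/md0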

All of my systems now boot via grub2 into RAID1s. I usually partition
so that every drive is exactly the same: the first primary is used as
the boot RAID1, and the RAID5/6 component partitions used for the
main storage LVs are often logical partitions inside an extended
partition, for flexibility. This lets me have multiple
"rescue/recovery/maintenance" OSs (System Rescue, Grml, Ubuntu for
grub2) installed right on the HDD and available to boot; lately I've
been able to get grub2 to boot directly from on-disk ISO images
rather than having to do any actual install for these. Another
advantage is that I can boot from any component drive and get the
same config/menu, so I don't have to worry about drive order when
swapping out hardware, or even when moving an array to another box.
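
For the on-disk ISO trick, grub2's loopback device does the work. A
minimal sketch, assuming a hypothetical /isos/rescue.iso on the boot
array; the kernel path inside the ISO and the findiso-style parameter
vary by distro, so check what your rescue image documents:

    menuentry "Rescue ISO (example)" {
        # Both paths below are illustrative placeholders.
        loopback loop /isos/rescue.iso
        linux (loop)/boot/vmlinuz findiso=/isos/rescue.iso
        initrd (loop)/boot/initrd.img
    }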

Since most current "production-level" server OSs still use legacy
grub, I let the installer go ahead and do whatever it wants to set up
booting from its RAID1 array; when it's all done, if necessary, I
restore grub2 to the MBR(s) and adapt the grub1/lilo/whatever
configuration from the target OS into my grub2 configuration. Lately
I've been dedicating a partition/array to grub2 itself, so I'm not
dependent on a particular OS for maintenance.
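
Restoring grub2 to every component drive is just grub-install run
once per disk (again, the drive names are examples):

    # Put grub2 in the MBR of each array member so any drive can boot:
    for d in /dev/sda /dev/sdb; do
        grub-install "$d"
    done

    # Regenerate the config (update-grub on Debian/Ubuntu is a
    # wrapper around this):
    grub-mkconfig -o /boot/grub/grub.cfg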

I used to chainload the partition boot sector to bring up the grub
legacy (or lilo or whatever) menu, but found it better to boot the
production OS directly from grub2 using regular
linux-kernel/initrd-image statements. The only downside is that when
the production OS gets a kernel upgrade, I have to remember to update
the grub2 config myself, again adapting the new lines generated by
the upgrade process.
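
An adapted direct-boot entry might look like the following sketch;
the kernel/initrd file names are placeholders to be copied from the
lines the target OS's upgrade process generates, and the mdraid
module name depends on your grub2 version and metadata format:

    menuentry "Production OS (direct boot)" {
        insmod mdraid1x   # or mdraid09 / raid on older grub2 builds
        set root=(md/0)   # grub2 assembles and reads the array itself
        linux /vmlinuz-2.6.32-5-amd64 root=/dev/md0 ro
        initrd /initrd.img-2.6.32-5-amd64
    }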

I hope that helps, let me know if you need more detail, learning links etc.