On Fri, 2007-10-26 at 14:41 -0400, Doug Ledford wrote:
> Actually, after doing some research, here's what I've found:
>
> * When using grub2, there is supposedly already support for raid/lvm
>   devices.  However, I do not know if this includes version 1.0, 1.1,
>   or 1.2 superblocks.  I intend to find that out today.

It does not include support for any version 1 superblocks.  It's noted
in the code that it should, but it doesn't yet.

However, the interesting bit is that they rearchitected grub so that
any read from a device during boot is filtered through the stack that
provides the device.  So, when you tell grub2 to set root=md0, all
reads from md0 are filtered through the raid module, and the raid
module in turn issues its reads through the IO module, which does the
actual int 13 call.  This allows the raid module to read superblocks,
detect the raid level and layout, and actually work on raid0/1/5
devices (at the moment).  It also means that all the calls the ext2
module makes when it reads from the md device are filtered through the
md module, so it would be simple for the md module to apply an offset
into the real device to get past a version 1.1/1.2 superblock.

In terms of resilience, the raid module actually tries to use the raid
itself during any failure.  On raid1 devices, if it gets a read failure
on any block, it moves to the next device in the raid1 array and
retries the read there.  So, if your normal boot disk suffers a sector
failure in your actual kernel image but the rest of the array is fine,
grub2 should still be able to boot from the copy of the kernel image on
the next raid device.  Similarly, on raid5 it will attempt to recover
from a block read failure by using the parity to regenerate the missing
data, unless the array is already in degraded mode, at which point it
will bail on any read failure.

The lvm module maps extents to physical volumes and lets you keep your
bootable files in an lvm logical volume.  In that case you set
root=logical-volume-name-as-it-appears-in-/dev/mapper, and the lvm
module figures out which physical volumes contain that logical volume
and where its extents are mapped, and goes from there.

I should note that both the lvm code and the raid code are simplistic
at the moment.  For example, the raid5 mapping only supports the
default raid5 layout; if you use any other layout, game over.  Getting
it to work with version 1.1 or 1.2 superblocks probably wouldn't be
that hard, but getting it to handle all the relevant setups properly
would require a reasonable amount of coding.
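To make the layering concrete, here's a rough sketch in C of how that
read path could look, and why skipping a v1.1/1.2 superblock is just an
offset.  All of the names below are invented for illustration; they are
not grub2's real interfaces:

    /* A minimal sketch of a layered read path -- hypothetical types,
     * not grub2's actual API. */
    struct device {
        int (*read)(struct device *dev, unsigned long long sector,
                    unsigned count, void *buf);
        void *private;
    };

    struct md_private {
        struct device *member;           /* underlying disk; its read()
                                          * ends in the int 13 call */
        unsigned long long data_offset;  /* sectors to skip: nonzero for
                                          * v1.1/1.2 superblocks, zero
                                          * where the superblock sits at
                                          * the end of the device */
    };

    /* Every read the filesystem module issues against md0 lands here,
     * so hiding a leading superblock is nothing more than adding
     * data_offset before handing the read down the stack. */
    static int md_read(struct device *dev, unsigned long long sector,
                       unsigned count, void *buf)
    {
        struct md_private *md = dev->private;
        return md->member->read(md->member, sector + md->data_offset,
                                count, buf);
    }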
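The failure handling works out the same way.  Reusing the made-up
struct from the previous sketch, the raid1 case is just "try the next
mirror", and the raid5 case is an XOR reconstruction that only succeeds
while the bad block is the sole failure:

    #include <string.h>   /* memset */

    /* raid1 resilience: walk the mirrors until one read succeeds. */
    static int raid1_read(struct device **member, int nmembers,
                          unsigned long long sector, unsigned count,
                          void *buf)
    {
        for (int i = 0; i < nmembers; i++)
            if (member[i]->read(member[i], sector, count, buf) == 0)
                return 0;
        return -1;   /* every mirror failed */
    }

    /* raid5 resilience: rebuild one bad chunk by XORing the chunks of
     * all other members (data plus parity).  If the array was already
     * degraded, the first read error here means we bail, as grub2's
     * raid module does. */
    static int raid5_recover(struct device **member, int nmembers,
                             int failed, unsigned long long sector,
                             unsigned chunk_bytes, unsigned char *buf)
    {
        unsigned char tmp[chunk_bytes];   /* C99 variable-length array */
        memset(buf, 0, chunk_bytes);
        for (int i = 0; i < nmembers; i++) {
            if (i == failed)
                continue;
            if (member[i]->read(member[i], sector,
                                chunk_bytes / 512, tmp))
                return -1;   /* a second failure: give up */
            for (unsigned j = 0; j < chunk_bytes; j++)
                buf[j] ^= tmp[j];
        }
        return 0;
    }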
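And the lvm mapping is the same idea one layer up: parse the on-disk
metadata into a table of segments, then translate each read from a
logical extent to (physical volume, physical extent) before calling
down.  Again purely illustrative names, and it ignores details like the
metadata area at the start of a real PV:

    /* One linear segment of a logical volume: logical extents
     * [le_start, le_start + len) live on pv starting at physical
     * extent pe_start. */
    struct lv_segment {
        unsigned long long le_start;   /* first logical extent */
        unsigned long long pe_start;   /* first physical extent on pv */
        unsigned long long len;        /* length in extents */
        struct device *pv;             /* the physical volume */
    };

    /* Assumes the read does not cross a segment boundary. */
    static int lv_read(struct lv_segment *seg, int nsegs,
                       unsigned extent_sectors,
                       unsigned long long sector, unsigned count,
                       void *buf)
    {
        unsigned long long le = sector / extent_sectors;
        for (int i = 0; i < nsegs; i++) {
            if (le < seg[i].le_start ||
                le >= seg[i].le_start + seg[i].len)
                continue;
            /* offset of this read within the segment */
            unsigned long long off =
                sector - seg[i].le_start * extent_sectors;
            return seg[i].pv->read(seg[i].pv,
                                   seg[i].pe_start * extent_sectors + off,
                                   count, buf);
        }
        return -1;   /* extent not mapped */
    }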
--
Doug Ledford <dledford@xxxxxxxxxx>
GPG KeyID: CFBFF194
http://people.redhat.com/dledford

Infiniband specific RPMs available at
http://people.redhat.com/dledford/Infiniband