Re: UEFI and mdadm questions.

On Oct 6, 2014, at 11:58 AM, Phil Turmel <philip@xxxxxxxxxx> wrote:

> On 10/05/2014 04:22 PM, Chris Murphy wrote:
>> 
>> On Oct 5, 2014, at 2:18 PM, Phil Turmel <philip@xxxxxxxxxx> wrote:
> 
> [trim /]
> 
>>> If your BIOS can be configured to try multiple boot images, it
>>> should be possible to have true raid fallback without using
>>> motherboard or hardware raid.  (Set up md raid1 with metadata v1.0
>>> of multiple copies of the EFI FAT partition.)  I've been meaning to
>>> try this….
>> 
>> Problems with this: a.) new Windows 8 hardware might require you boot
>> Windows to get to the feature enabling the firmware setup, because on
>> such hardware USB isn't initialized by default.
>> http://mjg59.dreamwidth.org/24869.html
>> 
>> I don't know why we don't have free software to initiate this, but I
>> haven't come across it so far.
> 
> Good to know, but totally immaterial to the boot sequence I'm
> recommending.  Boot linux off of an EFI FAT and let linux initialize the
> USB hardware in its own good time.

Sure, but it means you can't use the firmware's boot manager to choose anything else. Any other kernel (or copy of one) only gets tried in the course of the firmware's fallback mechanisms.
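For reference, the arrangement Phil originally described could be sketched like this (device names are examples only, not a tested recipe). The key point is that v1.0 metadata puts the md superblock at the end of each member, so the FAT filesystem starts at the usual offset and firmware can read either member directly as a plain ESP:

```shell
# Sketch only -- /dev/sda1 and /dev/sdb1 are example ESP-sized partitions.
# --metadata=1.0 places the md superblock at the END of each member, so
# the FAT filesystem begins at sector 0 and each member looks like an
# ordinary EFI System partition to the firmware.
mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 \
    /dev/sda1 /dev/sdb1

# Format the array once; both members then carry identical FAT images.
mkfs.vfat -F 32 /dev/md0

# From then on, mount the array (never the members) for kernel updates.
mount /dev/md0 /boot/efi
```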

> 
> Matthew Garrett's post is really all about how to get linux into the
> Win8 box in the first place.  Once there, manipulate the boot sequence
> as you please.
> 
>> b.) There's no guarantee the firmware won't write to the ESP, thus
>> rendering the individual md raid members out of sync and without
>> their metadata being updated, i.e. in effect, the logical device they
>> become later, is corrupt. Separately they aren't corrupt, merely out
>> of sync, but you don't have an obvious way of knowing which one.
> 
> This is a very good point.  In fact, I withdraw my recommendation to
> raid these partitions.  Simply have one on every disk the BIOS could
> possibly boot from, and place an EFI bootable kernel in each one (with
> embedded initramfs).

Well, to me that seems more esoteric than just having kernel+initramfs on a conventional md raid1 /boot volume, with a static (never upgraded or modified) GRUB2 or syslinux that points to a modifiable configuration file (basically loads a 2nd config file) in the usual location, also on /boot. That way kernel upgrades are normal, the user can regress to older kernels on demand should it be necessary, and they still get resilient boot.
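A minimal sketch of the static-GRUB idea (the UUID and paths below are made-up examples, and depending on metadata version GRUB may also need its mdraid module loaded): the grub.cfg written to the ESP never changes, and simply chains to the distro-maintained config on the md raid1 /boot:

```
# Static /boot/efi/EFI/BOOT/grub.cfg -- written once, never modified.
# Locate the md raid1 /boot by filesystem UUID (example value) and hand
# control to the real, regularly-updated config there.
insmod part_gpt
insmod mdraid1x
search --no-floppy --fs-uuid --set=root 1234abcd-0000-example
configfile /grub/grub.cfg
```

Kernel upgrades then only ever touch /boot on the array; the ESP contents stay frozen.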

> 
>> c.) strictly speaking any partition with mdadm metadata should have
>> the linux raid partition type GUID set; not the EFI System partition
>> type GUID. Those GUIDs are mutually exclusive.
> 
> The former is not true at all--mdadm does not care *at all* what
> partition types are set.  Grub might care, but it's moot if you don't
> use Grub.  :-)

The partition type isn't for mdadm; it's for other things that might otherwise modify the partition if it's misidentified as an EFI System partition. And setting the partition type GUID to EFI System partition would be a misidentification: this example partition is first an md member, and only after assembly is it an EFI System partition. So I still consider the two types mutually exclusive.
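With sgdisk the distinction is just the type code (disk and partition numbers here are examples):

```shell
# sgdisk short codes: EF00 = EFI System partition,
#                     FD00 = Linux RAID (what an md member should carry).
sgdisk --typecode=1:FD00 /dev/sda   # mark partition 1 as a Linux RAID member
sgdisk --print /dev/sda             # verify the resulting partition table
```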

> 
>> This is why I'm still not a fan of using mdadm to raid1 an EFI System
>> partition.
> 
> One further point:  the failure decision tree is nicer if you boot
> directly into a kernel.
> 
> 1) Bios locates and attempts to boot from 1st configured kernel image
> 2a) Corrupted image or other disk error blocks complete load of kernel
> image--bios moves to next EFI choice (possibly on a different disk).
> 2b) Successful EFI kernel load, boot encounters missing/corrupt root
> FS--kernel drops to initramfs rescue shell
> 
> versus:
> 
> 1) Bios locates and attempts to boot from 1st configured grub image
> 2a) Corrupted image or other disk error blocks complete load of
> grub--bios moves to next EFI choice (possibly on a different disk).
> 2b) Successful EFI grub load, grub encounters corrupt config or grub
> module--drop to grub shell
> 2c) Successful EFI grub load, kernel & initramfs load by grub, boot
> encounters missing/corrupt root FS--kernel drops to initramfs rescue shell
> 
> I haven't had time to set it up yet, but the clear reduction in points
> of failure is compelling.  Faster boot is just icing on the cake.

But we have a lot of experience with the latter, since that's how it's always been on BIOS systems. In a way the UEFI case is more complicated: BIOS itself never had a meaningful (or complicated) fallback mechanism, so fallback was entirely up to the boot manager, which we (FOSS) control. With UEFI we don't control this, so exactly how the firmware behaves in failure cases actually needs to be tested on a firmware-by-firmware basis.

The other thing is that this arrangement isn't supported by any distro currently. So setting it up and maintaining it is pretty cumbersome.
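For anyone experimenting with it anyway, the per-disk fallback entries would be registered roughly like this (disks, labels, and loader paths are made-up examples, and flag behavior can vary with efibootmgr version). The firmware walks BootOrder, so a dead first disk falls through to the entry on the second:

```shell
# One EFISTUB kernel entry per disk; firmware falls through BootOrder
# if the first entry's disk is missing or unreadable.
efibootmgr --create --disk /dev/sda --part 1 \
    --label "Linux (disk A)" --loader '\EFI\linux\vmlinuz.efi' \
    --unicode 'root=/dev/md0 ro'
efibootmgr --create --disk /dev/sdb --part 1 \
    --label "Linux (disk B)" --loader '\EFI\linux\vmlinuz.efi' \
    --unicode 'root=/dev/md0 ro'

# Inspect the resulting entries and BootOrder:
efibootmgr -v
```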


Chris Murphy--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



