Re: GRUB warning after replacing disk drive in RAID1

Hi Peter, Reindl,

{ Convention on kernel.org is to reply-to-all, trim unneeded quoted
material, and bottom post or interleave.  Please do so. }

On 02/28/2017 06:15 PM, Peter Sangas wrote:

> cat /proc/mdstat
> Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4]
> [raid10] 

> md0 : active raid1 sdc1[3] sdb1[1] sda1[0]
>       19514368 blocks super 1.2 [3/3] [UUU]

Grub1 needs its boot partitions to use v0.90 or v1.0 superblocks, which sit
at the end of the member devices and leave the filesystem readable from
offset zero.  Grub2 needs the md module in its core image to boot from v1.1
or v1.2 superblocks.  Because the content of a v1.2 array does not start at
the beginning of the member devices, a grub without that module doesn't
connect sd[abc]1 with your /boot mount, therefore delivers 'null', and then
doesn't know how to link its core together.
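
You can confirm which metadata version your members actually carry with
mdadm -- device names below match your mdstat, adjust if yours differ:

  # superblock/metadata version as recorded on a member partition
  mdadm --examine /dev/sda1 | grep -i version

  # or from the assembled array
  mdadm --detail /dev/md0 | grep -i version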

Since this worked before, I would guess your grub was updated and its md
support was left out.  Hopefully someone with more grub experience can
chip in here -- I don't use any bootloader on my servers any more.
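
If that's what happened, re-running grub-install on each disk in the mirror
should pull md support back into the core image.  A sketch, assuming grub2
-- grub-probe normally picks the right modules by itself, but you can force
the one for v1.x metadata:

  # reinstall grub2 on every member disk, forcing the module
  # that understands v1.x md superblocks into the core image
  for d in /dev/sda /dev/sdb /dev/sdc; do
      grub-install --modules="mdraid1x" "$d"
  done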
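Failing that, the workaround is to rebuild the /boot array with v1.0
metadata, which lives at the end of each member so even an md-ignorant grub
can read the filesystem.  A rough sketch only -- re-creating the array
destroys its contents, so back up /boot first and double-check the device
names against your own mdstat:

  # DANGER: wipes the existing array -- back up /boot first
  umount /boot
  mdadm --stop /dev/md0
  mdadm --create /dev/md0 --level=1 --raid-devices=3 --metadata=1.0 \
      /dev/sda1 /dev/sdb1 /dev/sdc1
  mkfs.ext4 /dev/md0    # fresh filesystem; restore /boot from backup
  mount /dev/md0 /boot

After restoring the files, refresh mdadm.conf and the initramfs, then
re-run grub-install on each disk.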

Phil
--
