Re: CentOS 7 grub.cfg missing on new install
----- Original Message -----
From: "Gordon Messmer" <gordon.messmer@xxxxxxxxx>
To: "CentOS mailing list" <centos@xxxxxxxxxx>
Cc: "Jeff Boyce" <jboyce@xxxxxxxxxxxxxxx>
Sent: Thursday, December 11, 2014 9:45 AM
Subject: Re: CentOS 7 grub.cfg missing on new install
On 12/10/2014 10:13 AM, Jeff Boyce wrote:
The short story is that I got my new install completed with the
partitioning I wanted, using software RAID, but after a reboot I ended
up at a grub prompt and do not appear to have a grub.cfg file.
...
I initially created the sda[1,2] and sdb[1,2] partitions via GParted
leaving the remaining space unpartitioned.
I'm pretty sure that's not necessary. I've been able to simply change the
device type to RAID in the installer and get mirrored partitions. If you
do your setup entirely in Anaconda, your partitions should all end up
fine.
It may not be absolutely necessary, but it appears to be the only way
to achieve my objective. The /boot/efi has to be on a separate partition,
and it cannot be on a RAID device. The /boot can be on LVM according to
the documentation I have seen, but Anaconda will give you an error and
refuse to proceed if it is. Someone pointed out to me a few days ago that this
is by design in RH and CentOS. And within the installer I could not find a
way to put /boot on a non-LVM RAID1 while the rest of my drive is set up
with LVM on RAID1. So that is when I went to GParted to manually set up the
/boot/efi and /boot partitions before running the installer.
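For reference, what I did in GParted is roughly equivalent to the following
parted commands; the sizes here are only placeholders, not what I actually
used, and each new partition still needs a mkfs.vfat / mkfs.xfs afterwards:
root# parted -s /dev/sda mklabel gpt
root# parted -s /dev/sda mkpart ESP fat32 1MiB 513MiB
root# parted -s /dev/sda set 1 boot on      (on GPT this marks the EFI System Partition)
root# parted -s /dev/sda mkpart boot xfs 513MiB 1537MiB
(repeat for /dev/sdb; the rest of each disk is left unpartitioned for Anaconda)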
At this point I needed to copy my /boot/efi and /boot partitions from
sda[1,2] to sdb[1,2] so that the system would boot from either drive.
I issued the following sgdisk commands:
root# sgdisk -R /dev/sdb1 /dev/sda1
root# sgdisk -R /dev/sdb2 /dev/sda2
root# sgdisk -G /dev/sdb1
root# sgdisk -G /dev/sdb2
sgdisk manipulates GPT, so you run it on the disk, not on individual
partitions. What you've done simply scrambled information in sdb1 and
sdb2.
The correct way to run it would be
# sgdisk -R /dev/sdb /dev/sda
# sgdisk -G /dev/sdb
Point taken; I am going back to read the sgdisk documentation again. I had
assumed that this would be a more technically accurate way to copy sda[1,2]
to sdb[1,2] than using dd, as a lot of how-tos suggest.
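If I am reading the how-tos right, the whole-disk version of what I was
attempting would be something like the following (an untested sketch; the
partition-content copy step is only illustrative):
root# sgdisk -R /dev/sdb /dev/sda      (replicate sda's partition table onto sdb)
root# sgdisk -G /dev/sdb               (give the copy new random GUIDs)
root# dd if=/dev/sda1 of=/dev/sdb1 bs=4M      (then clone the ESP contents partition-to-partition)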
However, you would only do that if sdb were completely unpartitioned. As
you had already made at least one partition on sdb a member of a RAID1
set, you should not do either of those things.
The entire premise of what you're attempting is flawed. Making a
partition into a RAID member is destructive. mdadm writes its metadata
inside of the member partition. The only safe way to convert a filesystem
is to back up its contents, create the RAID set, format the RAID volume,
and restore the backup. Especially with UEFI, there are a variety of ways
that can fail. Just set up the RAID sets in the installer.
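(A minimal sketch of that back up / create / format / restore sequence, with
hypothetical device names and mount points, assuming the partition is
unmounted before the array is created:)
root# cp -a /mnt/data /root/data-backup      (back up the filesystem contents)
root# umount /mnt/data
root# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdX2 /dev/sdY2
root# mkfs.xfs /dev/md1                      (format the new RAID volume)
root# mount /dev/md1 /mnt/data
root# cp -a /root/data-backup/. /mnt/data/   (restore the backup)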
I need some additional explanation of what you are trying to say here, as I
don't understand it. My objective is to have the following layout for my
two 3TB disks.
sda1 /boot/efi
sda2 /boot
sda3 RAID1 with sdb3
sdb1 /boot/efi
sdb2 /boot
sdb3 RAID1 with sda3
I just finished re-installing using my GParted pre-partitioned layout, and I
have a bootable system with sda1 and sda2 mounted and md127 created from
sda3 and sdb3. My array is actively resyncing, and I have successfully
rebooted a couple of times without a problem. My goal now is to make sdb
bootable for the case when/if sda fails. This is the step that I now
believe I failed on previously, and it likely has to do with issuing the
sgdisk command against a partition rather than a device. But even so, I don't
understand why it would have messed up my first device, which had been
bootable.
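Now that the partitions on sdb already exist, the sequence I am thinking of
trying looks roughly like this (the shimx64.efi loader path is an assumption
on my part based on the default CentOS 7 EFI layout, and I have not tested
any of it yet; something similar would be needed to copy /boot onto sdb2):
root# mkfs.vfat -F 32 /dev/sdb1              (create a fresh ESP filesystem on sdb1)
root# mount /dev/sdb1 /mnt
root# cp -a /boot/efi/. /mnt/                (copy the ESP contents over)
root# umount /mnt
root# efibootmgr -c -d /dev/sdb -p 1 -L "CentOS (sdb)" -l '\EFI\centos\shimx64.efi'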
I then installed GRUB2 on /dev/sdb1 using the following command:
root# grub2-install /dev/sdb1
Results: Installing for x86_64-efi platform. Installation finished.
No error reported.
Again, you can do that, but it's not what you wanted to do. GRUB2 is
normally installed on the drive itself, unless there's a chain loader that
will load it from the partition where you've installed it. You wanted to:
# grub2-install /dev/sdb
Yes, I am beginning to think this is correct, and, as mentioned above, I am
going back to re-read the sgdisk documentation.
I rebooted the system now, only to be confronted with a GRUB prompt.
I'm guessing that you also constructed RAID1 volumes before rebooting,
since you probably wouldn't install GRUB2 until you did so, and doing so
would explain why GRUB can't find its configuration file (the filesystem
has been damaged), and why GRUB shows "no known filesystem detected" on
the first partition of hd1.
If so, that's expected. You can't convert a partition in-place.
Looking through the directories, I see that there is no grub.cfg file.
It would normally be in the first partition, which GRUB cannot read on
your system.
So, following the guidance I had, I issued the following commands in grub
to boot the system.
grub# linux /vmlinuz-3.10.0-123.el7.x86_64 root=/dev/sda2 ro
grub# initrd /initramfs-3.10.0-123.el7.x86_64.img
grub# boot
Unfortunately the system hung while booting, with the following information
in the journalctl output:
# journalctl
Not switching root: /sysroot does not seem to be an OS tree.
/etc/os-release is missing.
On your system, /dev/sda2 is "/boot" not the root filesystem. Your
"root=" arg should refer to your root volume, which should be something
like "root=/dev/mapper/vg_jab-hostroot". dracut may also need additional
args to initialize LVM2 volumes correctly, such as
"rd.lvm.lv=vg_jab/hostroot". If you had encrypted your filesystems, it
would also need the uuid of the LUKS volume.
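Put together, a manual boot from the grub prompt would look roughly like the
following; the volume group and logical volume names just follow the example
above and may differ on the actual system, and (hd0,gpt2) assumes /boot is
the second partition of the first disk:
grub# set root=(hd0,gpt2)
grub# linux /vmlinuz-3.10.0-123.el7.x86_64 root=/dev/mapper/vg_jab-hostroot ro rd.lvm.lv=vg_jab/hostroot
grub# initrd /initramfs-3.10.0-123.el7.x86_64.img
grub# boot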
_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos