Re: Device Mapper being derailed in tboot launch

On Tue, Jun 07, 2022 at 08:15:16AM -0400, Tony Camuso wrote:
> On 6/7/2022 5:57 AM, Bryn M. Reeves wrote:
> > On Mon, Jun 06, 2022 at 11:43:58AM -0400, Tony Camuso wrote:
> > > Successful bootlog snippet:
> > > 
> > > [    3.843911] sd 5:0:0:0: [sda] Attached SCSI disk
> > > [    3.848370] sd 6:0:0:0: [sdb] Attached SCSI disk
> > > [    3.925639] md126: detected capacity change from 0 to 1900382519296
> > > [    3.946307]  md126: p1 p2 p3
> > 
> > Are the MD array partitions being used as the PVs for the rhel_lenovo
> > volume group? It's the major difference in the two snippets other than
> > timing, and would account for why the volume group cannot be discovered
> > in the tboot case.
> 
> It would appear from the respective grub command lines that they are.
> See below.

OK great - that explains why the LVM devices are timing out in the tboot
case. 
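
A quick way to double-check that mapping from the running system, assuming
the usual lvm2/util-linux tools are available, is something like:

  # PV -> VG mapping; the md126 partitions should show up as the PVs
  pvs -o pv_name,vg_name

  # partition/holder layout of the array itself
  lsblk /dev/md126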

> ======================================================================
> Here is the kernel command line in grub for the normal boot (succeeds)
> ----------------------------------------------------------------------
> 
> set gfx_payload=keep
> insmod gzio
> linux ($root)/vmlinuz-4.18.0-348.el8.x86_64 root=/dev/mapper/rhel_lenovo--st25\
> 0v2--02-root ro crashkernel=auto resume=/dev/mapper/rhel_lenovo--st250v2--02-s\
> wap rd.md.uuid=8061c4cf:06de8a59:a9eefb7e:3edb011a rd.md.uuid=549c2ba4:1e03463\
> b:d429e75b:398c67a3 rd.lvm.lv=rhel_lenovo-st250v2-02/root rd.lvm.lv=rhel_lenov\
> o-st250v2-02/swap console=ttyS0,115200N81
> initrd  ($root)/initramfs-4.18.0-348.el8.x86_64.img $tuned_initrd
> 
> =============================================================
> And here is the kernel command line in grub for tboot (fails)
> -------------------------------------------------------------
> 
>         echo        'Loading tboot 1.10.5 ...'
>         multiboot2        /tboot.gz logging=serial,memory,vga
>         echo        'Loading Linux 4.18.0-348.el8.x86_64 ...'
>         module2 /vmlinuz-4.18.0-348.el8.x86_64 root=/dev/mapper/rhel_lenovo--s\
> t250v2--02-root ro crashkernel=auto resume=/dev/mapper/rhel_lenovo--st250v2--0\
> 2-swap rd.md.uuid=8061c4cf:06de8a59:a9eefb7e:3edb011a rd.md.uuid=549c2ba4:1e03\
> 463b:d429e75b:398c67a3 rd.lvm.lv=rhel_lenovo-st250v2-02/root rd.lvm.lv=rhel_le\
> novo-st250v2-02/swap console=ttyS0,115200N81 intel_iommu=on noefi
>         echo        'Loading initial ramdisk ...'
>         module2 /initramfs-4.18.0-348.el8.x86_64.img
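
Both entries carry the same rd.md.uuid= and rd.lvm.lv= arguments, which is
what tells dracut which arrays and logical volumes to wait for. A quick
sanity check that those UUIDs still match the on-disk metadata (just a
sketch, run from the booted system) would be:

  # arrays dracut was asked to assemble
  grep -o 'rd\.md\.uuid=[^ ]*' /proc/cmdline

  # arrays the on-disk metadata actually advertises
  mdadm --examine --scan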

There are some minor differences here, particularly these two which are
only present in the tboot entry:

  intel_iommu=on
  noefi

The first doesn't seem likely to be involved - if forcing the IOMMU on
affected this at all I would expect it to break the SCSI driver and
prevent the disks from being discovered, but we see the sd log messages
in the tboot case, so that isn't happening.

The noefi option is a bit more interesting - a lot of modern systems now
ship the motherboard RAID configuration tools as an EFI application, and
I wonder whether forcing EFI off with noefi is somehow breaking discovery
of the imsm RAID set. The full dmesg for the two cases might give some
more hints about this.
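
In the meantime, a few quick checks from the dracut emergency shell in the
failing tboot case might help narrow it down (just a sketch):

  # did the imsm container and the md126 member array get assembled at all?
  cat /proc/mdstat
  mdadm --detail --scan

  # how much of the EFI interface is left once noefi is in effect
  ls /sys/firmware/efi

  # any md/imsm chatter in the kernel log
  dmesg | grep -i -e imsm -e md126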

My other thought was to check whether the initramfs image for the tboot
case was missing MD support, but from the above it looks as though the
two entries are using the same image (one has an explicit ($root) prefix,
but grub resolves bare paths relative to $root anyway).
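
If you do want to rule that out anyway, something like this (assuming
lsinitrd from dracut is installed; the image path may differ on your
layout) should show whether the mdraid pieces are present in the image:

  # look for the mdraid dracut module and the mdadm binary
  lsinitrd /boot/initramfs-4.18.0-348.el8.x86_64.img | grep -E 'mdraid|mdadm'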

Regards,
Bryn.

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://listman.redhat.com/mailman/listinfo/dm-devel



