On Dec 4, 2020, at 7:30 AM, Paul Menzel <pmenzel@xxxxxxxxxxxxx> wrote:
>
> Using Debian Sid/unstable with 5.9.11 (5.9.0-4-686-pae), it looks like the last `sudo grub-update` installed modules with corrupted file names. `/boot` is mounted.
>
>> $ findmnt /boot
>> TARGET SOURCE   FSTYPE OPTIONS
>> /boot  /dev/md0 ext4   rw,relatime
>> $ ls -l /boot/grub/i386-pc/
>> total 2085
>> -rw-r--r-- 1 root root  8004 13. Aug 23:00 '915resolution.mod-'$'\205\300''u'$'\023\211''鍓]'$'\206\371\377\211\360\350''f'$'\376\377\377\205\300''ur'$'\203\354\004''V'$'\377''t$'$'\030''j'$'\002''胒'
>> -rw-r--r-- 1 root root 10596 13. Aug 23:00 'acpi.mod-'$'\205\300''u'$'\023\211''鍓]'$'\206\371\377\211\360\350''f'$'\376\377\377\205\300''ur'$'\203\354\004''V'$'\377''t$'$'\030''j'$'\002''胒'
>> […]
>> $ file /boot/grub/i386-pc/zstd.mod-��u^S�鍓\]�����f���ur��^DVt\$^Xj^B胒
>> /boot/grub/i386-pc/zstd.mod-��u�鍓]������f�����ur��V�t$j胒: ELF 32-bit LSB relocatable, Intel 80386, version 1 (SYSV), not stripped
>
> Checking the file system returned no errors.
>
> $ sudo umount /boot
> $ sudo fsck.ext4 /dev/md0
> e2fsck 1.45.6 (20-Mar-2020)
> boot: clean, 331/124928 files, 145680/497856 blocks
>
> This causes GRUB to fail to load the module, and it falls back into rescue mode.
>
> Any idea what might have happened? It is a degraded RAID, and I have been using only one drive for several years, but I never deactivated the array, and `/dev/md0` still shows up.
>
> ```
> $ more /proc/mdstat
> Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
> md0 : active raid1 sdb1[0]
>       497856 blocks [2/1] [U_]
>
> md1 : active raid1 sdb2[0]
>       1953013952 blocks [2/1] [U_]
>
> unused devices: <none>
> ```

Did you try downgrading to the previous kernel to see whether that fixes the problem? If it does, it would be useful to bisect between the old working kernel and the new broken kernel to find the change that introduced this bug.

Cheers,
Andreas
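
For concreteness, here is a minimal sketch of the bisect workflow suggested above, assuming the kernel is built from the stable git tree. The known-good tag (v5.8 below), the boot device `/dev/sdb`, and the use of `grub-install` to reproduce the corrupted module names are assumptions for illustration, not details from the report:

```
# Hedged sketch of a kernel bisect -- substitute the versions that are
# actually known to be good and bad on this machine.
git clone https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
cd linux
git bisect start
git bisect bad v5.9.11   # kernel under which the module names came out corrupted
git bisect good v5.8     # placeholder: last kernel release known to work

# At each step: build and install the candidate kernel, reboot into it,
# repeat the GRUB update that originally wrote the broken files, e.g.
#   sudo grub-install /dev/sdb    # device name is an assumption
# then inspect /boot/grub/i386-pc/ and report the result:
git bisect good    # or: git bisect bad
# ...repeat until git bisect prints the first bad commit, then clean up:
git bisect reset
```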