On 18/07/2023 10:25, Yu Kuai wrote:
Hi,
On 2023/07/14 4:58, Leslie Rhorer wrote:
I have a corrupted bootable RAID 1 array on a pair of SDD drives,
and I fear I need some assistance. Actually, there are four
partitions on each drive, and two RAID 1 arrays were assembled, each
from one partition on each drive. When working properly, the second
pair of partitions was mounted as / and the first pair as /boot. The
OS is
Debian Buster. When I attempt to boot the system, it goes directly to
the GRUB prompt.
I pulled the drives and attached them to an active system. Fdisk
reports both partition tables to be intact, with partition type IDs of
fd, 83, ef, and 82, respectively, and an MBR magic of aa55. When I try
to assemble any array, mdadm says the partitions exist but are not md
arrays.
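(For the archives, a typical diagnostic sequence on a rescue system looks like the sketch below; /dev/sda1 and /dev/sdb1 are placeholder device names, not the poster's actual partitions.)

```shell
# Inspect the md superblock on each candidate member partition, if any.
mdadm --examine /dev/sda1
mdadm --examine /dev/sdb1

# Let mdadm scan all partitions and try to assemble known arrays,
# reporting why each candidate is accepted or rejected.
mdadm --assemble --scan --verbose

# Or assemble one array explicitly from its two members.
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1

# Check what the kernel now sees.
cat /proc/mdstat
```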
Fdisk reports the partition types as Linux raid autodetect,
Linux, EFI, and Linux swap / Solaris. The EFI partitions are marked
bootable and contain the following:
total 3472
-rwxr-xr-x 1 root root 108 May 28 2022 BOOTX64.CSV
-rwxr-xr-x 1 root root 84648 May 28 2022 fbx64.efi
-rwxr-xr-x 1 root root 152 May 28 2022 grub.cfg
-rwxr-xr-x 1 root root 1672576 May 28 2022 grubx64.efi
-rwxr-xr-x 1 root root 845480 May 28 2022 mmx64.efi
-rwxr-xr-x 1 root root 934240 May 28 2022 shimx64.efi
Any suggestions? I should say that the running system
(Bullseye) is not the same version as the failed one (Buster). Of
course the failed system does need to be upgraded, but there are
specific reasons why this is quite undesirable at this point.
There really is not enough information: what is the kernel version?
And on the active system, which mdadm commands are you running, and
what is the result? (And please show us the output of mdadm -E
/dev/[partition].)
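(A minimal way to collect the requested information; /dev/sdX1 and /dev/sdX2 below are placeholders for the actual partition device names:)

```shell
# Kernel and mdadm versions on the rescue system.
uname -r
mdadm --version

# Superblock details for each member partition; this shows whether an
# md superblock is present and, if so, its version and array UUID.
mdadm -E /dev/sdX1
mdadm -E /dev/sdX2
```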
@kuai
Point them at the wiki ...
https://raid.wiki.kernel.org/index.php/Linux_Raid#When_Things_Go_Wrogn
Cheers,
Wol