Re: RAID5 Recovery - superblock lost after reboot

On 15/08/2024 at 13:51, David Alexander Geister wrote:

>> There have been reports lately which seem to indicate that something, maybe the BIOS/UEFI firmware, "restores" the primary partition table from an existing backup partition table at boot.

> I did not enter or actively interact with the UEFI of my mainboard in any way.

The GPT "repair" seems to be automatic, just like mounting an ext4 filesystem automatically replays the journal if needed.
"The road to hell is paved with good intentions".

> Is there a list of the reports where I can check if my mainboard is affected? Is there something I could do/contribute?

I have only read a few reports on this list and do not remember the mainboard model being mentioned.

>> Otherwise, I suggest that you erase all GPT metadata on each disk with wipefs -a before re-creating the RAID array with --assume-clean. When re-creating the array, make sure that sda, sdb and sdc are in the same physical order as when you originally created the RAID array (check with the serial numbers).

> There is indeed data on the drives that I would like to access. As I did not change the physical order of the drives, I'm going to give it a go.

I was not clear enough. /dev/sd* device names are not guaranteed to be stable and may be assigned to different physical disks at each boot.
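
For example, to record which physical disk currently holds which name, you can use standard tools (lsblk from util-linux, and the udev-managed by-id links):

    lsblk -d -o NAME,SERIAL,MODEL
    ls -l /dev/disk/by-id/

The /dev/disk/by-id/ symlinks encode the model and serial number, so they keep pointing at the same physical disk even if the sda/sdb/sdc enumeration changes between boots.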

> Are there any recommendations from you,

Once you have re-created the array, I recommend that you first check it with e2fsck -fn, then mount it read-only and inspect the contents, in case you re-created it with the wrong disk order.
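
As a sketch only; a 3-disk RAID5 with default parameters is just an assumption here, and the mdadm options (level, number of devices, chunk size, metadata version) as well as the device order must match the original creation exactly:

    # erase the stale GPT metadata on every member disk
    wipefs -a /dev/sda /dev/sdb /dev/sdc

    # re-create the array without resyncing; the device order matters
    mdadm --create /dev/md0 --level=5 --raid-devices=3 --assume-clean \
        /dev/sda /dev/sdb /dev/sdc

    # check the filesystem without writing to it (-n answers "no" to all questions)
    e2fsck -fn /dev/md0

    # if e2fsck finds no serious errors, mount read-only and inspect
    mount -o ro /dev/md0 /mnt

If e2fsck reports massive corruption instead, stop the array and try the same mdadm --create with a different device order before writing anything to the disks.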



