Re: Troubleshooting MD RAID assembly not working after upgrade to F39


 



On 12/27/23 22:48, pgnd wrote:
> without seeing all the details, unfound superblocks aren't good.

But isn't the information `mdadm --examine` prints coming from the superblock stored on the device? The magic number that this command reports matches the expected value (a92b4efc), and I can read that information from each and every one of the component devices.
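For reference, this is roughly the check I run on each member (the device name below is just an example, not my actual layout):

	# dump the md superblock of one component device
	mdadm --examine /dev/sdb1

	# or just the fields of interest
	mdadm --examine /dev/sdb1 | grep -E 'Magic|Version|Array UUID|State'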

> if you were assembling with the wrong metadata format, that'd be unsurprising.
> but, it sounds like these _were_ working for you at some point.

Yes, they were working right up until I upgraded from f37 to f39, and that upgrade was mostly (actually only) hammering the SSD, which holds the root volume.


> if you're hoping to explore/identify/repair any damage, there's this for a good start,

> 	https://raid.wiki.kernel.org/index.php/Recovering_a_damaged_RAID

> this too,

> 	https://wiki.archlinux.org/title/RAID

> i'd recommend subscribing and asking at,

> 	https://raid.wiki.kernel.org/index.php/Asking_for_help

> before guessing.  a much better place to ask than here.

> even with good help from the list, i've had mixed luck with superblock recovery:
>   -- best, when able to find multiple clean copies of backup superblocks on the array/drives.
>   -- worst, lost it all

Thanks. I will ask for help on the mailing list. I'm actually fairly hopeful, since I'm able to manually assemble the arrays and access the data. But first I will do some reading up.
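For the record, manual assembly along these lines works here, and I'll collect the usual diagnostics before posting (device and array names below are placeholders, not my real layout):

	# stop any half-assembled array first (md127 is the typical auto-assembly name)
	mdadm --stop /dev/md127

	# assemble explicitly from the member devices
	mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1

	# check the result and mount read-only while investigating
	cat /proc/mdstat
	mdadm --detail /dev/md0
	mount -o ro /dev/md0 /mnt

	# capture output worth attaching when asking for help
	mdadm --detail /dev/md0 > md0-detail.txt
	for d in /dev/sd[abc]1; do mdadm --examine "$d"; done > md0-examine.txt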


> given the change in behavior, and the older metadata, i'd consider starting fresh:
> wiping the array & disks, scanning/mapping bad blocks, reformatting
> & repartitioning, creating new arrays with the latest metadata, and
> restoring from backup.

> if you've still got good hardware, should be good -- better than the
> uncertainty.
> yup, it'll take a while. but, so might the hunt & repair process.

Yeah. It's currently still at the bottom of my list; if all else fails, I guess I'll have no other option. At the moment I'm a bit torn between wanting to understand what's going on and wanting to be able to use my system again.
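If I do go that route, I imagine it would look roughly like this (a sketch only: device names, RAID level and device count are placeholders, and the destructive steps assume the data is safely backed up elsewhere first):

	# zap old RAID/filesystem signatures on each member (destructive!)
	wipefs -a /dev/sdb1

	# read-only surface scan to spot bad blocks before reuse
	badblocks -sv /dev/sdb

	# recreate the array with current 1.2 metadata (level/device count are examples)
	mdadm --create /dev/md0 --level=5 --raid-devices=3 --metadata=1.2 \
		/dev/sda1 /dev/sdb1 /dev/sdc1

	# new filesystem, record the array, then restore from backup
	mkfs.ext4 /dev/md0
	mdadm --detail --scan >> /etc/mdadm.conf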

Going through all the information, I just discovered that the oldest of the arrays is close to ten years old and one of the disks has been part of the setup right from the start (Power_On_Hours: 80952). I've replaced disks whenever the need arose, never ran into trouble until now...
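That figure comes straight from SMART, read with something like this (device name is an example):

	# SMART attributes for one member disk
	smartctl -A /dev/sdb | grep -i Power_On_Hours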

Thanks for sparring! Whatever the outcome, I'll report here and on discussion.



