Re: Missing Superblocks

Apologies, I feel like I am being extraordinarily thick!
I am trying to --assemble --force. I have tried listing the devices in the correct order after /dev/md0, to which I get:

       sudo mdadm --assemble --force /dev/md0 /dev/sdc /dev/sdd
       /dev/sde /dev/sdf /dev/sdg /dev/sdh
       mdadm: /dev/sdd, is an invalid name for an md device - ignored.
       mdadm: /dev/sde, is an invalid name for an md device - ignored.
       mdadm: /dev/sdf, is an invalid name for an md device - ignored.
       mdadm: /dev/sdg, is an invalid name for an md device - ignored.
       mdadm: /dev/sdh is an invalid name for an md device - ignored.
       mdadm: No super block found on /dev/sdd (Expected magic
       a92b4efc, got 00000000)
       mdadm: no RAID superblock on /dev/sdd
       mdadm: /dev/sdd has no superblock - assembly aborted

I have tried listing the devices in mdadm.conf, again with no luck:

       ARRAY /dev/md0 metadata=1.2 level=6 name=nas:0 devices=/dev/sdc,
       /dev/sdd, /dev/sde, /dev/sdf, /dev/sdg, /dev/sdh
       (NB: I tried it both with spaces after the commas, as above, and
       without.)

To which I get:

       sudo mdadm --assemble --force /dev/md0
       mdadm: /dev/md0 assembled from 1 drive - not enough to start the
       array.

What am I missing?

On 27/10/2021 17:33, Wol wrote:
On 26/10/2021 10:45, John Atkins wrote:
Thanks for the suggestions.
No partition ever on these disks.

BAD IDEA ... it *should* be okay, but there are too many rogue programs/utilities out there that think stomping all over a partition-free disk is acceptable behaviour ...

It's bad enough when a GPT or MBR gets trashed, which sadly is not unusual in your scenario, but without partitions you're inviting disaster... :-(

I will try the dd method, but as there was never a partition on the drive I don't think it will return results.

Why not? It may return traces of the array ...
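
A minimal sketch of what I mean (device name just an example): with v1.2 metadata the superblock sits 4KiB into the member device, and the magic a92b4efc is stored little-endian, so in a dump it shows up as fc 4e 2b a9:

       sudo dd if=/dev/sdd bs=4096 skip=1 count=1 2>/dev/null | hexdump -C | head -n 4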

The busy drive is not part of an active md array, nor is it mounted, so I'm still a bit bemused by that.

When mdadm attempts to start an array (which it does by default at boot), if the attempt fails it usually leaves a broken inactive array in an unusable state. You need to "kill" this mess before you can do anything with it!
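
Something like this, assuming the half-assembled remains show up as an inactive /dev/md0 in /proc/mdstat:

       cat /proc/mdstat
       sudo mdadm --stop /dev/md0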

I know the order; after my first few muck-ups I numbered the drives to make sure that, if I have to move them, it will still work. If I use --assume-clean and it does not work, I assume I can just try another order. I do have a backup, but 14TB will take time to replicate.

If you haven't yet tried to force the array, and possibly corrupted where the headers should be, you could try a plain force-assemble, which *might* work (very long shot ...)
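
i.e. something along these lines, with --verbose so mdadm reports what it makes of each member:

       sudo mdadm --assemble --force --verbose /dev/md0 /dev/sd[c-h]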

Otherwise, read the wiki and try with overlays until something "strikes gold". Then I'd be inclined to fail each drive in turn, re-adding it as a partition, to try and avoid a similar screw-up in future. That, or disconnect all the raid drives before an upgrade, and re-connect them afterwards - though that's been known to cause grief, too :-(
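
The overlay recipe on the wiki boils down to roughly this per drive (the file size and names here are only placeholders), so experiments hit a copy-on-write file instead of the real disk:

       sudo truncate -s 50G /tmp/overlay-sdc
       loop=$(sudo losetup -f --show /tmp/overlay-sdc)
       size=$(sudo blockdev --getsz /dev/sdc)
       echo "0 $size snapshot /dev/sdc $loop P 8" | sudo dmsetup create sdc-overlay

and then you assemble/create against /dev/mapper/sdc-overlay and friends rather than /dev/sdc.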

(Of course, if you've used all available space, partitioning will shrink the raid and cause more grief elsewhere ...)

Hopefully, you've never resized the array, and the mdadm defaults haven't changed, so you'll strike gold first attempt. Otherwise it could be a long hard slog with all the possible options.
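
If it does come to re-creating on top of the overlays (never the raw disks), the guesswork is in exactly these parameters (chunk size, layout, device order), so a sketch, assuming today's defaults, would be:

       sudo mdadm --create /dev/md0 --assume-clean --level=6 --raid-devices=6 \
               --metadata=1.2 --chunk=512 \
               /dev/mapper/sdc-overlay /dev/mapper/sdd-overlay /dev/mapper/sde-overlay \
               /dev/mapper/sdf-overlay /dev/mapper/sdg-overlay /dev/mapper/sdh-overlay

then check whether the filesystem on it is readable (fsck -n, or a read-only mount) before believing any particular guess.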

https://raid.wiki.kernel.org/index.php/Linux_Raid

Cheers,
Wol