Re: Requesting help recovering my array

(Sorry if this came through twice without the mdadm.conf contents; I accidentally hit send while trying to paste them in.)

Thanks.  All drives in the system are being detected (/dev/sdd is my system drive; the rest all belong to the array):

rj@jackie:~$ ls -l /dev/sd*
brw-rw---- 1 root disk 8,  0 Jan 21 19:08 /dev/sda
brw-rw---- 1 root disk 8,  1 Jan 21 19:08 /dev/sda1
brw-rw---- 1 root disk 8, 16 Jan 21 19:08 /dev/sdb
brw-rw---- 1 root disk 8, 17 Jan 21 19:08 /dev/sdb1
brw-rw---- 1 root disk 8, 32 Jan 21 19:08 /dev/sdc
brw-rw---- 1 root disk 8, 48 Jan 21 19:08 /dev/sdd
brw-rw---- 1 root disk 8, 49 Jan 21 19:08 /dev/sdd1
brw-rw---- 1 root disk 8, 50 Jan 21 19:08 /dev/sdd2
brw-rw---- 1 root disk 8, 53 Jan 21 19:08 /dev/sdd5
brw-rw---- 1 root disk 8, 54 Jan 21 19:08 /dev/sdd6
brw-rw---- 1 root disk 8, 55 Jan 21 19:08 /dev/sdd7
brw-rw---- 1 root disk 8, 56 Jan 21 19:08 /dev/sdd8
brw-rw---- 1 root disk 8, 64 Jan 21 19:08 /dev/sde
brw-rw---- 1 root disk 8, 65 Jan 21 19:08 /dev/sde1
brw-rw---- 1 root disk 8, 80 Jan 21 19:08 /dev/sdf
brw-rw---- 1 root disk 8, 81 Jan 21 19:08 /dev/sdf1
brw-rw---- 1 root disk 8, 96 Jan 21 19:08 /dev/sdg
brw-rw---- 1 root disk 8, 97 Jan 21 19:08 /dev/sdg1


The devices are not listed in mdadm.conf, nor were they ever.  Here's everything from that file (apart from the initial header comments about updating the initramfs):

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR rj

# definitions of existing MD arrays
#ARRAY /dev/md/0  metadata=1.2 UUID=74a11272:9b233a5b:2506f763:27693ccc name=jackie:0

# This configuration was auto-generated on Wed, 27 Nov 2019 15:53:23 -0500 by mkconf
ARRAY /dev/md0 metadata=1.2 spares=1 name=jackie:0 UUID=74a11272:9b233a5b:2506f763:27693ccc


I assume that last line was added when I added the spare drive.  Should I add the drives to mdadm.conf, then run the assemble command you suggested?

It seems mdadm was assembling the array automatically at boot, but that stopped working with the new motherboard for some reason.
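
In case it helps clarify what I mean, here's the rough plan (untested, obviously - and since the comments say scanning all of /proc/partitions is already the built-in default, the DEVICE line may be redundant; the UUID is copied from the ARRAY line above):

# make the default scan explicit in /etc/mdadm/mdadm.conf
DEVICE partitions containers

# try assembling by array UUID rather than guessing at device names
mdadm --assemble --scan --uuid=74a11272:9b233a5b:2506f763:27693ccc

# on Debian, rebuild the initramfs so boot-time assembly sees the same conf
update-initramfs -u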

Thanks.
--RJ

On Tuesday, January 23, 2024 at 11:06:30 AM EST, David Niklas <simd@xxxxxxxxxxx> wrote: 

Hello,

As someone who's a bit more experienced with RAID array failures, I'd like
to suggest the following:

# Check that all drives are being detected.
ls /dev/sd*

# Verify what exactly is being scanned.
grep DEVICE /etc/mdadm/mdadm.conf

Assuming both of these give satisfactory results*, your next step would
be to try assembling them out of order and see what happens. For example:

-> mdadm --assemble /dev/md0 /dev/sda /dev/sdb
Mdadm: Error Not part of array /dev/sdb
-> mdadm --assemble /dev/md0 /dev/sda /dev/sdc
Mdadm: Error too few drives to start array /dev/md0

Please note that I made up mdadm's exact wording there, but the real
messages will tell you what's going on in the same way.

* For the ls command you should see all the drives you have.  For the
grep command you should get a listing like "/dev/sda /dev/sdb"...
Obviously, all the drives that might have a RAID array on them should be
listed.
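
If you want to trim down the trial and error first, an untested loop like
this prints each candidate's role and event count straight from the
superblocks (adjust the device list for your system):

for d in /dev/sd[abcefg] /dev/sd[abcefg]1; do
    echo "== $d"
    mdadm --examine "$d" 2>/dev/null | grep -E 'Array UUID|Device Role|Events'
done

Members whose Events counts agree will generally assemble together; a
device reporting "Device Role : spare" can't start the array on its own.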


Sincerely,
David

On Tue, 23 Jan 2024 01:52:31 +0000 (UTC)
RJ Marquette <rjm1@xxxxxxxxx> wrote:
> I meant to add that my /proc/mdstat looked much more like yours on the
> old system.  But nothing is showing on this one. 
> 
> I may try swapping back to the old motherboard.  Another possibility
> that might be a factor - UEFI vs Legacy BIOS.
> 
> Thanks.
> --RJ
> 
> 
> On Monday, January 22, 2024 at 07:45:29 PM EST, RJ Marquette
> <rjm1@xxxxxxxxx> wrote: 
> 
> 
> 
> 
> 
> That's all.  
> 
> If I run:
> 
> root@jackie:~# mdadm --assemble --scan
> mdadm: /dev/md0 assembled from 0 drives and 1 spare - not enough to
> start the array.
> 
> root@jackie:~# cat /proc/mdstat  
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
> unused devices: <none>
> 
> root@jackie:~# ls -l /dev/md*
> ls: cannot access '/dev/md*': No such file or directory
> 
> It seems to be recognizing the spare drive, but not the 5 that actually
> have data, for some reason.
> 
> Thanks.
> --RJ
> 
> 
> 
> 
> 
> 
> 
> 
> On Monday, January 22, 2024 at 06:49:50 PM EST, Reindl Harald
> <h.reindl@xxxxxxxxxxxxx> wrote: 
> 
> 
> 
> 
> 
> 
> 
> On 22.01.24 at 23:13, RJ Marquette wrote:
> > Sorry!
> > 
> > rj@jackie:~$ cat /proc/mdstat
> > Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
> > unused devices: <none>
> 
> that's all? and where is that ton of raid types coming from when not a
> single array is shown?
> 
> [root@srv-rhsoft:~]$ cat /proc/mdstat
> Personalities : [raid1]
> md0 : active raid1 sdb2[2] sda2[0]
>       30740480 blocks super 1.2 [2/2] [UU]
>       bitmap: 0/1 pages [0KB], 65536KB chunk
> 
> md1 : active raid1 sda3[0] sdb3[2]
>       3875717120 blocks super 1.2 [2/2] [UU]
>       bitmap: 5/29 pages [20KB], 65536KB chunk
> 
> 
> unused devices: <none>
> 
> > On Monday, January 22, 2024 at 04:55:50 PM EST, Reindl Harald
> > <h.reindl@xxxxxxxxxxxxx> wrote:
> > 
> > a ton of "mdadm --examine" outputs but i can't see a
> > "cat /proc/mdstat"
> > 
> > /dev/sdX is completely irrelevant when it comes to raid - you can even
> > connect a random disk via USB adapter without a change from the view
> > of the array
> > 
> > On 22.01.24 at 20:52, RJ Marquette wrote:
> >> Hi, all.  I have a Raid5 array with 5 disks in use and a 6th in
> >> reserve that I built using 3TB drives in 2019.  It has been running
> >> fine since, not even a single drive failure.  The system also has a
> >> 7th hard drive for OS, home directory, etc.  The motherboard had
> >> four SATA ports, so I added an adapter card that has 4 more ports,
> >> with three drives connected to it.  The server runs Debian that I
> >> keep relatively current.
> >>
> >> Yesterday, I swapped a newer motherboard into the computer (upgraded
> >> my desktop and moved the guts to my server).  I never disconnected
> >> the cables from the adapter card (whew, I think), so I know which
> >> four drives were connected to the motherboard.  Unfortunately I
> >> didn't really note how they were hooked to the motherboard (SATA1-4
> >> ports).  Didn't even think it would be an issue.  I'm reasonably
> >> confident the array drives on the motherboard were sda-sdc, but I'm
> >> not certain.
> >>
> >> Now I can't get the array to come up.  I'm reasonably certain I
> >> haven't done anything to write to the drives - but mdadm will not
> >> assemble the drives (I have not tried to force it).  I'm not
> >> entirely sure what's up and would really appreciate any help.
> >>
> >> I've tried various incantations of mdadm --assemble --scan, with no
> >> luck.  I've seen the posts about certain motherboards that can mess
> >> up the drives, and I'm hoping I'm not in that boat.  The "new"
> >> motherboard is an Asus Z96-K/CSM.
> >>
> >> I assume using --force is in my future...I see various pages that
> >> say use --force then check it, but will that damage it if I'm
> >> wrong?  If not, how will I know it's correct?  Is the order of
> >> drives important with --force?  I see conflicting info on that.
> >>
> >> I'm no expert but it looks like each drive has the mdadm
> >> superblock...so I'm not sure why it won't assemble.  Please help!
> >>
> >> Thanks in advance.
> >> --RJ
> >>
> >> root@jackie:~# uname -a
> >> Linux jackie 5.10.0-27-amd64 #1 SMP Debian 5.10.205-2 (2023-12-31)
> >> x86_64 GNU/Linux
> >>
> >> root@jackie:~# mdadm --version
> >> mdadm - v4.1 - 2018-10-01
> >>
> >> root@jackie:~# mdadm --examine /dev/sda
> >> /dev/sda:
> >>    MBR Magic : aa55
> >> Partition[0] :   4294967295 sectors at            1 (type ee)
> >>
> >> root@jackie:~# mdadm --examine /dev/sda1
> >> mdadm: No md superblock detected on /dev/sda1.
> >>
> >> root@jackie:~# mdadm --examine /dev/sdb
> >> /dev/sdb:
> >>    MBR Magic : aa55
> >> Partition[0] :   4294967295 sectors at            1 (type ee)
> >>
> >> root@jackie:~# mdadm --examine /dev/sdb1
> >> mdadm: No md superblock detected on /dev/sdb1.
> >>
> >> root@jackie:~# mdadm --examine /dev/sdc
> >> /dev/sdc:
> >>           Magic : a92b4efc
> >>         Version : 1.2
> >>     Feature Map : 0x0
> >>      Array UUID : 74a11272:9b233a5b:2506f763:27693ccc
> >>            Name : jackie:0  (local to host jackie)
> >>   Creation Time : Sat Dec  8 19:32:07 2018
> >>      Raid Level : raid5
> >>    Raid Devices : 5
> >>
> >>  Avail Dev Size : 5860271024 (2794.39 GiB 3000.46 GB)
> >>      Array Size : 11720540160 (11177.58 GiB 12001.83 GB)
> >>   Used Dev Size : 5860270080 (2794.39 GiB 3000.46 GB)
> >>     Data Offset : 262144 sectors
> >>    Super Offset : 8 sectors
> >>    Unused Space : before=261864 sectors, after=944 sectors
> >>           State : clean
> >>     Device UUID : a2b677bb:4004d8fb:a298a923:bab4df8a
> >>
> >>     Update Time : Fri Jan 19 15:25:37 2024
> >>   Bad Block Log : 512 entries available at offset 264 sectors
> >>        Checksum : 2487f053 - correct
> >>          Events : 5958
> >>
> >>          Layout : left-symmetric
> >>      Chunk Size : 512K
> >>
> >>     Device Role : spare
> >>     Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
> >>
> >> root@jackie:~# mdadm --examine /dev/sdc1
> >> mdadm: cannot open /dev/sdc1: No such file or directory
> >>
> >> root@jackie:~# mdadm --examine /dev/sde
> >> /dev/sde:
> >>    MBR Magic : aa55
> >> Partition[0] :   4294967295 sectors at            1 (type ee)
> >>
> >> root@jackie:~# mdadm --examine /dev/sde1
> >> mdadm: No md superblock detected on /dev/sde1.
> >>
> >> root@jackie:~# mdadm --examine /dev/sdf
> >> /dev/sdf:
> >>    MBR Magic : aa55
> >> Partition[0] :   4294967295 sectors at            1 (type ee)
> >>
> >> root@jackie:~# mdadm --examine /dev/sdf1
> >> mdadm: No md superblock detected on /dev/sdf1.
> >>
> >> root@jackie:~# mdadm --examine /dev/sdg
> >> /dev/sdg:
> >>    MBR Magic : aa55
> >> Partition[0] :   4294967295 sectors at            1 (type ee)
> >>
> >> root@jackie:~# mdadm --examine /dev/sdg1
> >> mdadm: No md superblock detected on /dev/sdg1.
> >>
> >> root@jackie:~# lsdrv
> >> PCI [ahci] 00:1f.2 SATA controller: Intel Corporation 9 Series Chipset Family SATA Controller [AHCI Mode]
> >> ├scsi 0:0:0:0 ATA ST3000VN007-2E41 {Z7317D1A}
> >> │└sda 2.73t [8:0] Partitioned (gpt)
> >> │ └sda1 2.73t [8:1] Empty/Unknown
> >> ├scsi 1:0:0:0 ATA Hitachi HUS72403 {P8GSA1WR}
> >> │└sdb 2.73t [8:16] Partitioned (gpt)
> >> │ └sdb1 2.73t [8:17] Empty/Unknown
> >> ├scsi 2:0:0:0 ATA Hitachi HUA72303 {MK0371YVGSZ9RA}
> >> │└sdc 2.73t [8:32] MD raid5 (5) inactive 'jackie:0' {74a11272-9b23-3a5b-2506-f76327693ccc}
> >> └scsi 3:0:0:0 ATA ST32000542AS {5XW110LY}
> >>  └sdd 1.82t [8:48] Partitioned (dos)
> >>   ├sdd1 23.28g [8:49] Partitioned (dos) {d94cc2c8-037a-49c5-8a1e-01bb47d78624}
> >>   │└Mounted as /dev/sdd1 @ /
> >>   ├sdd2 1.00k [8:50] Partitioned (dos)
> >>   ├sdd5 9.31g [8:53] ext4 {6eb3b4d0-8c7f-4b06-a431-4c292d5bda86}
> >>   │└Mounted as /dev/sdd5 @ /var
> >>   ├sdd6 3.96g [8:54] swap {901cd56d-ef11-4866-824b-d9ec4ae6fe6e}
> >>   ├sdd7 1.86g [8:55] ext4 {69ba0889-322b-4fc8-b9d3-a2d133c97e5e}
> >>   │└Mounted as /dev/sdd7 @ /tmp
> >>   └sdd8 1.78t [8:56] ext4 {4ed408d4-6b22-46e0-baed-2e0589ff41fb}
> >>    └Mounted as /dev/sdd8 @ /home
> >> PCI [ahci] 06:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9215 PCIe 2.0 x1 4-port SATA 6 Gb/s Controller (rev 11)
> >> ├scsi 6:0:0:0 ATA Hitachi HUS72403 {P8G84LEP}
> >> │└sde 2.73t [8:64] Partitioned (gpt)
> >> │ └sde1 2.73t [8:65] Empty/Unknown
> >> ├scsi 7:0:0:0 ATA ST3000VN007-2E41 {Z7317D46}
> >> │└sdf 2.73t [8:80] Partitioned (gpt)
> >> │ └sdf1 2.73t [8:81] Empty/Unknown
> >> └scsi 8:0:0:0 ATA ST3000VN007-2E41 {Z7317JTX}
> >>  └sdg 2.73t [8:96] Partitioned (gpt)
> >>   └sdg1 2.73t [8:97] Empty/Unknown
> >>
> >> root@jackie:~# cat /etc/mdadm/mdadm.conf
> >> # This configuration was auto-generated on Wed, 27 Nov 2019 15:53:23 -0500 by mkconf
> >> ARRAY /dev/md0 metadata=1.2 spares=1 name=jackie:0 UUID=74a11272:9b233a5b:2506f763:27693ccc
> 




