RAID 10: only spares showing

I came across this mailing list while reading this wiki page: https://raid.wiki.kernel.org/index.php/RAID_Recover

I had a server with 3 RAID arrays:
1. system: 2 disks, RAID 1 (_U)
2. data: 4 disks, RAID 10 (UUUU)
3. misc: 4 disks, RAID 10 (UUU_)

All of a sudden, the data array had 1 failed drive (_UUU),
and the system array showed (_U).

Whenever I tried to examine the arrays in detail, the whole system became unresponsive; sysstat showed reads on one of the disks at maximum speed.
As the system was unresponsive I had to reboot, at which point it dropped into the BusyBox/initramfs shell.
I asked the remote DC to add a new disk and install Ubuntu on it. They did, but at boot the machine still dropped into BusyBox, even though the first boot disk was the newly installed system.
They had to unplug all the other disks, boot the system, and then plug them back in.
Naturally, the device naming of these disks changed because of this process...
Now, after a lot of troubleshooting, power cycling, etc.,
I managed to salvage the "data" array and plugged it into another server.

I'm now trying to fix my "misc" array; running mdadm --assemble --scan doesn't work.

Here's all the information I could think of gathering; if you need any other info to help me troubleshoot, I'm all ears:

#blkid output:
/dev/loop0: TYPE="squashfs" 
/dev/sr0: LABEL="Ubuntu 14.04 LTS amd64" TYPE="iso9660" 
/dev/sdi2: UUID="345c489c-63e3-4d58-c1c1-a2ac8ff8a6b6" UUID_SUB="0a694075-0174-a774-d294-4f8d20632dea" LABEL="ams:0" TYPE="linux_raid_member" 
/dev/sdi3: UUID="cbbd6bfa-0b94-1536-cfa5-9cdb802f0733" UUID_SUB="4677f928-7c2a-e260-cbbd-f6a92ce20b66" LABEL="ams:1" TYPE="linux_raid_member" 
/dev/sdj2: UUID="345c489c-63e3-4d58-c1c1-a2ac8ff8a6b6" UUID_SUB="6322d0bd-e465-6940-12ba-39760cf7707f" LABEL="ams:0" TYPE="linux_raid_member" 
/dev/sdj3: UUID="cbbd6bfa-0b94-1536-cfa5-9cdb802f0733" UUID_SUB="3d03d2f8-fe6c-1d6b-0817-3915d46a7b1c" LABEL="ams:1" TYPE="linux_raid_member" 
/dev/sdl2: UUID="ddd47847-9399-4265-8c63-f9cd74d201d3" TYPE="ext4" 
/dev/sdl3: UUID="6b6bf336-9e71-458a-9b16-6009b8e744d8" TYPE="swap" 
/dev/sdl4: UUID="2a171660-f5cf-40ab-8044-da3cb57bf1e9" TYPE="ext4" 
/dev/sdg1: LABEL="angh_HD02" UUID="189A640A9A63E32C" TYPE="ntfs" 
/dev/sdh1: LABEL="angh_HD03" UUID="B09CB4979CB45A14" TYPE="ntfs" 
/dev/sdf1: LABEL="angh_HD01" UUID="1E06443406440F69" TYPE="ntfs" 
/dev/sdc: UUID="6d9f8ebe-164e-c09f-c845-e08f8499e99c" UUID_SUB="58f5b920-76df-27f1-d68a-ae27f22425d2" LABEL="ams:2" TYPE="linux_raid_member" 
/dev/sdi4: UUID="6d9f8ebe-164e-c09f-c845-e08f8499e99c" UUID_SUB="081aca1e-684d-aa5a-29e9-5cc7a15fc189" LABEL="ams:2" TYPE="linux_raid_member" 
/dev/sdj4: UUID="6d9f8ebe-164e-c09f-c845-e08f8499e99c" UUID_SUB="109e6499-26e7-a967-dfe6-e721e6de1bc3" LABEL="ams:2" TYPE="linux_raid_member" 
/dev/sdk: UUID="6d9f8ebe-164e-c09f-c845-e08f8499e99c" UUID_SUB="bcf54b67-34b9-59f9-3e9a-6e7596fd6143" LABEL="ams:2" TYPE="linux_raid_member" 
/dev/md0: UUID="79a513a8-d101-4097-a72a-fc82ec00ef6e" TYPE="ext4" 



#output of mdadm --assemble --scan
root@ubuntu:/home/ubuntu# mdadm --assemble --scan
mdadm: Devices UUID-6d9f8ebe:164ec09f:c845e08f:8499e99c and UUID-6d9f8ebe:164ec09f:c845e08f:8499e99c have the same name: /dev/md/2
mdadm: Duplicate MD device names in conf file were found.

#after commenting out the first mdadm.conf record for md2
root@ubuntu:~# mdadm --assemble --scan
mdadm: superblock on /dev/sdc doesn't match others - assembly aborted
mdadm: /dev/md/0 has been started with 1 drive (out of 2).
mdadm: /dev/md/1 assembled from 0 drives and 2 spares - not enough to start the array.
mdadm: superblock on /dev/sdc doesn't match others - assembly aborted
mdadm: /dev/md/1 is already in use.
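
Since the error singles out /dev/sdc, my plan (unless someone tells me otherwise) is to first compare the superblocks of the ams:2 members before changing anything, using the device names from the blkid output above; something like:

mdadm --examine /dev/sdc /dev/sdi4 /dev/sdj4 /dev/sdk | grep -E 'Events|Update Time|Array State|Device Role'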

#mdadm.conf (livecd)
# definitions of existing MD arrays
ARRAY /dev/md/2 metadata=1.2 UUID=6d9f8ebe:164ec09f:c845e08f:8499e99c name=ams:2
   spares=1
ARRAY /dev/md/0 metadata=1.2 UUID=345c489c:63e34d58:c1c1a2ac:8ff8a6b6 name=ams:0
ARRAY /dev/md/1 metadata=1.2 UUID=cbbd6bfa:0b941536:cfa59cdb:802f0733 name=ams:1
   spares=2
ARRAY /dev/md/2 metadata=1.2 UUID=6d9f8ebe:164ec09f:c845e08f:8499e99c name=ams:2
   spares=3
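
If the "Duplicate MD device names in conf file" message just means the two md2 records clash, I assume the cleaned-up conf should carry only one ARRAY line per UUID, roughly like this (my own guess, with the spares= hints dropped since I believe they are only informational):

ARRAY /dev/md/0 metadata=1.2 UUID=345c489c:63e34d58:c1c1a2ac:8ff8a6b6 name=ams:0
ARRAY /dev/md/1 metadata=1.2 UUID=cbbd6bfa:0b941536:cfa59cdb:802f0733 name=ams:1
ARRAY /dev/md/2 metadata=1.2 UUID=6d9f8ebe:164ec09f:c845e08f:8499e99c name=ams:2

Is that correct, or does commenting out records by hand risk hiding a real problem?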


#/proc/mdstat (live cd)
Personalities : [raid1] 
md1 : inactive sdj3[1](S) sdi3[0](S)
      292966703 blocks super 1.2
       
md0 : active raid1 sdj2[1]
      292957 blocks super 1.2 [2/1] [_U]
      
unused devices: <none>
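
From the RAID wiki page above, my understanding is that once the event counts of the members look close enough, the usual step for an array that only shows spares is to stop it and force-assemble it from its members, e.g. for md1 (sdi3/sdj3 per the blkid and mdstat output above):

mdadm --stop /dev/md1
mdadm --assemble --force /dev/md1 /dev/sdi3 /dev/sdj3

Is that safe to try here, or could it make recovery harder?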

#output of mdadm -Esvvv: (live cd)
http://pastebin.com/gua8sucb
#had to use pastebin as the output is a bit big


#output of bootinfoscript, which is a handy tool I found online:
http://pastebin.com/7qdZURmv
#had to use pastebin as the output is a bit big


