RAID6 root crashed


 



Hello,
 
I have a 6x3TB software RAID6 under Debian Wheezy. md0 is for the system, md1 for swap, and md2 for files. While copying some files from an eighth disk to md2 I got a kernel panic and the system halted, so I had to power it off the hard way. After rebooting there is a problem with md0 and the system won't boot.
 
Message:

md0: cannot start dirty degraded array
md0: failed to run raid set
failed to run_array md0: input/output error
...
md0 is already in use
Gave up waiting for root device. Common problems: ...
ALERT! /dev/disk/by-uuid/eb8r..... does not exist.
Dropping to a shell!
can't access tty; job control turned off
(initramfs)

/proc/mdstat:

md0 : inactive sda2[0] sdf2[5] sde2[4] sdd2[3] sdc2[2] sdb2[1]

So partition sdg2 is missing. sdg was the last disk I added to the array.

mdadm --detail shows:

   Raid Devices : 7
  Total Devices : 6
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
 Spare Devices : 0

Number
0 ... active sync sda2
...
5 ... active sync sdf2
6     removed

This also tells me that sdg2 is missing.
I did an --examine on sdg2 and it shows no problems; the superblock is persistent. The UUIDs for all three arrays are correct in mdadm.conf. I am unsure what to do next. I think it must be: mdadm --assemble --force /dev/md0 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2 /dev/sdf2 missing.
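For what it's worth, a sketch of the sequence I would expect to try from the (initramfs) shell, based on my reading of the mdadm man page; device names match the ones above, but please treat this as unverified on my setup. Note that as far as I understand, the `missing` keyword is only valid with --create, not --assemble, so I have left it off here:

```shell
# Check event counts on all members (including sdg2) to see how far
# out of date the missing member is:
mdadm --examine /dev/sd[a-g]2 | grep -E '/dev/sd|Events'

# Stop the half-assembled array first, otherwise mdadm reports
# "md0 is already in use":
mdadm --stop /dev/md0

# Force-assemble from the six members that are present; --force lets
# mdadm start the dirty, degraded array:
mdadm --assemble --force /dev/md0 /dev/sda2 /dev/sdb2 /dev/sdc2 \
    /dev/sdd2 /dev/sde2 /dev/sdf2

# If the array comes up, re-add sdg2 so it resyncs:
mdadm --re-add /dev/md0 /dev/sdg2

# Watch the resync:
cat /proc/mdstat
```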
 
Sorry, but I can't connect via SSH and have to copy these lines from the screen by hand. What should I do next?
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
