how to start a degraded array that shows all members are spare

I have a thread with the full story, but I think asking one question at a time will work better.

My array is degraded as one disk was sent for replacement.

The system failed (the reason is not important) and on restart I am in an emergency shell.
/proc/mdstat shows all members marked as spare (S).

The question is: what is the correct way to start the array when it is all spares (and degraded)?
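
In case it helps, this is how I would check each member's view of the array
before forcing anything (a quick loop over the members listed below;
mdadm --examine prints an event counter and state lines per device):

	for d in /dev/sd{b,c,d,e,f,g}1; do
		mdadm --examine $d | grep -E 'Events|State'
	done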

This is what I did
==================
mdadm --run /dev/md127
	says: cannot start dirty degraded array
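
If useful, the kernel's matching complaint shows up in dmesg; I fish it out with:

	dmesg | grep md127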

mdadm --stop /dev/md127
mdadm --assemble /dev/md127 /dev/sd{b,c,d,e,f,g}1
	says: not clean -- starting background reconstruction
	says: cannot start dirty degraded array
	suggests using --force

mdadm --assemble --force /dev/md127 /dev/sd{b,c,d,e,f,g}1
	starts the array.
In the system log I see
	md: requested-resync of RAID array md127
and mdstat shows a resync (maybe a different term?) in progress.
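
While it runs I just poll mdstat (the 60s interval is an arbitrary choice):

	watch -n 60 cat /proc/mdstat

mdadm --detail /dev/md127 also reports the progress as a percentage.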

After a while I reboot and the system comes up, but it has issues.

Now I see:
$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md127 : active raid6 sde1[4] sdc1[9] sdf1[5] sdb1[8] sdd1[7] sdg1[6]
      58593761280 blocks super 1.2 level 6, 512k chunk, algorithm 2 [7/6] [_UUUUUU]
      bitmap: 88/88 pages [352KB], 65536KB chunk

Before this drama it said "bitmap: 0/88", if that matters.
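
If the bitmap matters here, the copy recorded on a member can be dumped like
this (sdb1 as an arbitrary pick; any member should do):

	mdadm --examine-bitmap /dev/sdb1

which shows the bitmap's event counter and how many chunks are dirty.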

TIA

--
Eyal at Home (eyal@xxxxxxxxxxxxxx)


