Re: raid5 missing drive at boot question

Neil Brown wrote:
On Thursday September 22, orion@xxxxxxxxxxxxx wrote:

I created a new 8-disk RAID5 set with no spares.  Now, when I boot, I end up with:


md3 : active raid5 sdp1[7] sdo1[6] sdn1[5] sdm1[4] sdl1[3] sdk1[2] sdi1[0]
      3418687552 blocks level 5, 64k chunk, algorithm 2 [8/7] [U_UUUUUU]

which I believe is a "degraded" array.  I then have to do:

mdadm -M -a /dev/md3 /dev/sdj1

to add the missing disk back in.  I also had to do this right after the array was created.
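
A quick way to confirm that the re-added disk is actually rebuilding, assuming the array is /dev/md3 as above (both commands only read state):

    # detailed array status, including "degraded" / "recovering" and rebuild progress
    mdadm --detail /dev/md3

    # live view of all arrays and any resync/recovery in progress
    cat /proc/mdstat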

Why does this happen?  What am I not understanding?


I suspect that partition sdj1 isn't marked as type 0xfd (Linux raid autodetect), so the kernel's RAID autodetection skips it when assembling arrays at boot.

NeilBrown
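
A minimal sketch of how one might check and fix this, assuming (as above) that the missing member is partition 1 on /dev/sdj; double-check the device name before changing anything:

    # list the partition table; RAID members should show type fd
    # (Linux raid autodetect)
    fdisk -l /dev/sdj

    # change partition 1 to type fd with sfdisk (does not touch the data)
    sfdisk --change-id /dev/sdj 1 fd

The 0xfd type only matters for the kernel's in-boot autodetection of 0.90-superblock arrays; arrays assembled from an initrd or via mdadm.conf do not need it.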

Right on the money, thanks!


--
Orion Poplawski
System Administrator                   303-415-9701 x222
Colorado Research Associates/NWRA      FAX: 303-415-9702
3380 Mitchell Lane, Boulder CO 80301   http://www.co-ra.com
