On Tue Feb 05, 2013 at 12:20:48PM +0000, Brian Candler wrote:

> (Ubuntu 12.04.2, kernel 3.2.0-37-generic)
>
> I created a RAID5 array with 22 data disks and 2 hot spares, like this:
>
> # mdadm --create /dev/md/dbs -l raid5 -n 22 -x 2 -c 512 -b internal /dev/sd{b..y}
>
> However I'm having difficulty understanding the mdstat output.
>
> # cat /proc/mdstat
> Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
> md127 : active raid5 sdw[24] sdy[23](S) sdx[22](S) sdv[20] sdu[19] sdt[18] sds[17] sdr[16] sdq[15] sdp[14] sdo[13] sdn[12] sdm[11] sdl[10] sdk[9] sdj[8] sdi[7] sdh[6] sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
>       61532835840 blocks super 1.2 level 5, 512k chunk, algorithm 2 [22/21] [UUUUUUUUUUUUUUUUUUUUU_]
>       [=>...................]  recovery =  6.0% (176470508/2930135040) finish=706.9min speed=64915K/sec
>       bitmap: 0/22 pages [0KB], 65536KB chunk
>
> unused devices: <none>
> #
>
> Problems:
>
> 1. The UUUU_ and [22/21] suggest that one disk is bad, but is that true?
> And if so, which one?
>
No, that's normal. A RAID5 (or RAID6) array is created in a degraded
form, then the last disk(s) are recovered (it's the quickest way of
getting the array ready for use).

> Output from "dmesg | grep -3 sd" is at the end of this mail, and it doesn't
> show any errors.
>
> All the disks have the same event counter in the metadata:
>
> # for i in /dev/sd{b..y}; do mdadm --examine $i | grep Events; done | sort | uniq -c
>      24   Events : 594
>
> 2. /proc/mdstat shows the member disks numbered 0..20 and 22..24; what
> happened to 21?
>
21 would (I think) be the "missing" one from the original array
creation (with 22..24 as the spares). The numbers themselves don't
really signify anything.

HTH,
    Robin
--
     ___
    ( ' }     |       Robin Hill        <robin@xxxxxxxxxxxxxxx> |
   / / )      | Little Jim says ....                            |
  // !!       |      "He fallen in de water !!"                 |
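
A quick way to confirm both points yourself (a sketch only, assuming the
array assembled as /dev/md127 as in the mdstat output above; device names
will differ on other systems):

  # Show the slot table: the "Number" column is the arbitrary device number
  # seen in /proc/mdstat, while "RaidDevice" is the actual slot in the array.
  # The disk being recovered typically shows a "spare rebuilding" state.
  mdadm --detail /dev/md127

  # Watch the initial recovery progress; once it completes, mdstat should
  # report [22/22] and all-U in the status brackets.
  watch -n 5 cat /proc/mdstat

The array is usable while this initial recovery runs; it mainly just costs
some performance until it finishes.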