RAID starts dirty every boot on Ubuntu

On my Ubuntu server, all of my RAID arrays start dirty on every single boot: the
kernel kicks one drive (any one of the four), re-adds it, and then reconstructs
the array. In the example below it is a raid1 with two spares. When the boot
finishes, the device that was kicked is still in the array, but only as a
spare. This makes booting literally take hours, and I don't know exactly what
it is doing, since it just sits there without a progress indicator. I don't
think it's fsck, because it hasn't even reached fsck in the runlevel yet. It
goes like this:

md: md7 stopped.
md: unbind<sda9>
md: export_rdev(sda9)
md: unbind<sdb9>
md: export_rdev(sdb9)
md: bind<sdc9>
md: bind<sdb9>
md: bind<sdd9>
md: bind<sda9>
md: Kicking non-fresh sdd9 from array!
md: unbind<sdd9>
md: export_rdev(sdd9)
raid1: raid set md7 active with 2 out of 2 mirrors
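
For reference, the "Kicking non-fresh sdd9 from array!" message means the event
counter in that device's md superblock lags behind the other members at
assembly time. Below is a minimal sketch to compare those counters across the
members (it assumes mdadm is installed, that it is run as root, and that the
members really are /dev/sda9 through /dev/sdd9 as in the log above; adjust the
device list for other arrays):

#!/usr/bin/env python3
# Sketch: compare md superblock event counters across the members of md7.
# The member whose counter lags behind the others is the one md reports
# as "non-fresh" and kicks during assembly.
# Assumptions: mdadm installed, run as root, members are /dev/sd[a-d]9.
import re
import subprocess

MEMBERS = ["/dev/sda9", "/dev/sdb9", "/dev/sdc9", "/dev/sdd9"]

def event_count(dev):
    # "mdadm --examine" prints an "Events : N" line read from the superblock.
    out = subprocess.run(["mdadm", "--examine", dev],
                         capture_output=True, text=True, check=True).stdout
    m = re.search(r"Events\s*:\s*([0-9.]+)", out)
    return m.group(1) if m else "unknown"

for dev in MEMBERS:
    print(f"{dev}: events={event_count(dev)}")

If the same device consistently shows a lower event count even after a clean
shutdown, its superblock is not being updated when the array is stopped, which
would match the kick-and-resync behaviour above.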


