Re: md raid10 regression in 2.6.27.4 (possibly earlier)


 



Thomas Backlund wrote:
Peter Rabbitson wrote:
Hi,

Some weeks ago I upgraded from 2.6.23 to 2.6.27.4. After a hard drive
failure I realized that re-adding drives to a degraded raid10 no longer
works: the drive is added as a spare and a resync never starts. Booting
back into the old .23 kernel allowed me to re-add the drives and resync
the array as usual. Attached is a test case that reliably fails on
vanilla 2.6.27.4 with no patches.


I've just been hit with the same problem...

I have a brand new server setup with a 2.6.27.4 x86_64 kernel and a mix
of raid0, raid1, raid5 & raid10 partitions like this:

[...]

And an extra data point:

Booting into 2.6.26.5 triggers an instant resync of the spare disks, so
the regression lies somewhere between 2.6.26.5 and 2.6.27.4.
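
For reference, the check I run against each kernel is roughly the
following. This is only a minimal sketch on loop devices (the file
names, device names and sizes are made up, and it is not Peter's
attached script), but it exercises the same fail / remove / re-add path:

#!/bin/sh
# Build a small raid10 out of loop devices, fail and remove one member,
# then re-add it and see whether a recovery actually starts.
set -e

for i in 0 1 2 3; do
    dd if=/dev/zero of=/tmp/rd$i bs=1M count=64 2>/dev/null
    losetup /dev/loop$i /tmp/rd$i
done

mdadm --create /dev/md9 --run --level=10 --raid-devices=4 \
    /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3

# let the initial resync finish first
while grep -q resync /proc/mdstat; do sleep 1; done

mdadm /dev/md9 --fail /dev/loop3
mdadm /dev/md9 --remove /dev/loop3
mdadm /dev/md9 --re-add /dev/loop3

# good kernels show a recovery in progress here; on 2.6.27.4 the
# re-added disk just sits there as a spare
sleep 2
cat /proc/mdstat

mdadm --stop /dev/md9
for i in 0 1 2 3; do losetup -d /dev/loop$i; done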

If no one has a good suggestion to try, I'll start bisecting tomorrow...
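
The bisect itself should be the standard routine, something like this
(assuming a mainline clone and v2.6.26 / v2.6.27 as the endpoints; the
stable point releases shouldn't matter here):

git bisect start
git bisect bad v2.6.27      # re-add leaves the disk as a spare
git bisect good v2.6.26     # re-add kicks off a resync as expected
# build and boot each kernel git picks, run the loop-device test
# above, then mark the result:
git bisect good             # or: git bisect bad
# ...repeat until git reports the first bad commit
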
--
Thomas
