Hi Bill,

> What can you tell us about how that happened? When (if ever) was it
> running, how was it created, etc, etc.

It was running stably for a long time, with the raid5 having been expanded from 1TB to 2TB. It recently had two spare drives added to it, and again was running fine; however, a new kernel on the box provoked some nasty behaviour with the sata_nv driver. This left the raid not starting at boot-up, as devmapper had claimed one of the spares as its own, so I started the array without that spare, which provoked a re-sync for reasons I'm not clear on. Having cleared the devmapper mapping on the other spare, it was added back into the array as a spare. 15 minutes into the resync, however, the box fell over; whether as a result of the array resync or something else, I don't know, as there's nothing suspicious in the logs. Which brings us to where we are today :(

> You could probably try some things like trying to start it read-only
> using --force, but don't do that yet, if you get it wrong you WILL be
> likely to lose data.

I was tempted to change the UUID on the non-starting members, but the number of spares being different gave me pause.
--
To unsubscribe from this list: send the line "unsubscribe linux-raid"
in the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
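For reference, the cautious read-only approach suggested above might look roughly like the following sketch. The device names (/dev/md0, /dev/sd[abcd]1, the mapping name) are placeholders, not the actual members of this array, and none of this should be run until the on-disk metadata has been inspected and the advice here confirmed:

```shell
# Inspect each candidate member's superblock first (read-only, safe):
mdadm --examine /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# If devmapper has claimed a member device, release it so mdadm can
# open it exclusively:
dmsetup ls                     # list current device-mapper mappings
dmsetup remove <mapping-name>  # free the wrongly-claimed device

# Forced assembly, started read-only; the intent is that a read-only
# start avoids kicking off a resync while things are still uncertain:
mdadm --assemble --readonly --force /dev/md0 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
```

Note that --force can still cause mdadm to update event counts in the superblocks during assembly, so even this is not entirely risk-free; compare the `Events` fields from --examine across members before attempting it.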