On 10/25/2017 05:28 AM, Bjørnar Ness wrote:
> On Oct 24, 2017 20:31, "Phil Turmel" <philip@xxxxxxxxxx> wrote:
>>
>> That sounds like writes to the write-intent bitmap, which happen to all
>> devices in an array while it is degraded.
>>
>> Why do you have 12 spares attached to the array?  That seems a bit
>> excessive.
>
> This is a 60-disk array, and the only reason for having 12 disks as
> spares (it could be any number, really) is the frequency of datacenter
> visits.

Ok.

> There are multiple arrays, and these spares are moved as they are
> needed, but as far as I can tell, it is not easy, if possible at all,
> to distribute spares evenly among arrays when using udev rules and
> --incremental to add them.

If you are using spare groups to allow the spares to move where needed,
then you don't actually need all of these spares attached to your
arrays.  Consider using just one or two.  Then use a cron job or a
custom mdadm --monitor script to add one of the unattached spares to an
array whenever an event consumes one of your active spares.

Phil
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
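[For readers landing on this thread: the spare-group arrangement Phil describes can be sketched in mdadm.conf roughly as below.  The array names, UUID placeholders, and the group label "pool" are illustrative assumptions, not taken from the original poster's setup.]

```
# /etc/mdadm/mdadm.conf -- minimal sketch, details assumed
#
# Arrays that share the same spare-group label form a pool: when one
# array degrades and has no spare of its own, mdadm --monitor will
# migrate a spare from another array in the same group.
ARRAY /dev/md0 metadata=1.2 spare-group=pool UUID=<uuid-of-md0>
ARRAY /dev/md1 metadata=1.2 spare-group=pool UUID=<uuid-of-md1>

# Spare migration only happens while the monitor is running, e.g.:
#   mdadm --monitor --scan --daemonise
#
# A PROGRAM line (or a cron job) can then top the pool back up from
# unattached disks after an event consumes an active spare:
#   PROGRAM /usr/local/sbin/add-spare-from-shelf.sh   # hypothetical script
```

With only one or two spares attached per group, the remaining disks stay unattached until the monitor or cron job pulls one in, which avoids the bitmap writes to a dozen idle spares.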