On Wed, May 15, 2013 at 5:24 PM, Roy Sigurd Karlsbakk <roy@xxxxxxxxxxxxx> wrote:

> - Anything in dmesg?

Nope.

> - What does /etc/mdadm/mdadm.conf say?

Good call. It was generated automatically back when there were 4 spares,
so it had 'spares=4' in it. There are no longer 4 spares, which would
explain the message. I have now removed the 'spares=4' and restarted
mdadm --monitor. Tomorrow we will see whether that fixed it. (A sketch of
the change is in the P.S. below.)

> Also, using 20 disks in a single RAID-6 gives you the same chance of a
> parity+1 error (or worse) as 10 drives in RAID-5. I would really
> recommend using smaller (8+2?) RAID-6 sets and rather use LVM on top
> (which you may be doing already?). Even with proper cooling and
> enterprise drives, 20 drives in a single RAID-6 is asking for trouble…

We are migrating to a 2x10 RAID-60. The major reason for this is the
rebuild time: to rebuild one of the 20 drives we have to read the
remaining 19, during which performance is degraded. On our system a
rebuild would take at least 4 days. (A rough layout and a back-of-envelope
rebuild calculation are in the second P.S.)

/Ole
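
P.S. For the archives, a minimal sketch of the mdadm.conf change. The
device name, array name, and UUID placeholder here are made up for
illustration, not taken from our system:

    # Before: line generated by 'mdadm --detail --scan' while the
    # array still had 4 spares (UUID elided)
    ARRAY /dev/md0 metadata=1.2 spares=4 name=host:0 UUID=<uuid>

    # After: drop the stale spares= count so the monitor stops
    # complaining about the mismatch
    ARRAY /dev/md0 metadata=1.2 name=host:0 UUID=<uuid>

    # One way to restart the monitor so it re-reads the config
    killall mdadm
    mdadm --monitor --scan --daemonise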
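
P.P.S. A minimal sketch of the 2x10 RAID-60 layout we are moving to,
plus the rebuild arithmetic. Device names, drive size, and rebuild rate
are all assumptions for illustration, not measurements from our system:

    # Two 10-drive RAID-6 legs...
    mdadm --create /dev/md10 --level=6 --raid-devices=10 /dev/sd[a-j]
    mdadm --create /dev/md11 --level=6 --raid-devices=10 /dev/sd[k-t]

    # ...striped together with RAID-0 on top
    mdadm --create /dev/md20 --level=0 --raid-devices=2 /dev/md10 /dev/md11

    # Back-of-envelope rebuild time, assuming 3 TB drives and an
    # effective rebuild rate of ~10 MB/s under production load (md
    # throttles resync via /proc/sys/dev/raid/speed_limit_min):
    #   3,000,000 MB / 10 MB/s = 300,000 s, roughly 3.5 days
    # which is the same ballpark as the 4+ days quoted above. With the
    # 2x10 layout a rebuild only reads the 9 surviving drives of one
    # leg instead of all 19, and the other leg runs at full speed.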