On Sun, Mar 20, 2016 at 10:44:57PM +0100, Patrick Tschackert wrote:
> After rebooting the system, one of the harddisks was missing from my
> md raid 6 (the drive was /dev/sdf), so I rebuilt it with a hotspare
> that was already present in the system.
> I physically removed the "missing" /dev/sdf drive after the restore
> and replaced it with a new drive.

What were the exact commands involved in those steps? Can you post the
mdadm --examine output for your disks?

> $ cat /sys/block/md0/md/mismatch_cnt
> 311936608

Basically the whole array is out of whack. This is what you get when
you use --create --assume-clean on disks that are not actually clean,
or when you somehow convince md to integrate a disk that does not have
valid data on it, for example because you copied the partition table
and md metadata - but not everything else - using dd.

Something really bad happened here, and the only person who can
explain it is probably you.

Your best bet is that the data is still valid on n-2 disks. Use
overlays:

https://raid.wiki.kernel.org/index.php/Recovering_a_failed_software_RAID#Making_the_harddisks_read-only_using_an_overlay_file

Assemble the overlay RAID with any 2 disks missing (try all
combinations) and see if you get valid data. A rough sketch of that
procedure is in the P.S. below.

Regards
Andreas Klauer
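
P.S. In case it helps, here is a rough, untested sketch of the overlay
approach from that wiki page. The device names (sda..sdf), the 4G
overlay size, the /dev/md1 name and the ext4 fsck are only examples;
adjust them to your actual setup.

# Create a sparse copy-on-write overlay for each member disk so that
# all writes go to the overlay files and the real disks stay untouched.
for d in sda sdb sdc sdd sde sdf; do
    truncate -s 4G /tmp/overlay-$d
    loop=$(losetup -f --show /tmp/overlay-$d)
    size=$(blockdev --getsz /dev/$d)
    echo "0 $size snapshot /dev/$d $loop P 8" | dmsetup create overlay-$d
done

# Try assembling with two members left out (here: sde and sdf),
# read-only. --run is needed because fewer devices are given than were
# present the last time the array was active.
mdadm --assemble --run --readonly /dev/md1 /dev/mapper/overlay-sd[a-d]

# Inspect the result without writing to it (assuming ext4 here), then
# stop and try the next combination of two missing disks
# (15 combinations for 6 disks).
fsck.ext4 -n /dev/md1        # or: mount -o ro /dev/md1 /mnt && ls /mnt
mdadm --stop /dev/md1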