Sorry for the delay in my reply... This small mail is to let you know that my RAID array is currently recovering, thanks to the valuable input of this mailing list's users. You are great! For the curious, what I did is the following:

# ##### Do not forget the '--assume-clean' as I almost did! ;-(
# mdadm -C /dev/md2 -l 10 -n 4 -c 64 -e 0.90 --assume-clean /dev/sdd1 missing /dev/sdc1 missing
# vgchange -a y
# xfs_repair -n -t 1 -v /dev/my-vg/my-lv
# mount -o ro /dev/my-vg/my-lv /mnt/tmp
# find /mnt/tmp
# du -ks /mnt/tmp/
# umount /mnt/tmp
# #### Required: XFS asked for the log to be replayed
# mount /dev/my-vg/my-lv /mnt/tmp/
# umount /mnt/tmp
# xfs_repair -t 1 -v /dev/my-vg/my-lv
# mdadm --manage /dev/md2 --add /dev/sde1
# mdadm --manage /dev/md2 --add /dev/sdf1

The array is currently at 25% of the recovery process. A bit too soon to say that everything is fine...

By the way, I am now quite sure that my USB controllers (or the USB driver, or something else in the chain other than the disks themselves) are buggy: all the other RAID arrays in my setup are gone! I will try to recover them using the same kind of process, so as to back up all the data.

Do you think that with BBR (since, each time, the trouble started with a sector (write?) error), the problem would be "solved" (or at least postponed until BBR itself runs out of spare sectors)?

Anyway, again, thanks a lot to all of you. Open Source rocks! ;-)

--
Pierre Vignéras
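P.S. In case it helps anyone in the same situation, the rebuild can be watched through the standard md interfaces, and smartctl can help tell a dying disk from a flaky controller or cable. This is just a sketch: it assumes the smartmontools package is installed, and USB bridges often need an extra '-d' option for smartctl to reach the SMART data.

# cat /proc/mdstat                  # array state and rebuild percentage
# mdadm --detail /dev/md2           # per-device status of the recreated array
# watch -n 60 cat /proc/mdstat      # refresh the rebuild status every minute
# smartctl -a /dev/sdd              # SMART health and error log of one member disk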