On Mon, Mar 11, 2013 at 1:12 AM, Mathias Burén <mathias.buren@xxxxxxxxx> wrote:
>> Initially it didn't want to, and I was using mdadm --force. It started
>> to rebuild after a few seconds, though. To my dismay it ended the same
>> way. Only this time I went back through the logs and found when the
>> first backtrace appeared: http://bpaste.net/raw/82819/
>>
>> Here is my raid.status: http://bpaste.net/raw/82820/
>>
>> I have read all the info in
>> https://raid.wiki.kernel.org/index.php/RAID_Recovery#Restore_array_by_recreating_.28after_multiple_device_failure.29
>> and, before I lose any chance of copying the data (most of it, at least),
>> I am trying to force a complete rebuild.
>>
>> I have 4.5 TB used, and right now I have the filesystem mounted and I
>> can use it, yet the kernel is spitting that same trace over and over
>> again. I really don't know what would be the best thing to do right
>> now and would appreciate any help.
>
> So how are the drives doing? smartctl -a for all HDDs please.

http://bpaste.net/raw/82828/

--
Javier Marcet <jmarcet@xxxxxxxxx>
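
For anyone gathering the same information, a minimal sketch of one way to collect the requested smartctl -a output for every member drive in a single pass. The device glob /dev/sd[b-h] and the output file name are assumptions for illustration only; substitute the actual array members (e.g. from /proc/mdstat or the raid.status paste above).

    # Assumed device list -- replace /dev/sd[b-h] with the real array members.
    for dev in /dev/sd[b-h]; do
        echo "===== SMART report for $dev ====="
        smartctl -a "$dev"
    done > smart-report.txt 2>&1

Pasting the resulting smart-report.txt keeps all drives' attributes in one link instead of one paste per disk.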