On 3 October 2011 14:26, Marcin M. Jessa <lists@xxxxxxxxx> wrote:
> Hi guys.
>
> After a rather long thread with many questions about my failed RAID array,
> I'm trying to give it another shot.
> I replaced all the SATA cables and I want to stress test my array.
> Short description of what it was used for when it failed:
> I had a 5-drive (5x2TB Seagate Green Barracuda) RAID 6 array with LVM on top
> of it.
> - One of the LVs was serving as Samba storage
> - One as an NFS-exported storage with web sites
> - 3 LVs had KVM guests installed on them (a heavily hammered web server, a MySQL
> server, and a mail/imap/pop3 server)
>
> The load seems to have stressed my array/the drives to the point where 3 of the
> drives were kicked out of the array, resulting in loss of data.
> It's hard to find the cause - some forum threads on the Internet suggest
> it may be the kernel, some say it could be the SATA controller or the SATA
> cables, and most suggest it's the hard drives.
>
> Now I would like to stress test the array and see whether it fails
> again or not. What would be the best way to do that?
>
> --
> Marcin M. Jessa
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at http://vger.kernel.org/majordomo-info.html

Hi,

I would run badblocks on the md0 device (increase the number of blocks
checked at a time until you use all your available RAM). After that I'd
run dd. I would also check the SMART data on all drives, and the health
of the controller.

/M
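As a rough sketch, the suggested checks might look like the following. Assumptions: the array is /dev/md0 as in the thread, the member drives are /dev/sda through /dev/sde (hypothetical names; check `cat /proc/mdstat` for the real ones), and smartctl from the smartmontools package is installed. All commands are read-only, but they still need root and will put sustained load on the array.

```shell
# Read-only surface scan of the whole array.
# -s/-v: progress and verbose output; -b: block size in bytes;
# -c: blocks tested per pass -- raise this until available RAM is used up.
badblocks -sv -b 4096 -c 65536 /dev/md0

# Sequential read stress with dd (reading only; reversing if/of would destroy data).
dd if=/dev/md0 of=/dev/null bs=1M status=progress

# SMART health verdict (-H) and attribute table (-A) for each member drive.
for d in /dev/sd{a,b,c,d,e}; do
    smartctl -H -A "$d"
done
```

Watching `dmesg` in another terminal while these run is a cheap way to catch link resets or controller errors, which would point at the SATA controller or cables rather than the drives themselves.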