Hi Joachim Otahal,

Thanks for your test on "Debian 2.6.26-21lenny4". If you want to see the oops, keep writing to the RAID5 while you pull 2 disks out; maybe then you can see the error. I think that no matter what I do, even if I pull out all the disks, the kernel should not oops.

On Sat, Mar 20, 2010 at 2:37 AM, Joachim Otahal <Jou@xxxxxxx> wrote:
> Kristleifur Dağason wrote:
>>
>> On Fri, Mar 19, 2010 at 6:20 PM, Joachim Otahal <Jou@xxxxxxx
>> <mailto:Jou@xxxxxxx>> wrote:
>>
>>     jin zhencheng wrote:
>>
>>         Hi,
>>
>>         The kernel I use is 2.6.26.2.
>>
>>         What I did is as follows:
>>
>>         1. I created a RAID5:
>>            mdadm -C /dev/md5 -l 5 -n 4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
>>            --metadata=1.0 --assume-clean
>>
>>         2. dd if=/dev/zero of=/dev/md5 bs=1M &
>>
>>            to write data to this RAID5.
>>
>>         3. mdadm --manage /dev/md5 -f /dev/sda
>>
>>         4. mdadm --manage /dev/md5 -f /dev/sdb
>>
>>         If I fail 2 disks, the OS kernel displays an oops and goes down.
>>
>>         Does somebody know why?
>>
>>         Is this an MD/RAID5 bug?
>>
>>     RAID5 can only tolerate the failure of ONE drive out of ALL its
>>     members. If you want to be able to fail two drives you will have to
>>     use RAID6, or RAID5 with one hot spare (and give it time to rebuild
>>     before failing the second drive).
>>     PLEASE read the documentation on RAID levels, for example on Wikipedia.
>>
>> That is true,
>>
>> but should we get a kernel oops and crash if two RAID5 drives are failed?
>> (THAT part looks like a bug!)
>>
>> Jin, can you try a newer kernel, and a newer mdadm?
>>
>> -- Kristleifur
>
> You are probably right.
> My kernel version is "Debian 2.6.26-21lenny4", and I had no oopses during
> my hot-plug testing on the hardware I use md on. I think it may be the
> driver for his chipset.
>
> Jin:
>
> Did you really use whole drives for testing, or loopback files or
> partitions on the drives? I never did my hot-plug testing with whole
> drives in an array, only with partitions.
>
> Joachim Otahal
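
A minimal reproduction sketch (not from the original mails), assuming root privileges, free loop devices /dev/loop0-3, and a few hundred MB of space in /tmp; the file names and sizes are illustrative only. It mirrors the reported steps but runs them against loopback files instead of whole drives, which also speaks to Joachim's question about loopback testing:

# Create four small backing files and attach them to loop devices.
for i in 0 1 2 3; do
    dd if=/dev/zero of=/tmp/md5-disk$i bs=1M count=128
    losetup /dev/loop$i /tmp/md5-disk$i
done

# Same array layout as in the original report, but on loop devices.
mdadm -C /dev/md5 -l 5 -n 4 --metadata=1.0 --assume-clean \
    /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3

# Keep a write in flight, then fail two members while it is still running.
dd if=/dev/zero of=/dev/md5 bs=1M &
DD_PID=$!

mdadm --manage /dev/md5 -f /dev/loop0   # first failure: array degrades but keeps running
mdadm --manage /dev/md5 -f /dev/loop1   # second failure: array should fail cleanly, not oops

# Clean up.
kill $DD_PID 2>/dev/null
mdadm --stop /dev/md5
for i in 0 1 2 3; do losetup -d /dev/loop$i; done
rm -f /tmp/md5-disk[0-3]

If the oops only shows up with whole drives on a real controller and not with loop devices, that would point at the controller driver rather than the md layer, as Joachim suspected.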