failing a drive while RAID5 is initializing

Hi guys,

I just wanted to make sure that the behaviour I observed is as expected. With kernel 2.6.35.11, under Debian Lenny, I created a RAID5 array with 5 drives, partitioned it, formatted a partition with ext3 and mounted it. Then, I put some load onto the filesystem with:

dd if=/dev/urandom of=/mnt/testfile
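
For completeness, the array and filesystem were set up roughly like this (device names other than /dev/sdc and /dev/md0 are from memory, so treat them as an approximation):

mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
fdisk /dev/md0              (one partition spanning the array)
mkfs.ext3 /dev/md0p1
mount /dev/md0p1 /mnt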

The array started initializing. At that point, I needed to fail and replace a drive for some unrelated testing, so I did that with:

mdadm /dev/md0 -f /dev/sdc

The result was a broken filesystem that was remounted read-only, plus a string of errors in dmesg. Theoretically, I would expect that failing a drive on a RAID5, even during initialization, should leave the array degraded (without redundancy) but still usable. Am I wrong? Is there something special about the initialization stage of RAID5 that makes a drive failure at that point fatal? If not, then I have a bug to report, and I'll try to reproduce it for you.
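
When I try to reproduce it, I'll capture the state of the array before and after failing the drive, along the lines of:

cat /proc/mdstat            (resync progress and array state)
mdadm --detail /dev/md0     (device roles, failed/spare counts)
dmesg | tail -n 100         (the md and ext3 errors)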

If initialization is special, does that mean that when creating a RAID5 array it is advisable to *wait* until it has fully initialized before using it? Otherwise, one risks losing any data written to the array during the initialization phase if a drive fails at that point.
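
If waiting is indeed the recommendation, I assume something like the following after creating the array (or simply watching /proc/mdstat until the resync completes) would be the way to do it:

mdadm --wait /dev/md0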

Many thanks for any input,
Iordan Iordanov