Re: RAID10 status when you remove the first disk and last disk

On Tue, 20 Jul 2010 19:34:06 -0700
"Michael Li" <michael.li@xxxxxxxxxxx> wrote:

> Hi,
> 
> I am using RAID10 with far-copies 2 across 4 disks. When I removed the
> first disk and the last disk, I expected the array to be failed, but
> mdadm -D /dev/md2 reported it as degraded. When I try to read or write,
> I get I/O errors. And when I added the two disks back, the array
> started recovering. Is that normal? Why is it designed like this?
> 
> Does anyone know the details about that? Thanks so much!
> 
> 

Sounds like a bug.  You have definitely lost data if the first and last
devices of such an array go missing, so it should not try to recover a spare
into either slot.
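For anyone following along, here is why the first and last disks are a fatal
combination. This is a minimal illustrative sketch, not md's actual code: it
assumes the standard far-2 placement, where chunk c is written to disk
(c mod n) in the first half of each device and mirrored to disk
((c + 1) mod n) in the second half, so data is lost exactly when two
cyclically adjacent disks both fail.

```python
# Sketch of md RAID10 "far 2" copy placement over n disks (assumed model,
# not mdadm source): chunk c lives on disk (c % n), mirrored on disk
# ((c + 1) % n).  Data is lost iff some chunk loses both copies, i.e. iff
# two cyclically adjacent disks fail together.

from itertools import combinations

def data_lost(failed, n_disks=4):
    """True if the failed-disk set loses both copies of some chunk."""
    failed = set(failed)
    return any(c % n_disks in failed and (c + 1) % n_disks in failed
               for c in range(n_disks))

if __name__ == "__main__":
    for pair in combinations(range(4), 2):
        status = "data lost" if data_lost(pair) else "still readable"
        print(pair, status)
```

In a 4-disk far-2 array, disks 0 and 3 are cyclically adjacent (the last
disk's far copies land on the first disk), so losing both destroys every
chunk whose primary copy sits on disk 3 -- consistent with the I/O errors
reported above, and with why the array should refuse to rebuild.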

What kernel version are you using?

NeilBrown
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

