Re: raid0 fail to detect drive failure

On Wed, 2005-11-02 at 12:08 +0300, Michael Tokarev wrote:
> Ming Zhang wrote:
> > Hi folks
> > 
> > I have a raid0 on top of 2 SATA disks, sda and sdb. After I hot-unplug
> > sda, the raid0 still shows online and active. Running dd to write to it
> > fails and dmesg shows SCSI I/O errors, but /proc/mdstat shows everything
> > is OK.
> 
> Since raid0 isn't really RAID (as in Redundant) and can't really do
> anything with IO errors on component devices, this behaviour
> (returning IO errors to the application) is the only sane way
> to go.  It should not fail, just as when your disk drive has
> a bad sector on it, the whole partition (or whole disk) containing
> that bad sector isn't "marked as failed".  So what you see is
> exactly correct behaviour, in my opinion anyway.
> 
> /mjt

After I sent that email, I read the raid0 code: there is no error handling
at all, so now I know why it looks like that.
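For reference, the request path in the 2.6-era drivers/md/raid0.c boils
down to something like the sketch below. This is simplified, not the
literal kernel source, and the helper map_sector() stands in for the
inline chunk arithmetic in the real code: each bio is remapped onto a
component device and resubmitted, and there is no error path that could
ever mark a member faulty.

    static int raid0_make_request(request_queue_t *q, struct bio *bio)
    {
            mddev_t *mddev = q->queuedata;
            raid0_conf_t *conf = mddev_to_conf(mddev);
            struct strip_zone *zone;
            mdk_rdev_t *tmp_dev;
            sector_t rsect;

            /* Find the zone this sector falls in, and the component
             * device and offset it maps to.  map_sector() here is an
             * illustrative stand-in, not an actual kernel helper. */
            zone = find_zone(conf, bio->bi_sector);
            tmp_dev = map_sector(mddev, zone, bio->bi_sector, &rsect);

            /* Redirect the bio at the component device... */
            bio->bi_bdev = tmp_dev->bdev;
            bio->bi_sector = rsect + tmp_dev->data_offset;

            /* ...and hand it back to the block layer for resubmission.
             * An I/O error on the component goes straight back to the
             * caller; nothing here touches the array state, which is
             * why /proc/mdstat keeps reporting the array as active. */
            return 1;
    }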

In my case, a 2-disk raid0, one broken disk means 50% of the sectors on
the device are bad. I would like to call that a failed device, and I bet
you would not use such a device any more even if you don't call it
failed. ;)

But as you said, raid0 is not a real RAID, so maybe that is why there is
no error checking here.

thanks!

Ming


