Re: I/O error reading from raid 1 device but not slave devices


 



On Tue, 30 Jun 2015 10:46:38 -0400 Nate Clark <nate@xxxxxxxxxx> wrote:

> On Mon, Jun 29, 2015 at 5:35 PM, Nate Clark <nate@xxxxxxxxxx> wrote:
> > Hello,
> >
> > I have encountered a strange error while reading from a raid 1 device.
> > If I read from the md device I encounter an I/O error, however if I
> > read from the underlying devices there is no issue.
> 
> It appears both drives in the array have identical bad block lists. I
> am not sure why any blocks were marked bad, since I don't see any
> drive I/O errors in the logs and the SMART output from each drive
> shows they are healthy.

That was my guess, but you confirmed before I got around to posting :-)

One way you could get bad blocks on perfectly healthy drives is if you
previously had an unhealthy drive.
Imagine a degraded RAID1 with a drive that has a couple of bad blocks.
You add a spare; it recovers, but because the bad blocks cannot be
read, they are added to the bad-block list on the new device.
Then you remove the sick device, add a brand new one, and rebuild it -
from the first spare you added.
It will get the bad blocks "copied" onto it as well.

The blocks will stay 'bad' until something is written to them.  Then
they will become good.
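To see whether this is what happened, you can inspect the per-device
bad-block lists md keeps in sysfs. A rough sketch below, assuming the
array is /dev/md0 with members /dev/sdb1 and /dev/sdc1 (device names
are illustrative, adjust to your setup):

```shell
# Each line of bad_blocks is "<first-sector> <length>"; identical
# output on both members is the signature of the rebuild-propagation
# scenario described above.
cat /sys/block/md0/md/dev-sdb1/bad_blocks
cat /sys/block/md0/md/dev-sdc1/bad_blocks

# The on-disk bad-block log can also be read from the superblock
# (run against a member device, not the md device itself):
mdadm --examine-badblocks /dev/sdb1

# If the drives really are healthy, writing to the affected sectors
# clears the entries. Alternatively, recent mdadm versions can drop
# the bad-block list entirely at assembly time (stop the array first):
mdadm --stop /dev/md0
mdadm --assemble /dev/md0 --update=force-no-bbl /dev/sdb1 /dev/sdc1
```

Note that `--update=force-no-bbl` discards the list even when it has
entries, so only use it once you are satisfied the blocks are readable
on the underlying devices.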

NeilBrown
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


