Re: Buffer I/O error on dev md5, logical block 7073536, async page read

On 30/10/16 16:43, Marc MERLIN wrote:
> And there isn't one good drive between the two: the bad blocks are identical
> on both drives and must have happened at the same time, due to those
> cable-induced IO errors I mentioned.
> Too bad that mdadm doesn't seem to account for the fact that it could be
> wrong when marking blocks as bad, and doesn't seem to give a way to recover
> from this easily...
> I'll do more reading, thanks.

Reading the list, I've picked up that bad blocks somehow seem to get
propagated from one drive to another. So if one drive records a bad block,
the same block seems to get marked as bad on the other drives too :-(
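For anyone wanting to check what md has actually recorded, here's a rough
sketch of how to inspect (and, carefully, drop) the bad-block log. Device
names (/dev/md5, /dev/sda1, /dev/sdb1) are placeholders for whatever your
array and members actually are:

```shell
#!/bin/sh
# Sketch only -- /dev/md5, /dev/sda1, /dev/sdb1 are placeholder names.

# 1) Inspect the kernel's per-member bad-block lists via sysfs.
#    Empty output means no bad blocks are recorded for that member.
for f in /sys/block/md5/md/dev-*/bad_blocks; do
    [ -e "$f" ] || continue   # safe no-op if the array doesn't exist
    echo "== $f =="
    cat "$f"
done

# 2) mdadm can also dump the on-disk bad-block log from a member superblock:
#      mdadm --examine-badblocks /dev/sda1
#
# 3) If you're confident the entries are bogus (e.g. cable-induced errors),
#    stop the array and reassemble with --update=force-no-bbl, which removes
#    a non-empty bad-block log (--update=no-bbl only removes an empty one):
#      mdadm --stop /dev/md5
#      mdadm --assemble /dev/md5 --update=force-no-bbl /dev/sda1 /dev/sdb1
```

Obviously don't drop the log unless you're sure the underlying sectors are
actually readable, or you'll just hide real errors.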

Oh - and as for bad-block support being obsolete, isn't there a load of work
being done on it at the moment? For hardware RAID, I believe, which
presumably does not handle bad blocks the way Phil thinks all modern
drives do? (Not surprising - hardware RAID is regularly slated for being
buggy and not a good idea; this is probably more of the same...)

Cheers,
Wol
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


