As I recall, at least in my U_ situations, when an array goes U_ the
'failed' disk is no longer addressable at all until a reboot. Next
time it happens I'll try, after the reboot, reading the entire surface
before re-writing it, to see if the read alone picks up any errors. I
could see how a read would fail until the disk was told to write,
after which the whole surface would work again. If this is common
behavior for disks, would that perhaps be something the raid code
could recognize and work around?

-Justin

On Wed, Mar 20, 2002 at 01:50:49PM +0100, Jakob Østergaard wrote:
> On Tue, Mar 19, 2002 at 06:18:36PM -0500, Justin wrote:
> > FWIW, I get the same thing...
> >
> > Some of my raid1 arrays tend to become U_ after a few months
> > of light use. Rebooting the box allows the device to be
> > addressable again, and the disk is not, in fact, bad.
> >
> > I can do a complete dd to the "bad" disk without error, then
> > raidhotadd it back in again as well. A few months of uptime
> > later, it is U_ again.
>
> Can you complete a dd *from* the bad disk?
>
> It's common to see bad blocks on a disk, the raid dropping the
> disk, and then everything being back to normal after a full dd
> to the disk. This happens because the disk can re-allocate the
> bad blocks during writes, whereas a read from a bad block will
> fail.
>
> --
> ................................................................
> :   jakob@unthought.net   : And I see the elder races,         :
> :.........................: putrid forms of man                :
> :   Jakob Østergaard      : See him rise and claim the earth,  :
> :        OZ9ABN           : his downfall is at hand.           :
> :.........................:............{Konkhra}...............:
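
P.S. Here is roughly what I have in mind for the surface test, as a
sketch only; /dev/hdc and /dev/md0 are made-up example names, so
substitute the actual dropped disk and array before running anything:

  # Read pass over the whole surface: a pending bad sector should
  # make dd stop with an I/O error here, without touching the data.
  dd if=/dev/hdc of=/dev/null bs=64k

  # Write pass: re-writing every sector gives the drive a chance to
  # re-allocate bad blocks from its spare pool. This DESTROYS all
  # data on the disk, so only run it on the dropped mirror half.
  dd if=/dev/zero of=/dev/hdc bs=64k

  # Re-run the read pass; if it now completes, the drive re-mapped
  # the bad sectors, and the disk can go back into the array:
  raidhotadd /dev/md0 /dev/hdc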