Badblocks and degraded array.


 



Firstly, I'm not in need of assistance, just looking for information.

I had a system with 4x 500 GB disks in RAID 5.  One drive (slot 2) was
kicked.  I removed and reseated the drive (which is OK).  During the rebuild,
the array hit a bad block on another drive (slot 1), which was then kicked as
well.  When there is no redundancy left, is it possible to avoid kicking a
drive over a single bad block?

In the end, my solution was to create a dm device using linear and zero
targets as needed (zero over the bad block), then a snapshot target on top of
that, since it was not possible to write to the section covered by the zero
target.  I had LVM on top of the RAID, and my /usr volume was the one
containing the bad block.  Fortunately, no files occupied that block.  I
dumped the /usr volume elsewhere, removed all the mappings (md, dm, and LVM),
assembled the array again, and dumped the volume back, which corrected the
bad sector.  All of this was done from another installation.
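For anyone curious what such a mapping looks like, here is a rough sketch of
building the linear+zero table (the device name, disk size, and bad-sector
offset below are made-up examples; the real offset would come from dmesg or
smartctl):

```shell
#!/bin/sh
# Build a device-mapper table that maps a disk normally except for a
# small unreadable region, which is replaced by the zero target.
# All values here are hypothetical placeholders.
DEV=/dev/sdb            # hypothetical failed array member
DEV_SECTORS=976773168   # total 512-byte sectors on a ~500 GB disk
BAD_START=123456000     # first unreadable sector (from the kernel log)
BAD_LEN=8               # bad blocks seem to come in runs of 8 sectors

AFTER_START=$((BAD_START + BAD_LEN))
AFTER_LEN=$((DEV_SECTORS - AFTER_START))

# dm table format: <logical start> <length> <target> <target args>
cat <<EOF
0 $BAD_START linear $DEV 0
$BAD_START $BAD_LEN zero
$AFTER_START $AFTER_LEN linear $DEV $AFTER_START
EOF

# Feeding this to dmsetup (as root) would look roughly like:
#   dmsetup create patched < table.txt
# and a writable snapshot on top (COW device required) something like:
#   dmsetup create patched-cow \
#     --table "0 $DEV_SECTORS snapshot /dev/mapper/patched $COWDEV P 8"
```

The snapshot layer is what lets md write to the region backed by the zero
target during assembly; writes land in the COW store instead.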

This system will be retired anyway, so the data isn't really important.  But
having the experience is.

On a side note, it seems that every time I encounter a bad sector on a drive,
it's always a run of 8 sectors.  Does anyone know whether hard drives have
used 4K internal sectors for longer than AF drives have officially existed?
This disk reports 512-byte physical sectors according to fdisk.  I've even
noticed the same pattern on IDE drives.
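The arithmetic behind that suspicion, plus a quick way to see what the
kernel reports for a drive ("sda" below is just a placeholder):

```shell
#!/bin/sh
# A run of 8 bad sectors at 512 bytes each spans exactly 4096 bytes --
# one 4 KiB internal sector, which would explain the pattern.
RUN_BYTES=$((8 * 512))
echo "bad run covers $RUN_BYTES bytes"

# What the kernel reports for a disk; AF drives typically show
# 512 logical / 4096 physical ("sda" is a placeholder device):
for q in logical_block_size physical_block_size; do
    f="/sys/block/sda/queue/$q"
    if [ -r "$f" ]; then
        printf '%s: %s\n' "$q" "$(cat "$f")"
    fi
done
```

Of course, a pre-AF drive using 4K sectors internally would still report
512/512 to the host, so the sysfs numbers alone can't settle it.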

-- 
 Microsoft has beaten Volkswagen's world record.  Volkswagen only created 22
 million bugs.
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



