On Wed, 25 Mar 2015 19:14:00 -0400 Wakko Warner <wakko@xxxxxxxxxxxx> wrote:

> Firstly, I'm not in need of assistance, just looking for information.
>
> I had a system with 4x 500GB disks in RAID 5. One drive (slot 2) was
> kicked. I removed and reseated the drive (which is OK). During the
> rebuild, it hit a bad block on another drive (slot 1), which it also
> kicked. Is it possible, when there is no redundancy left, to not kick
> a drive that hits a bad block?

Only if you have bad-block-logs enabled. This is a relatively new feature.
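For reference, where a bad-block log exists it can be inspected either
from the member superblocks or via sysfs on an assembled array. A
minimal sketch, assuming 1.x metadata and placeholder device names:

    # Per-device bad-block log recorded in the md superblock
    mdadm --examine-badblocks /dev/sdb1

    # The same information via sysfs while the array is running
    cat /sys/block/md0/md/dev-sdb1/bad_blocks

Empty output means no bad blocks are recorded (or that the array has no
bad-block log at all).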
> In the end, my solution was to create a dm target using linear and
> zero as needed (zero where the bad block was), then a snapshot target
> on top of that, since there was no possibility of writing to the
> section backed by the zero target. I had LVM on top of the RAID, and
> my /usr volume was the one containing the bad block. Fortunately, no
> files were in that bad block. I dumped the usr volume elsewhere,
> removed all the mappings (md, dm, and lvm), assembled the array again,
> and dumped the volume back, which corrected the bad sector. All of
> this was done from another installation.
>
> This system will be retired anyway, so the data isn't really useful.
> But having the experience is.
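A minimal sketch of that dm stacking, with made-up geometry (8 bad
sectors starting at sector 1000000 of a 976000000-sector member,
/dev/sdb1; the COW device /dev/sdc1 is also a placeholder):

    # Present the member with the 8 bad sectors replaced by dm-zero
    dmsetup create patched <<EOF
    0 1000000 linear /dev/sdb1 0
    1000000 8 zero
    1000008 974999992 linear /dev/sdb1 1000008
    EOF

    # Transient snapshot on top to make the device safely writable:
    # writes land in the COW device instead of being silently dropped
    # by the zero segment
    dmsetup create patched-rw --table \
        "0 976000000 snapshot /dev/mapper/patched /dev/sdc1 N 8"

The array can then be assembled with /dev/mapper/patched-rw standing in
for the member with the bad sector.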
> On a side note, it seems that every time I encounter a bad sector on
> a drive, it's always 8 sectors. Does anyone know whether hard drives
> have used 4k sectors internally for longer than AF drives have been
> around? This disk is 512-byte physical according to fdisk. I've even
> noticed this on IDE drives.

Linux tends to do IO in multiples of 4k, so it is unlikely to report a
smaller block. That may or may not be relevant to your particular
experiences.
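The sector sizes a drive advertises can be checked directly (with
/dev/sda as a placeholder):

    # Logical and physical sector sizes as reported by the drive
    blockdev --getss /dev/sda
    blockdev --getpbsz /dev/sda

    # The same values via sysfs
    cat /sys/block/sda/queue/logical_block_size
    cat /sys/block/sda/queue/physical_block_size

A 512e Advanced Format drive reports 512 logical / 4096 physical; a
512/512 report is consistent with the 8-sector granularity coming from
the kernel's 4k IO rather than from the drive itself.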
NeilBrown