Check the messages file and see whether it has reported bad sectors in the last few weeks. Or run a read test with dd if=/dev/sda of=/dev/null until it hits something, correct that sector, then continue on. Or run repeated long/selective tests to see if you can find them.

That said, I had a Seagate disk on which I was able to get all of the pending sectors fixed, but I still had to remove the disk from the RAID because it would randomly pause for 7 seconds while reading sectors that had not yet been classified as pending. I tried a number of things to get the disk to behave and/or remap those bad sectors, but finally gave up on that disk and just replaced it (out of warranty), as I could never get it to behave right.

On Sat, Jun 7, 2014 at 7:52 PM, Chris Murphy <lists@xxxxxxxxxxxxxxxxx> wrote:
> I wrote:
>> How can there still be pending bad sectors, and yet no error and LBA reported?
>
> So I started another -t long test. And it comes up with an LBA not previously reported.
>
> # 1  Extended offline    Completed: read failure       60%      1214         430234064
>
> # dd if=/dev/zero of=/dev/sda seek=430234064 count=8
> dd: writing to '/dev/sda': Input/output error
> 1+0 records in
> 0+0 records out
> 0 bytes (0 B) copied, 3.63342 s, 0.0 kB/s
>
> On this sector the technique fails.
>
> # dd if=/dev/zero of=/dev/sda seek=430234064 count=8 oflag=direct
> 8+0 records in
> 8+0 records out
> 4096 bytes (4.1 kB) copied, 3.73824 s, 1.1 kB/s
>
> This technique works.
>
> However, this seems like a contradiction. A complete -t long results in:
>
> # 1  Extended offline    Completed without error       00%      1219         -
>
> and yet:
>
> 197 Current_Pending_Sector  0x0022   100   100   000    Old_age   Always       -       16
>
> How are there 16 pending sectors, with no errors found during the extended offline test? To fix this without SMART reporting the affected LBAs, I'd have to write to every sector on the drive. This seems like bad design or implementation.
>
> Chris Murphy
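For reference, the read-scan-and-overwrite cycle discussed above can be sketched roughly as follows. This is a hedged sketch, not a tested repair procedure: /dev/sdX, the LBA, and the 512-byte logical sector size are assumptions you must verify for your own drive (e.g. with blockdev --getss), so the destructive dd invocations are shown only as comments, and the runnable part operates on a scratch file instead of a real disk.

```shell
#!/bin/sh
# Sketch of the bad-sector workflow from this thread.
# Assumptions: 512-byte logical sectors, drive at /dev/sdX,
# LBA taken from the smartctl self-test log quoted above.
set -e

LBA=430234064      # LBA reported by 'smartctl -t long' in the thread
SECTOR=512         # verify with: blockdev --getss /dev/sdX

# dd counts seek/skip in units of bs (default 512 bytes), so
# seek=$LBA targets byte offset LBA*SECTOR on the device:
OFFSET=$(( LBA * SECTOR ))
echo "byte offset of suspect sector: $OFFSET"

# 1) Read-scan until an error, note the failing offset, then resume
#    past the bad spot (commands commented out; they touch real disks):
#      dd if=/dev/sdX of=/dev/null bs=1M
#      dd if=/dev/sdX of=/dev/null bs=1M skip=<MiB_past_the_error>
#
# 2) Overwrite the suspect sector so the drive can remap it.
#    oflag=direct bypasses the page cache, which the thread found
#    necessary; count=8 writes 4096 bytes, i.e. one physical sector
#    on a 512e drive:
#      dd if=/dev/zero of=/dev/sdX seek=$LBA count=8 oflag=direct

# Runnable demonstration of the overwrite step on a scratch file:
IMG=$(mktemp)
truncate -s 1M "$IMG"
dd if=/dev/zero of="$IMG" bs=$SECTOR seek=8 count=8 conv=notrunc 2>/dev/null
rm -f "$IMG"
```

In the thread, the buffered write returned EIO while the O_DIRECT write to the same LBA succeeded; the reason is not explained there, so treat the oflag=direct variant as the reliable form of the overwrite.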