Hello,

On 11/24/2010 06:00 PM, Greg Freemyer wrote:
>>>> Hi, I'm trying to recover data from a damaged hard disk, which has
>>>> plenty of bad sectors, but also has many good ones. The problem is
>>>> that when a bad sector is found, the drive keeps trying to read it,
>>>> instead of giving up and just moving on, so the average data read
>>>> rate is around 5 KB/s. At such rates, it will take more than a year
>>>> to finish. Since I'm using GNU ddrescue (which logs bad sectors, so
>>>> one can try them again later), my goal is not to waste time on
>>>> errors, leaving the retries to a second round.
>>>>
>>>> So, my first attempt was to drastically lower the timeouts in
>>>> libata-eh.c. It seems to have improved a little, but I'm not getting
>>>> more than 12 KB/s.
>>>>
>>>> Is there any way to minimize retries and make errors finish faster?

You can directly issue r/w commands using SG_IO, where you can control
retry and timeout explicitly.

Hmm... it might be a good idea to allow userland to set the FAILFAST
bit on a block device?

-- 
tejun
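[Editor's note: a minimal, untested sketch of the SG_IO approach described
above. The device path /dev/sda, the LBA, the 512-byte sector size, and the
1-second timeout are placeholder assumptions; the point is that the timeout
in the sg_io_hdr is set by the caller, and pass-through commands are
generally not retried by the SCSI midlayer the way normal block reads are.]

    /* Issue a single READ(10) through the SG_IO ioctl with a short,
     * caller-chosen timeout so a bad sector fails back to userspace
     * quickly instead of being retried for a long time.
     * Build with gcc; needs root (or raw-I/O capability) to run. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <scsi/sg.h>

    int main(void)
    {
        int fd = open("/dev/sda", O_RDONLY | O_NONBLOCK); /* example device */
        if (fd < 0) {
            perror("open");
            return 1;
        }

        unsigned char cdb[10] = { 0x28 };   /* READ(10) opcode */
        unsigned int lba = 0;               /* example start LBA */
        unsigned char buf[512];             /* one 512-byte sector */
        unsigned char sense[32];

        cdb[2] = (lba >> 24) & 0xff;        /* LBA, big-endian, bytes 2-5 */
        cdb[3] = (lba >> 16) & 0xff;
        cdb[4] = (lba >> 8) & 0xff;
        cdb[5] = lba & 0xff;
        cdb[8] = 1;                         /* transfer length: 1 block */

        struct sg_io_hdr io;
        memset(&io, 0, sizeof(io));
        io.interface_id = 'S';
        io.cmd_len = sizeof(cdb);
        io.cmdp = cdb;
        io.dxfer_direction = SG_DXFER_FROM_DEV;
        io.dxferp = buf;
        io.dxfer_len = sizeof(buf);
        io.sbp = sense;
        io.mx_sb_len = sizeof(sense);
        io.timeout = 1000;                  /* timeout in milliseconds */

        if (ioctl(fd, SG_IO, &io) < 0) {
            perror("SG_IO");
        } else if (io.status || io.host_status || io.driver_status) {
            fprintf(stderr, "read failed: status=%#x host=%#x driver=%#x\n",
                    io.status, io.host_status, io.driver_status);
        } else {
            printf("read %u bytes from LBA %u\n",
                   io.dxfer_len - io.resid, lba);
        }

        close(fd);
        return 0;
    }

On failure the sense buffer can be inspected to decide whether to log the
sector and skip ahead, which matches ddrescue's log-now, retry-later
strategy.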