Mark Lord wrote:
>> Timeout on FLUSH_EXT. That's a bad sign. Patch to retry FLUSH is
>> pending but at any rate FLUSH failure is often accompanied by loss of
>> data and XFS is doing the right thing of giving up on it.
> ..
>
> Tejun, are we *sure* that's really a timeout?
> The status shows 0x40 "drive ready" there, aka. "command complete".

Heh... on timeout, libata EH doesn't touch the status register, as some
controllers lock the whole machine up on that read, so the 0x40 is just
the fill value libata used during qc initialization -- it isn't anything
the drive actually reported. The EH output definitely needs
clarification on that point (a sketch of what's going on is at the end
of this mail).

> I have a client who is also seeing this exact scenario on 750GB drives,
> using a patched SLES10 kernel (2.6.16 + libata from 2.6.18 or so).

Hmm... most of the FLUSH timeouts I've seen have come down to either a
dying drive or a bad PSU. There just isn't much that can go wrong on
the driver side. IIRC, there was one problem where the unused part of
the TF wasn't cleared, but that was the only one.

> Smartctl output is clean (no logged errors), and the drives themselves
> are fine after a reboot -- necessary since libata/scsi kicked the drive out
> of the RAID array.
>
> Something strange is going on here.

Any chance you can trick the client into hooking the drive up to a
separate PSU?

Thanks.

--
tejun
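
P.S. A minimal userspace sketch of the fill-value situation described
above. The struct and function names are hypothetical stand-ins, not
actual libata code; only the ATA_DRDY value (0x40) matches what's in
<linux/ata.h>.

#include <stdio.h>
#include <stdint.h>

#define ATA_DRDY 0x40	/* "device ready" bit, same value as <linux/ata.h> */

/* hypothetical stand-in for a queued command's result taskfile */
struct fake_result_tf {
	uint8_t status;
};

/* qc init pre-fills the result TF with a sane-looking default */
static void fake_qc_init(struct fake_result_tf *tf)
{
	tf->status = ATA_DRDY;	/* fill value, no hardware access */
}

/* on timeout, EH deliberately skips the status register read, since a
 * wedged controller can lock the whole machine up on that access */
static void fake_eh_timeout(struct fake_result_tf *tf)
{
	(void)tf;		/* intentionally no register read here */
}

int main(void)
{
	struct fake_result_tf tf;

	fake_qc_init(&tf);
	fake_eh_timeout(&tf);

	/* the 0x40 the user sees is the init fill, not a drive status */
	printf("reported status: 0x%02x (DRDY set, but never read)\n",
	       tf.status);
	return 0;
}

The point being: a status byte of 0x40 in a timeout report tells you
nothing about what the drive actually did.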