Hello,

On 08/20/2010 10:26 AM, Kiyoshi Ueda wrote:
> I think that's correct and changing the priority of DM_ENDIO_REQUEUE
> for REQ_FLUSH down to the lowest should be fine.
> (I didn't know that FLUSH failure implies data loss possibility.)

At least on ATA, FLUSH failure implies that data is already lost, so
the error can't be ignored or retried.

> But the patch is not enough, you have to change target drivers, too.
> E.g. As for multipath, you need to change
> drivers/md/dm-mpath.c:do_end_io() to return error for REQ_FLUSH
> like the REQ_DISCARD support included in 2.6.36-rc1.

I'll take a look, but is there an easy way to test mpath other than
having fancy hardware?

> By the way, if these patch-set with the change above are included,
> even one path failure for REQ_FLUSH on multipath configuration will
> be reported to upper layer as error, although it's retried using
> other paths currently.
> Then, if an upper layer won't take correct recovery action for the error,
> it would be seen as a regression for users. (e.g. Frequent EXT3-error
> resulting in read-only mount on multipath configuration.)
>
> Although I think the explicit error is fine rather than implicit data
> corruption, please check upper layers carefully so that users won't see
> such errors as much as possible.

Argh... then it will have to discern why the FLUSH failed. It can retry
on transport errors, but if the command was aborted by the device it
should report the error upwards. Maybe just turn off barrier support in
mpath for now?

Thanks.

--
tejun