* Jun'ichi Nomura
> IMO, it's not a design limitation and should be solved in the future.
That's good to know. Hopefully someone will fix it...
> The current code uses 'no_flush' unconditionally. I think it's
> possible to improve it to set the flag only when queue_if_no_path is
> enabled.
It certainly would be an improvement if no_flush and queue_if_no_path could be automatically disabled for the short time it takes to reload the multipath map with an increased size. For extra safety, the tool could perhaps also verify that no paths had failed before doing so. If everything is working fine, it seems rather unlikely that all paths will fail and all HBA driver timeouts will expire in the second it takes to reload the multipath map.
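As a rough illustration of that idea (a sketch only, not code from multipath-tools): the queueing behaviour of a live map can be toggled through the dm-multipath target's message interface, the same mechanism used by "dmsetup message <map> 0 fail_if_no_path". This assumes the public libdevmapper API; the helper name mpath_send_message is made up here.

#include <libdevmapper.h>

/*
 * Send a message to the dm-multipath target of the given map,
 * equivalent to "dmsetup message <name> 0 <msg>". The multipath
 * target accepts "queue_if_no_path" and "fail_if_no_path" to
 * toggle queueing on a live map.
 */
static int mpath_send_message(const char *name, const char *msg)
{
	struct dm_task *dmt;
	int r = 0;

	if (!(dmt = dm_task_create(DM_DEVICE_TARGET_MSG)))
		return 0;
	if (!dm_task_set_name(dmt, name))
		goto out;
	if (!dm_task_set_sector(dmt, 0))
		goto out;
	if (!dm_task_set_message(dmt, msg))
		goto out;
	r = dm_task_run(dmt);
out:
	dm_task_destroy(dmt);
	return r;
}

/*
 * Around the resize/reload, roughly:
 *
 *   mpath_send_message(map, "fail_if_no_path");    disable queueing
 *   ...verify no paths are failed, reload the map...
 *   mpath_send_message(map, "queue_if_no_path");   re-enable queueing
 */

The reload step itself is elided here; the point is only that re-enabling queue_if_no_path afterwards is a single message, so the unprotected window can be kept very short.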
> Hmm, sorry, I misread your first message. Since the feature was added
> after 0.4.7, your problem may be caused by something else. The call to
> dm_task_no_flush() was added after the release of 0.4.7; it should be
> in dm_simplecmd() in libmultipath/devmapper.c.
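For reference, a minimal standalone sketch of the suspend-without-flush pattern being described, written against the public libdevmapper API. It is not the actual dm_simplecmd() code from 0.4.8; the helper name simple_dm_task is made up, and error handling is minimal (build with something like "cc noflush.c -ldevmapper"):

#include <stdio.h>
#include <libdevmapper.h>

/*
 * Issue a suspend or resume ioctl on a map, optionally with the
 * no-flush flag set so that queued I/O is left in place instead of
 * being flushed at suspend time.
 */
static int simple_dm_task(int type, const char *name, int no_flush)
{
	struct dm_task *dmt;
	int r = 0;

	if (!(dmt = dm_task_create(type)))
		return 0;
	if (!dm_task_set_name(dmt, name))
		goto out;
	if (no_flush)
		dm_task_no_flush(dmt);
	r = dm_task_run(dmt);
out:
	dm_task_destroy(dmt);
	return r;
}

int main(int argc, char **argv)
{
	if (argc != 2) {
		fprintf(stderr, "usage: %s <map-name>\n", argv[0]);
		return 1;
	}
	/* Suspend without flushing, then resume. */
	if (!simple_dm_task(DM_DEVICE_SUSPEND, argv[1], 1) ||
	    !simple_dm_task(DM_DEVICE_RESUME, argv[1], 0))
		return 1;
	return 0;
}

As I understand it, without the dm_task_no_flush() call the suspend first flushes any queued I/O, which is how errors could reach the filesystem during a map reload while no paths are usable.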
Interesting. I sent an email titled "dm-rdac not working?" earlier today, where I described another problem with the RDAC hwhandler that also caused I/O errors to propagate up from device-mapper into VFS, even though queue_if_no_path was in use. Do you think it is possible that this misbehaviour was due to the multipath maps having been loaded with 0.4.7, and therefore without no_flush being set? (In other words: is 0.4.8 a requirement for queue_if_no_path to work correctly?)

I found the dm_task_no_flush() call in the 0.4.8 sources, thanks!

Regards,
-- 
Tore Anderson