Hi,

This is regarding an issue with the multipath module: reconfiguring multipathd drops faulty paths from the maps. Consider the case below, where one path is faulty. On doing a reconfigure, the faulty path is dropped from the map. This prevents the path checkers from detecting any changes that happen to that path asynchronously, so once the path comes back up, an explicit reconfigure is needed before it can be used again. The underlying devices are still present in the system; they are removed and re-added (and the paths thereby restored to the map) only if dev_loss_tmo is set to a finite value. Is this how reconfigure is designed to behave, or is this a bug?
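For reproduction, here is a minimal sketch of one way to drive a path into the failed/faulty state and to confirm the device stays present. sdk is the second path from the session below, and forcing it offline via sysfs is just a stand-in for however the path actually goes faulty:

    # force the path device offline; the checker then reports it
    # as "failed faulty offline"
    echo offline > /sys/block/sdk/device/state

    # after "reconf" drops the path from the map, the SCSI device
    # node itself is still present in the system
    cat /sys/block/sdk/device/state

    # bring the device back; the path is not re-added to the map
    # until an explicit reconfigure, which is the behaviour in question
    echo running > /sys/block/sdk/device/state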
[root@dt02 ~]# multipathd -k"show top"
create: 360a980004334694b50346e714b775277 dm-5 NETAPP,LUN
size=768M features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handle' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
  |- 19:0:0:0 sdj 8:144 active ready running
  `- 20:0:0:0 sdk 8:160 failed faulty offline

[root@dt02 ~]# multipathd -k"reconf"
ok

[root@dt02 ~]# multipathd -k"show top"
360a980004334694b50346e714b775277 dm-5 NETAPP,LUN
size=768M features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handle' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
  `- 19:0:0:0 sdj 8:144 active ready running

Please let me know if you need more information. This was tested with dev_loss_tmo set to "infinity", using device-mapper-multipath-libs-0.4.9-66.el7.x86_64 on CentOS 7.
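For reference, setting dev_loss_tmo to infinity in multipath.conf looks roughly like this (a minimal sketch, assuming the option is applied globally via the defaults section; it can equally be set per storage array in a devices section):

    defaults {
        dev_loss_tmo infinity
    }

On an FC transport the effective value can be cross-checked through sysfs (the rport names vary per host):

    cat /sys/class/fc_remote_ports/rport-*/dev_loss_tmo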
I would be grateful if you could share your thoughts on this.

Thanks,
Sharath Babu | Software Engineer 2 – XenServer Dev, India | M: +91 9003100899