Our test group has generated a case where an MTU change or an ifdown
was performed while there were active sessions outstanding.  The
disconnection path works normally, but for the case where iscsid is
forced to stop, our driver was previously designed to dead-wait for
240s before resuming operation.  Once the wait is over, any leftover
allocated chip resources and previously active iscsi_endpoints would
be forfeited.  For this kind of destructive testing, we decided it is
best to clean up our chip so that other operations such as L2 or FCoE
would not be affected.

The 'slow' case mentioned is just a hypothetical case which we have
not encountered firsthand.  But the code has been reworked so that if
iscsid were to come back up and call ep_disconnect again, only the
endpoints would get cleaned up; it will no longer attempt to clean up
the hardware (a rough sketch of this flow is at the bottom of this
mail).  We're not trying to work around a bug here...

For the case presented, bnx2i_stop was actually called from cnic's
ulp_stop due to the different test case described above.  The hard
240s timeout was there to wait for iscsid to clean up all the active
connections.  If it were somehow exceeded, the same lost resource +
endpoint condition would occur.

Eddie

On Tue, 2010-06-29 at 23:11 -0700, Mike Christie wrote:
> On 06/25/2010 08:39 PM, Eddie Wai wrote:
> > For cases where the iSCSI disconnection procedure times out due to
> > the iSCSI daemon being slow or unresponsive, the bnx2i_stop routine
>
> Could you describe when iscsid is slow a little more? The unresponsive
> case sounds like if iscsid is not running, right? For the slow case, it
> sounds like you are trying to work around a bug in there.
>
> I am also not sure how it helps exactly. iscsid still has to cleanup the
> iscsi resources like the scsi commands running on the connection (conn
> stop is called after ep disconnect), so I am not sure how this helps.
> Does doing it this way skip some steps?
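
Mike, to make the intent more concrete, here is a rough sketch of the
flow described above.  It is illustrative only: none of the names
below (my_hba, my_endpoint, the my_*() helpers, the
HBA_HW_RESOURCES_RELEASED flag) are the actual bnx2i/cnic symbols.

/*
 * Rough sketch only.  Every name here is made up for this mail and
 * is not an actual bnx2i/cnic symbol.
 */
#include <linux/wait.h>
#include <linux/jiffies.h>
#include <linux/atomic.h>
#include <linux/bitops.h>

#define CONN_TEARDOWN_TIMEOUT	(240 * HZ)	/* the 240s window discussed above */

struct my_endpoint;

struct my_hba {
	wait_queue_head_t	conn_teardown_wq;
	atomic_t		active_conns;
	unsigned long		flags;
#define HBA_HW_RESOURCES_RELEASED	0	/* chip resources already reclaimed */
};

/* helpers elided: they release the chip context / endpoint memory */
void my_chip_cleanup(struct my_hba *hba);
void my_hw_destroy_conn(struct my_hba *hba, struct my_endpoint *ep);
void my_free_endpoint(struct my_endpoint *ep);

/* ulp_stop path: give iscsid a bounded window, then reclaim the chip */
static void my_ulp_stop(struct my_hba *hba)
{
	/* wait up to 240s for iscsid to disconnect every active connection */
	wait_event_timeout(hba->conn_teardown_wq,
			   atomic_read(&hba->active_conns) == 0,
			   CONN_TEARDOWN_TIMEOUT);

	/*
	 * Whether or not iscsid finished in time, release the chip
	 * resources now so L2/FCoE on the same device are not affected
	 * by leftover iSCSI contexts.
	 */
	my_chip_cleanup(hba);
	set_bit(HBA_HW_RESOURCES_RELEASED, &hba->flags);
}

/* ep_disconnect path: skip the hardware if it was already reclaimed */
static void my_ep_disconnect(struct my_hba *hba, struct my_endpoint *ep)
{
	if (!test_bit(HBA_HW_RESOURCES_RELEASED, &hba->flags))
		my_hw_destroy_conn(hba, ep);	/* normal path: tear down the chip context */

	my_free_endpoint(ep);			/* always release the iscsi_endpoint */

	if (atomic_dec_and_test(&hba->active_conns))
		wake_up(&hba->conn_teardown_wq);
}

The point is that once the 240s window expires and the chip has been
reclaimed from the ulp_stop path, a late ep_disconnect from iscsid
only frees the iscsi_endpoint and does not touch the hardware again.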