On Thu, 2017-02-09 at 02:43 -0800, Nicholas A. Bellinger wrote:
> > diff --git a/drivers/target/target_core_tmr.c b/drivers/target/target_core_tmr.c
> > index 161c8b8482db..980d94277c8d 100644
> > --- a/drivers/target/target_core_tmr.c
> > +++ b/drivers/target/target_core_tmr.c
> > @@ -192,13 +173,13 @@ void core_tmr_abort_task(
> >  			continue;
> >  		}
> > 
> > +		se_cmd->send_abort_response = false;
> >  		list_del_init(&se_cmd->se_cmd_list);
> >  		spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags);
> > 
> > -		cancel_work_sync(&se_cmd->work);
> > -		transport_wait_for_tasks(se_cmd);
> > +		while (!wait_for_completion_timeout(&se_cmd->complete, 180 * HZ))
> > +			target_show_cmd(__func__, se_cmd);
> 
> This is still fundamentally wrong, because it allows se_cmd to be
> dispatched into the backend driver, and only just waits for completion
> from the backend after the fact.

I disagree. The CMD_T_ABORTED flag is tested in the command execution
path everywhere CMD_T_STOP is tested, so setting only CMD_T_ABORTED is
sufficient. It is not necessary to set both CMD_T_ABORTED and
CMD_T_STOP.

> > diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
> > index 533b584c2806..aac1fc1b2a7c 100644
> > --- a/drivers/target/target_core_transport.c
> > +++ b/drivers/target/target_core_transport.c
> > @@ -714,16 +739,7 @@ void target_complete_cmd(struct se_cmd *cmd, u8 scsi_status)
> >  		success = 1;
> >  	}
> > 
> > -	/*
> > -	 * Check for case where an explicit ABORT_TASK has been received
> > -	 * and transport_wait_for_tasks() will be waiting for completion..
> > -	 */
> > -	if (cmd->transport_state & CMD_T_ABORTED ||
> > -	    cmd->transport_state & CMD_T_STOP) {
> > -		spin_unlock_irqrestore(&cmd->t_state_lock, flags);
> > -		complete_all(&cmd->t_transport_stop_comp);
> > -		return;
> > -	} else if (!success) {
> 
> This can't be removed because __transport_wait_for_tasks() needs to
> catch the CMD_T_STOP or CMD_T_ABORTED quiesce here.
You have quoted my patch in such a way that it becomes misleading. I
have *not* removed the CMD_T_ABORTED and CMD_T_STOP tests; I have moved
them up. You should have noticed that.

> > @@ -1886,6 +1904,7 @@ void target_execute_cmd(struct se_cmd *cmd)
> >  	spin_lock_irq(&cmd->t_state_lock);
> >  	if (__transport_check_aborted_status(cmd, 1)) {
> >  		spin_unlock_irq(&cmd->t_state_lock);
> > +		transport_handle_abort(cmd);
> >  		return;
> >  	}
> 
> __transport_check_aborted_status() must honor delayed TAS, and allow the
> fabric to complete WRITE transfer before proceeding with abort.

There is no such concept as "delayed TAS" in the SCSI spec. TAS stands
for "task aborted status". If that bit is set, a TASK ABORTED status
has to be sent back to the initiator if the abort was triggered from
another I_T nexus than the one through which the SCSI command was
received. My patch does that.

> > -	/*
> > -	 * If the task has been internally aborted due to TMR ABORT_TASK
> > -	 * or LUN_RESET, target_core_tmr.c is responsible for performing
> > -	 * the remaining calls to target_put_sess_cmd(), and not the
> > -	 * callers of this function.
> > -	 */
> >  	if (aborted) {
> >  		pr_debug("Detected CMD_T_ABORTED for ITT: %llu\n", cmd->tag);
> >  		wait_for_completion(&cmd->cmd_wait_comp);
> > -		transport_put_cmd(cmd);
> > -		ret = 1;
> >  	}
> > -	return ret;
> > +	return transport_put_cmd(cmd);
> 
> As mentioned in patch #14, there is no way this can possibly work.
> 
> The wait_for_completion(&cmd->cmd_wait_comp) is woken up when
> transport_put_cmd() -> kref_put() of se_cmd->cmd_kref reaches zero, so
> there is no way the wait_for_completion during CMD_T_ABORTED can ever
> make forward progress.

I will remove cmd_wait_comp.

> Getting rid of the ->write_pending_status() here and DELAYED_TAS
> completely breaks WRITEs during CMD_T_ABORTED for iscsi-target and
> qla2xxx, when the outstanding WRITE transfer _must be_ allowed to
> complete before proceeding with abort.
My patch lets outstanding writes complete before proceeding with an
abort.

Bart.