On 06/08/2015 05:30 PM, Matthew R. Ochs wrote:
> +
> +/**
> + * cxlflash_send_cmd() - sends an AFU command
> + * @afu:	AFU associated with the host.
> + * @cmd:	AFU command to send.
> + *
> + * Return:
> + *	0 on success
> + *	-1 on failure
> + */
> +int cxlflash_send_cmd(struct afu *afu, struct afu_cmd *cmd)
> +{
> +	int nretry = 0;
> +	int rc = 0;
> +	u64 room;
> +	long newval;
> +
> +	/*
> +	 * This routine is used by critical users such as an AFU sync and to
> +	 * send a task management function (TMF). Thus we want to retry a
> +	 * bit before returning an error. To avoid the performance penalty
> +	 * of MMIO, we spread the update of 'room' over multiple commands.
> +	 */
> +retry:
> +	newval = atomic64_dec_if_positive(&afu->room);
> +	if (!newval) {
> +		do {
> +			room = readq_be(&afu->host_map->cmd_room);
> +			atomic64_set(&afu->room, room);
> +			if (room)
> +				goto write_ioarrin;
> +		} while (nretry++ < MC_ROOM_RETRY_CNT);

It looks like you removed the udelay here. Was that intentional?

> +
> +		pr_err("%s: no cmd_room to send 0x%X\n",
> +		       __func__, cmd->rcb.cdb[0]);
> +		rc = SCSI_MLQUEUE_HOST_BUSY;

If you actually get here, how do you get out of this state? Since afu->room
is now zero, anyone that comes through here next will take the "else if" leg.

> +		goto out;
> +	} else if (unlikely(newval < 0)) {
> +		/* This should be rare. i.e. Only if two threads race and
> +		 * decrement before the MMIO read is done. In this case
> +		 * just benefit from the other thread having updated
> +		 * afu->room.
> +		 */
> +		if (nretry++ < MC_ROOM_RETRY_CNT)

I'm guessing you'd want the udelay here as well.
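To make the stuck-at-zero concern concrete, here is a quick userspace model of the credit accounting. C11 atomics stand in for atomic64_t, a plain variable stands in for the MMIO cmd_room register, and the names `sim_cmd_room`, `send_cmd_model` and `dec_if_positive` are made up for the sketch, not driver API. It folds the register re-read into the `newval < 0` leg as well, which is one way out of the wedged state:

```c
#include <stdatomic.h>

#define MC_ROOM_RETRY_CNT 10

static atomic_long room;	/* models afu->room */
static long sim_cmd_room;	/* models readq_be(&afu->host_map->cmd_room) */

/* models atomic64_dec_if_positive(): decrement only when the result
 * stays >= 0; always return the (possibly unstored) decremented value */
static long dec_if_positive(atomic_long *v)
{
	long old = atomic_load(v);

	while (old > 0)
		if (atomic_compare_exchange_weak(v, &old, old - 1))
			return old - 1;
	return old - 1;
}

/* returns 0 when the command could be "sent", -1 for HOST_BUSY */
static int send_cmd_model(void)
{
	int nretry = 0;
	long newval;

	newval = dec_if_positive(&room);
	if (newval <= 0) {
		/*
		 * Unlike the patch, re-read the register for newval < 0
		 * too (with a delay between polls in real code), so a
		 * cached zero cannot wedge every later caller.
		 */
		do {
			long hw = sim_cmd_room;		/* "MMIO" read */

			atomic_store(&room, hw);
			if (hw)
				return 0;		/* write_ioarrin */
		} while (nretry++ < MC_ROOM_RETRY_CNT);
		return -1;				/* HOST_BUSY */
	}
	return 0;
}
```

With `room` preloaded to 2 and the register stuck at 0, the first caller sends, the next two report busy; once the register reports room again, the next caller recovers instead of spinning forever in the negative leg, which is the behavior the patch as posted does not seem to provide.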
> +			goto retry;
> +		else {
> +			rc = SCSI_MLQUEUE_HOST_BUSY;
> +			goto out;
> +		}
> +	}
> +
> +write_ioarrin:
> +	writeq_be((u64)&cmd->rcb, &afu->host_map->ioarrin);
> +out:
> +	pr_debug("%s: cmd=%p len=%d ea=%p rc=%d\n", __func__, cmd,
> +		 cmd->rcb.data_len, (void *)cmd->rcb.data_ea, rc);
> +	return rc;
> +}
> +

-- 
Brian King
Power Linux I/O
IBM Linux Technology Center