> -----Original Message-----
> From: Johannes Thumshirn [mailto:jthumshirn@xxxxxxx]
> Sent: Tuesday, June 28, 2016 2:54 AM
> To: Don Brace
> Cc: jejb@xxxxxxxxxxxxxxxxxx; Viswas G; Mahesh Rajashekhara; hch@xxxxxxxxxxxxx;
> Scott Teel; Kevin Barnett; Justin Lindley; Scott Benesh; elliott@xxxxxxx;
> linux-scsi@xxxxxxxxxxxxxxx
> Subject: Re: [PATCH V3 1/2] smartpqi: initial commit of Microsemi smartpqi driver
>
> Anyway, an IMHO more important question
>
> > > > +static void pqi_complete_queued_requests(struct pqi_ctrl_info *ctrl_info,
> > > > +	struct pqi_scsi_dev_t *device_in_reset)
> > > > +{
> > > > +	unsigned int i;
> > > > +	unsigned int path;
> > > > +	struct pqi_queue_group *queue_group;
> > > > +	unsigned long flags;
> > > > +	struct pqi_io_request *io_request;
> > > > +	struct pqi_io_request *next;
> > > > +	struct scsi_cmnd *scmd;
> > > > +	struct pqi_scsi_dev_t *device;
> > > > +
> > > > +	for (i = 0; i < ctrl_info->num_queue_groups; i++) {
> > > > +		queue_group = &ctrl_info->queue_groups[i];
> > > > +
> > > > +		for (path = 0; path < 2; path++) {
> > > > +			spin_lock_irqsave(&queue_group->submit_lock[path],
> > > > +				flags);
> > > > +
> > > > +			list_for_each_entry_safe(io_request, next,
> > > > +				&queue_group->request_list[path],
> > > > +				request_list_entry) {
> > > > +				scmd = io_request->scmd;
> > > > +				if (scmd) {
> > > > +					device = scmd->device->hostdata;
> > > > +					if (device == device_in_reset) {
> > > > +						set_host_byte(scmd, DID_RESET);
> > > > +						pqi_scsi_done(scmd);
> > > > +						list_del(&io_request->
> > > > +							request_list_entry);
> > > > +					}
> > > > +				}
> > > > +			}
> > > > +
> > > > +			spin_unlock_irqrestore(&queue_group->submit_lock[path],
> > > > +				flags);
> > > > +		}
> > >
> > > If you'd factor the inner for loop into its own small (inline) function, it
> > > could drastically improve readability here.
> > >
> > > something like
> > >
> > > static void pqi_complete_queue_group(struct pqi_queue_group *qg,
> > > 	struct pqi_scsi_dev_t *device_in_reset)
> > > {
> > > 	unsigned int path;
> > > 	unsigned long flags;
> > > 	struct pqi_scsi_dev_t *device;
> > > 	struct pqi_io_request *ior, *next;
> > > 	struct scsi_cmnd *scmd;
> > >
> > > 	for (path = 0; path < 2; path++) {
> > > 		spin_lock_irqsave(&qg->submit_lock[path], flags);
> > >
> > > 		list_for_each_entry_safe(ior, next, &qg->request_list[path],
> > > 				request_list_entry) {
> > > 			scmd = ior->scmd;
> > > 			if (!scmd)
> > > 				continue;
> > >
> > > 			device = scmd->device->hostdata;
> > > 			if (device == device_in_reset) {
> > > 				set_host_byte(scmd, DID_RESET);
> > > 				pqi_scsi_done(scmd);
> > > 				list_del(&ior->request_list_entry);
> > > 			}
> > > 		}
> > > 		spin_unlock_irqrestore(&qg->submit_lock[path], flags);
> > > 	}
> > > }
>
> Does this have to be under a spin_lock, and then even the irqsave variant?
> From my understanding, scsi-mq/blk-mq is supposed to place the queues
> per-cpu. So if you're now deactivating local IRQs here, you're throwing
> all optimizations out of the window. Can't you have the request lists and
> queues per-cpu so this code can run lockless?
>
> Thanks,
> Johannes

The current version was developed on older kernels which did not have
scsi-mq enabled. We plan to fully support scsi-mq in the near future, but
would like to keep the kernel.org and internal versions the same for now.

--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
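
For illustration only, here is a minimal sketch of the per-CPU request-list
layout Johannes alludes to. The names (pqi_percpu_queue,
pqi_alloc_percpu_queues, pqi_queue_request) and the stand-in
struct pqi_io_request are invented for this example and are not part of the
smartpqi driver; a real design would also have to synchronize the
reset-time drain of other CPUs' lists.

/*
 * Hypothetical per-CPU request lists, sketched after the review comment
 * above.  Not the actual smartpqi implementation.
 */
#include <linux/percpu.h>
#include <linux/cpumask.h>
#include <linux/list.h>
#include <linux/errno.h>

/* Stand-in for the driver's struct pqi_io_request. */
struct pqi_io_request {
	struct list_head request_list_entry;
};

struct pqi_percpu_queue {
	struct list_head request_list;	/* requests queued from this CPU */
};

static struct pqi_percpu_queue __percpu *pqi_queues;

static int pqi_alloc_percpu_queues(void)
{
	int cpu;

	pqi_queues = alloc_percpu(struct pqi_percpu_queue);
	if (!pqi_queues)
		return -ENOMEM;

	for_each_possible_cpu(cpu)
		INIT_LIST_HEAD(&per_cpu_ptr(pqi_queues, cpu)->request_list);

	return 0;
}

/*
 * Submission path: only the local CPU's list is touched, so no shared,
 * IRQ-disabling spinlock is needed.  get_cpu_ptr() disables preemption
 * while the list is manipulated; this assumes submission runs in
 * process or softirq context on the owning CPU, not in hard IRQ context.
 */
static void pqi_queue_request(struct pqi_io_request *io_request)
{
	struct pqi_percpu_queue *queue = get_cpu_ptr(pqi_queues);

	list_add_tail(&io_request->request_list_entry, &queue->request_list);
	put_cpu_ptr(pqi_queues);
}

The point of such a layout is that the submission fast path only ever
touches the list belonging to the CPU it runs on, so the per-request cost
of an IRQ-safe lock disappears. The cost moves to the rare drain path (the
reset case discussed above), which has to visit every CPU's list and would
still need per-CPU locks or a per-CPU drain (e.g. via
smp_call_function_single()) to avoid racing with concurrent submitters.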