James Bottomley wrote:
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index f5d3b96..979e07a 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -567,15 +567,18 @@ static inline int scsi_host_is_busy(struct Scsi_Host *shost)
*/
static void scsi_run_queue(struct request_queue *q)
{
- struct scsi_device *starved_head = NULL, *sdev = q->queuedata;
+ struct scsi_device *tmp, *sdev = q->queuedata;
struct Scsi_Host *shost = sdev->host;
+ LIST_HEAD(starved_list);
unsigned long flags;
if (scsi_target(sdev)->single_lun)
scsi_single_lun_run(sdev);
spin_lock_irqsave(shost->host_lock, flags);
- while (!list_empty(&shost->starved_list) && !scsi_host_is_busy(shost)) {
+ list_splice_init(&shost->starved_list, &starved_list);
+
+ list_for_each_entry_safe(sdev, tmp, &starved_list, starved_entry) {
int flagset;
I do not think we can use list_for_each_entry_safe. It might be the cause
of the oops in the other mail. If we use list_for_each_entry_safe here,
and some other process, like the kernel block workqueue, calls the
request_fn of a device on the starved list, then we can go from
scsi_request_fn -> scsi_host_queue_ready, which can do:
/* We're OK to process the command, so we can't be starved */
if (!list_empty(&sdev->starved_entry))
list_del_init(&sdev->starved_entry);
and that can end up removing the sdev from scsi_run_queue's spliced
starved list. list_for_each_entry_safe only protects against the
iterator itself deleting the current entry: it caches a pointer to the
next entry, so if another thread deletes that cached entry while
scsi_run_queue has dropped the host lock, the iteration follows a stale
pointer. So if the kblock workqueue did this to multiple devices while
the host lock was dropped, I do not think list_for_each_entry_safe can
handle that.
I can sort of replicate this now. Let me do some testing on the changes
and I will submit something in a minute.
--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html