[PATCH 1/5] target: Fix race between multiple invocations of target_qf_do_work()

From: Roland Dreier <roland@xxxxxxxxxxxxxxx>

When work is scheduled with schedule_work(), the work can end up
running on multiple CPUs at the same time; this happens when the work
item is already running on one CPU and schedule_work() is called on
another CPU.  This leads to list corruption in target_qf_do_work(),
which roughly does:

	spin_lock(...);
	list_for_each_entry_safe(...) {
		list_del(...);
		spin_unlock(...);

		// do stuff

		spin_lock(...);
	}

With multiple CPUs running this code, one CPU can end up deleting the
list entry that the other CPU is about to work on.
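
For example, here is one possible interleaving of two concurrent
invocations (entries A and B are hypothetical):

	CPU 0				CPU 1
	-----				-----
	spin_lock(...);
	save next = B; list_del(A);
	spin_unlock(...);
					spin_lock(...);
					save next = ...; list_del(B);
					spin_unlock(...);
					// do stuff with B, which can
					// change B's list linkage
	// do stuff with A
	spin_lock(...);
	// advance to saved B, whose
	// list links have since changed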

Fix this by using queue_work(system_nrt_wq, ...), which queues the
work on a non-reentrant workqueue, so the work item runs on at most
one CPU at a time.  Also, while we're touching this code, avoid
repeatedly dropping and reacquiring the lock to walk the list:
instead, splice the list entries onto a local list under the lock
and then operate on that local list.
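
With both changes, each invocation then roughly does the following
(a minimal sketch with illustrative names; the real change is in the
diff below):

	LIST_HEAD(local_list);

	spin_lock_irq(&lock);
	/* move all pending entries onto a list only we can see */
	list_splice_init(&pending_list, &local_list);
	spin_unlock_irq(&lock);

	/* no lock needed: local_list is private to this invocation */
	list_for_each_entry_safe(item, tmp, &local_list, node) {
		list_del(&item->node);
		// do stuff
	}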

Signed-off-by: Roland Dreier <roland@xxxxxxxxxxxxxxx>
---
 drivers/target/target_core_transport.c |   15 +++++++--------
 1 files changed, 7 insertions(+), 8 deletions(-)

diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
index a2f4713..66f9945 100644
--- a/drivers/target/target_core_transport.c
+++ b/drivers/target/target_core_transport.c
@@ -978,15 +978,17 @@ static void target_qf_do_work(struct work_struct *work)
 {
 	struct se_device *dev = container_of(work, struct se_device,
 					qf_work_queue);
+	LIST_HEAD(qf_cmd_list);
 	struct se_cmd *cmd, *cmd_tmp;
 
 	spin_lock_irq(&dev->qf_cmd_lock);
-	list_for_each_entry_safe(cmd, cmd_tmp, &dev->qf_cmd_list, se_qf_node) {
-	
+	list_splice_init(&dev->qf_cmd_list, &qf_cmd_list);
+	spin_unlock_irq(&dev->qf_cmd_lock);
+
+	list_for_each_entry_safe(cmd, cmd_tmp, &qf_cmd_list, se_qf_node) {
 		list_del(&cmd->se_qf_node);
 		atomic_dec(&dev->dev_qf_count);
 		smp_mb__after_atomic_dec();
-		spin_unlock_irq(&dev->qf_cmd_lock);
 
 		pr_debug("Processing %s cmd: %p QUEUE_FULL in work queue"
 			" context: %s\n", cmd->se_tfo->get_fabric_name(), cmd,
@@ -998,10 +1000,7 @@ static void target_qf_do_work(struct work_struct *work)
 		 * has been added to head of queue
 		 */
 		transport_add_cmd_to_queue(cmd, cmd->t_state);
-	
-		spin_lock_irq(&dev->qf_cmd_lock);
 	}
-	spin_unlock_irq(&dev->qf_cmd_lock);
 }
 
 unsigned char *transport_dump_cmd_direction(struct se_cmd *cmd)
@@ -3601,7 +3600,7 @@ static void transport_handle_queue_full(
 	smp_mb__after_atomic_inc();
 	spin_unlock_irq(&cmd->se_dev->qf_cmd_lock);
 
-	schedule_work(&cmd->se_dev->qf_work_queue);
+	queue_work(system_nrt_wq, &cmd->se_dev->qf_work_queue);
 }	
 
 static void transport_generic_complete_ok(struct se_cmd *cmd)
@@ -3619,7 +3618,7 @@ static void transport_generic_complete_ok(struct se_cmd *cmd)
 	 * cmd->transport_qf_callback()
 	 */
 	if (atomic_read(&cmd->se_dev->dev_qf_count) != 0)
-		schedule_work(&cmd->se_dev->qf_work_queue);
+		queue_work(system_nrt_wq, &cmd->se_dev->qf_work_queue);
 
 	if (cmd->transport_qf_callback) {
 		ret = cmd->transport_qf_callback(cmd);
-- 
1.7.5.4
