[PATCH 6.1 124/193] dmaengine: idxd: Do not call DMA callbacks during workqueue disable

From: Reinette Chatre <reinette.chatre@xxxxxxxxx>

commit 6744a030d81e456883bfbb627ac1f30465c1a989 upstream.

On driver unload any pending descriptors are flushed and pending
DMA descriptors are explicitly completed:
idxd_dmaengine_drv_remove() ->
	drv_disable_wq() ->
		idxd_wq_free_irq() ->
			idxd_flush_pending_descs() ->
				idxd_dma_complete_txd()

With this done during driver unload, any remaining descriptor is
likely stuck and can be dropped. Even so, the descriptor may still
have a callback set that is no longer accessible. An example of such
a problem is when dmatest fails and the dmatest module is unloaded.
The failure leaves descriptors with dma_async_tx_descriptor::callback
pointing to code that no longer exists. This causes a page fault like
the one below when the IDXD driver is unloaded and attempts to run
the callback:
 BUG: unable to handle page fault for address: ffffffffc0665190
 #PF: supervisor instruction fetch in kernel mode
 #PF: error_code(0x0010) - not-present page
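
As an aside, a minimal sketch of how a dmaengine client such as
dmatest typically installs such a callback before submitting a
descriptor (the names my_done_callback and done_completion are
illustrative, not taken from dmatest itself):

	/*
	 * Illustrative client-side submission: the callback pointer stored
	 * in the descriptor lives in the client module. If that module is
	 * unloaded while the descriptor is still pending, the pointer
	 * dangles and invoking it faults.
	 */
	struct dma_async_tx_descriptor *tx;
	dma_cookie_t cookie;

	tx = dmaengine_prep_dma_memcpy(chan, dst_dma, src_dma, len,
				       DMA_PREP_INTERRUPT);
	if (tx) {
		tx->callback = my_done_callback;	/* code in the client module */
		tx->callback_param = &done_completion;
		cookie = dmaengine_submit(tx);
		dma_async_issue_pending(chan);
	}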

Fix this by clearing the callback pointers on the transmit
descriptors only when the workqueue is disabled.

Fixes: 403a2e236538 ("dmaengine: idxd: change MSIX allocation based on per wq activation")
Signed-off-by: Reinette Chatre <reinette.chatre@xxxxxxxxx>
Reviewed-by: Dave Jiang <dave.jiang@xxxxxxxxx>
Reviewed-by: Fenghua Yu <fenghua.yu@xxxxxxxxx>
Cc: stable@xxxxxxxxxxxxxxx
Link: https://lore.kernel.org/r/37d06b772aa7f8863ca50f90930ea2fd80b38fc3.1670452419.git.reinette.chatre@xxxxxxxxx
Signed-off-by: Vinod Koul <vkoul@xxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 drivers/dma/idxd/device.c |   11 +++++++++++
 1 file changed, 11 insertions(+)

--- a/drivers/dma/idxd/device.c
+++ b/drivers/dma/idxd/device.c
@@ -1173,8 +1173,19 @@ static void idxd_flush_pending_descs(str
 	spin_unlock(&ie->list_lock);
 
 	list_for_each_entry_safe(desc, itr, &flist, list) {
+		struct dma_async_tx_descriptor *tx;
+
 		list_del(&desc->list);
 		ctype = desc->completion->status ? IDXD_COMPLETE_NORMAL : IDXD_COMPLETE_ABORT;
+		/*
+		 * wq is being disabled. Any remaining descriptors are
+		 * likely to be stuck and can be dropped. callback could
+		 * point to code that is no longer accessible, for example
+		 * if dmatest module has been unloaded.
+		 */
+		tx = &desc->txd;
+		tx->callback = NULL;
+		tx->callback_result = NULL;
 		idxd_dma_complete_txd(desc, ctype, true);
 	}
 }
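
For context, a hedged sketch of why clearing the pointers is enough:
the completion path ends up invoking the callback through the common
dmaengine helper, which does nothing when both callback pointers are
NULL (simplified, not the exact idxd completion code):

	/* Simplified completion path, assuming the common helper is used. */
	static void complete_txd_sketch(struct dma_async_tx_descriptor *tx,
					const struct dmaengine_result *res)
	{
		dma_cookie_complete(tx);
		/* No-op if tx->callback and tx->callback_result are NULL. */
		dmaengine_desc_get_callback_invoke(tx, res);
	}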




