Patch "iommu/vt-d: Fix potential lockup if qi_submit_sync called with 0 count" has been added to the 6.10-stable tree

This is a note to let you know that I've just added the patch titled

    iommu/vt-d: Fix potential lockup if qi_submit_sync called with 0 count

to the 6.10-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     iommu-vt-d-fix-potential-lockup-if-qi_submit_sync-ca.patch
and it can be found in the queue-6.10 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit fa7afc980c224a064fab1c9b906f095eb1a82bb2
Author: Sanjay K Kumar <sanjay.k.kumar@xxxxxxxxx>
Date:   Mon Sep 2 10:27:18 2024 +0800

    iommu/vt-d: Fix potential lockup if qi_submit_sync called with 0 count
    
    [ Upstream commit 3cf74230c139f208b7fb313ae0054386eee31a81 ]
    
    If qi_submit_sync() is invoked with 0 invalidation descriptors (for
    instance, for DMA draining purposes), we can run into a bug where a
    submitting thread fails to detect the completion of invalidation_wait.
    This subsequently leads to a soft lockup. Currently, this bug has no
    impact on existing users because no caller submits invalidations with
    0 descriptors. This fix enables future users (such as DMA drain) to
    call qi_submit_sync() with a count of 0.
    
    Suppose thread T1 invokes qi_submit_sync() with non-zero descriptors, while
    concurrently, thread T2 calls qi_submit_sync() with zero descriptors. Both
    threads then enter a while loop, waiting for their respective descriptors
    to complete. T1 detects its completion (i.e., T1's invalidation_wait
    status is set to QI_DONE by HW) and proceeds to call reclaim_free_desc()
    to reclaim all descriptors, potentially including adjacent ones belonging
    to other threads that are also marked as QI_DONE.
    
    During this time, while T2 is waiting to acquire the qi->q_lock, the IOMMU
    hardware may complete the invalidation for T2, setting its status to
    QI_DONE. However, if T1's execution of reclaim_free_desc() frees T2's
    invalidation_wait descriptor and changes its status to QI_FREE, T2 will
    not observe the QI_DONE status for its invalidation_wait and will
    remain stuck indefinitely.
    
    This soft lockup does not occur when only non-zero descriptors are
    submitted. In such cases, invalidation descriptors (with the status
    QI_IN_USE) are interspersed among wait descriptors, acting as barriers.
    These barriers prevent the reclaim code from mistakenly freeing
    descriptors belonging to other submitters.
    
    Consider the following example timeline:
            T1                      T2
    ========================================
            ID1
            WD1
            while(WD1!=QI_DONE)
            unlock
                                    lock
            WD1=QI_DONE*            WD2
                                    while(WD2!=QI_DONE)
                                    unlock
            lock
            WD1==QI_DONE?
            ID1=QI_DONE             WD2=QI_DONE*
            reclaim()
            ID1=FREE
            WD1=FREE
            WD2=FREE
            unlock
                                    soft lockup! T2 never sees QI_DONE in WD2
    
    Where:
    ID = invalidation descriptor
    WD = wait descriptor
    * Written by hardware
    
    The root of the problem is that the descriptor status QI_DONE flag is used
    for two conflicting purposes:
    1. to signal that a descriptor is ready for reclaim (to be freed)
    2. to signal, by the hardware, that a wait descriptor is complete
    
    The solution (in this patch) is state separation: use the QI_FREE flag
    for #1.
    
    Once a thread's invalidation descriptors are complete, their status is
    set to QI_FREE. The reclaim_free_desc() function then frees only
    descriptors marked QI_FREE instead of those marked QI_DONE. This change
    ensures that T2 (from the previous example) will correctly observe the
    completion of its invalidation_wait (marked as QI_DONE).
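    
    A minimal stand-alone sketch of the separated states (illustrative only:
    a plain array stands in for struct q_inval, the helper name
    release_own_slots() is hypothetical, and the status values mirror the
    driver's QI_FREE/QI_IN_USE/QI_DONE/QI_ABORT enum):
    
        enum { QI_FREE, QI_IN_USE, QI_DONE, QI_ABORT };
    
        #define QI_LENGTH 256
    
        static int desc_status[QI_LENGTH];   /* one status per queue slot */
        static int free_head, free_tail;     /* next alloc / next reclaim */
        static int free_cnt = QI_LENGTH;
    
        /*
         * Reclaim stops at the first slot that is not QI_FREE, so a
         * QI_DONE wait descriptor still owned by another submitter is
         * never freed from under it.
         */
        static void reclaim_free_desc(void)
        {
            while (desc_status[free_tail] == QI_FREE &&
                   free_tail != free_head) {
                free_tail = (free_tail + 1) % QI_LENGTH;
                free_cnt++;
            }
        }
    
        /*
         * Called by a submitter after it has observed QI_DONE on its own
         * wait descriptor: release the count invalidation slots plus the
         * trailing wait slot (count + 1 slots in total, i.e. exactly one
         * slot when count == 0).
         */
        static void release_own_slots(int index, int count)
        {
            for (int i = 0; i <= count; i++)
                desc_status[(index + i) % QI_LENGTH] = QI_FREE;
    
            reclaim_free_desc();
        }
    
    With this split, QI_DONE only ever means "hardware has completed this
    wait descriptor", while QI_FREE only ever means "this slot may be
    reclaimed".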
    
    Signed-off-by: Sanjay K Kumar <sanjay.k.kumar@xxxxxxxxx>
    Signed-off-by: Jacob Pan <jacob.jun.pan@xxxxxxxxxxxxxxx>
    Reviewed-by: Kevin Tian <kevin.tian@xxxxxxxxx>
    Link: https://lore.kernel.org/r/20240728210059.1964602-1-jacob.jun.pan@xxxxxxxxxxxxxxx
    Signed-off-by: Lu Baolu <baolu.lu@xxxxxxxxxxxxxxx>
    Signed-off-by: Joerg Roedel <jroedel@xxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
index 1c8d3141cb55c..01e157d89a163 100644
--- a/drivers/iommu/intel/dmar.c
+++ b/drivers/iommu/intel/dmar.c
@@ -1204,9 +1204,7 @@ static void free_iommu(struct intel_iommu *iommu)
  */
 static inline void reclaim_free_desc(struct q_inval *qi)
 {
-	while (qi->desc_status[qi->free_tail] == QI_DONE ||
-	       qi->desc_status[qi->free_tail] == QI_ABORT) {
-		qi->desc_status[qi->free_tail] = QI_FREE;
+	while (qi->desc_status[qi->free_tail] == QI_FREE && qi->free_tail != qi->free_head) {
 		qi->free_tail = (qi->free_tail + 1) % QI_LENGTH;
 		qi->free_cnt++;
 	}
@@ -1463,8 +1461,16 @@ int qi_submit_sync(struct intel_iommu *iommu, struct qi_desc *desc,
 		raw_spin_lock(&qi->q_lock);
 	}
 
-	for (i = 0; i < count; i++)
-		qi->desc_status[(index + i) % QI_LENGTH] = QI_DONE;
+	/*
+	 * The reclaim code can free descriptors from multiple submissions
+	 * starting from the tail of the queue. When count == 0, the
+	 * status of the standalone wait descriptor at the tail of the queue
+	 * must be set to QI_FREE to allow the reclaim code to proceed.
+	 * It is also possible that descriptors from one of the previous
+	 * submissions has to be reclaimed by a subsequent submission.
+	 */
+	for (i = 0; i <= count; i++)
+		qi->desc_status[(index + i) % QI_LENGTH] = QI_FREE;
 
 	reclaim_free_desc(qi);
 	raw_spin_unlock_irqrestore(&qi->q_lock, flags);
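
To see why the loop in the second hunk runs with "i <= count" rather than
"i < count": each submission occupies count invalidation slots plus one
trailing wait descriptor, so count + 1 slots must eventually be handed back
to the reclaimer. A small worked example (the index values are hypothetical;
the loop itself is the one from the hunk above):

        /*
         * count == 2, index == 10: slots 10 and 11 hold invalidation
         * descriptors and slot 12 holds the wait descriptor, so three
         * slots become QI_FREE.
         *
         * count == 0, index == 10: only the standalone wait descriptor
         * in slot 10 exists, so exactly one slot becomes QI_FREE, which
         * lets reclaim_free_desc() advance free_tail past it.
         */
        for (i = 0; i <= count; i++)
                qi->desc_status[(index + i) % QI_LENGTH] = QI_FREE;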



