Patch "powerpc/pseries: explicitly reschedule during drmem_lmb list traversal" has been added to the 4.19-stable tree

This is a note to let you know that I've just added the patch titled

    powerpc/pseries: explicitly reschedule during drmem_lmb list traversal

to the 4.19-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     powerpc-pseries-explicitly-reschedule-during-drmem_l.patch
and it can be found in the queue-4.19 subdirectory.

If you, or anyone else, feel it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 3dd18ef7e212c25032b5074263d13a35d3012deb
Author: Nathan Lynch <nathanl@xxxxxxxxxxxxx>
Date:   Thu Aug 13 10:11:31 2020 -0500

    powerpc/pseries: explicitly reschedule during drmem_lmb list traversal
    
    [ Upstream commit 9d6792ffe140240ae54c881cc4183f9acc24b4df ]
    
    The drmem lmb list can have hundreds of thousands of entries, and
    unfortunately lookups take the form of linear searches. As long as
    this is the case, traversals have the potential to monopolize the CPU
    and provoke lockup reports, workqueue stalls, and the like unless
    they explicitly yield.
    
    Rather than placing cond_resched() calls within various
    for_each_drmem_lmb() loop blocks in the code, put it in the iteration
    expression of the loop macro itself so users can't omit it.
    
    Introduce a drmem_lmb_next() iteration helper function which calls
    cond_resched() at a regular interval during array traversal. Each
    iteration of the loop in DLPAR code paths can involve around ten RTAS
    calls which can each take up to 250us, so this ensures the check is
    performed at worst every few milliseconds.
    
    Fixes: 6c6ea53725b3 ("powerpc/mm: Separate ibm,dynamic-memory data from DT format")
    Signed-off-by: Nathan Lynch <nathanl@xxxxxxxxxxxxx>
    Reviewed-by: Christophe Leroy <christophe.leroy@xxxxxxxxxx>
    Signed-off-by: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
    Link: https://lore.kernel.org/r/20200813151131.2070161-1-nathanl@xxxxxxxxxxxxx
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/arch/powerpc/include/asm/drmem.h b/arch/powerpc/include/asm/drmem.h
index 9e516fe3daaba..668d8a121f1a0 100644
--- a/arch/powerpc/include/asm/drmem.h
+++ b/arch/powerpc/include/asm/drmem.h
@@ -12,6 +12,8 @@
 #ifndef _ASM_POWERPC_LMB_H
 #define _ASM_POWERPC_LMB_H
 
+#include <linux/sched.h>
+
 struct drmem_lmb {
 	u64     base_addr;
 	u32     drc_index;
@@ -27,8 +29,22 @@ struct drmem_lmb_info {
 
 extern struct drmem_lmb_info *drmem_info;
 
+static inline struct drmem_lmb *drmem_lmb_next(struct drmem_lmb *lmb,
+					       const struct drmem_lmb *start)
+{
+	/*
+	 * DLPAR code paths can take several milliseconds per element
+	 * when interacting with firmware. Ensure that we don't
+	 * unfairly monopolize the CPU.
+	 */
+	if (((++lmb - start) % 16) == 0)
+		cond_resched();
+
+	return lmb;
+}
+
 #define for_each_drmem_lmb_in_range(lmb, start, end)		\
-	for ((lmb) = (start); (lmb) < (end); (lmb)++)
+	for ((lmb) = (start); (lmb) < (end); lmb = drmem_lmb_next(lmb, start))
 
 #define for_each_drmem_lmb(lmb)					\
 	for_each_drmem_lmb_in_range((lmb),			\


