Backport request: parisc: Fix flush_dcache_page() for usage from irq context

Dear stable kernel team,

could you please add the patch below to all stable kernels
from v4.19 up to v5.15.

It's a manual backport of upstream commit 61e150fb310729c98227a5edf6e4a3619edc3702,
which otherwise doesn't apply cleanly.

Thanks!
Helge
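
For context, the problem the patch addresses is that xa_lock_irq()/xa_unlock_irq()
unconditionally disable and then re-enable interrupts, so a caller that already has
IRQs off (e.g. irq context) gets them wrongly re-enabled on unlock, while the
_irqsave/_irqrestore variants put back whatever state the caller had. Below is a
minimal userspace model of those two locking disciplines. It is an illustration
only, not part of the patch or of the kernel API: irqs_enabled, irq_disable() and
irq_enable() are made-up stand-ins for the real hardware interrupt state.

#include <stdbool.h>
#include <stdio.h>

static bool irqs_enabled = true;

static void irq_disable(void) { irqs_enabled = false; }
static void irq_enable(void)  { irqs_enabled = true;  }

/* xa_lock_irq()/xa_unlock_irq() semantics: disable on lock,
 * unconditionally re-enable on unlock. */
static void lock_irq(void)    { irq_disable(); /* ... take lock ... */ }
static void unlock_irq(void)  { /* ... drop lock ... */ irq_enable(); }

/* xa_lock_irqsave()/xa_unlock_irqrestore() semantics: remember the
 * caller's interrupt state and restore it on unlock. */
static void lock_irqsave(bool *flags)
{
	*flags = irqs_enabled;
	irq_disable();
	/* ... take lock ... */
}

static void unlock_irqrestore(bool flags)
{
	/* ... drop lock ... */
	irqs_enabled = flags;
}

int main(void)
{
	bool flags;

	/* Caller already runs with IRQs off, e.g. from irq context. */
	irq_disable();
	lock_irq();
	unlock_irq();
	printf("_irq variant:     irqs_enabled=%d (wrongly re-enabled)\n",
	       irqs_enabled);

	irq_disable();
	lock_irqsave(&flags);
	unlock_irqrestore(flags);
	printf("_irqsave variant: irqs_enabled=%d (caller state kept)\n",
	       irqs_enabled);

	return 0;
}

That preserved-state behaviour is exactly what the new
flush_dcache_mmap_lock_irqsave()/flush_dcache_mmap_unlock_irqrestore()
macros in the patch rely on.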

From 97d6d8f6248364ec916e9642a58f1ed14a1eb147 Mon Sep 17 00:00:00 2001
From: Helge Deller <deller@xxxxxx>
Date: Fri, 26 May 2023 22:51:07 +0200
Subject: [PATCH] parisc: Fix flush_dcache_page() for usage from irq context

flush_dcache_page() may be called with IRQs disabled.

But the current implementation for flush_dcache_page() on parisc
unintentionally re-enables IRQs, which may lead to deadlocks.

Fix it by using xa_lock_irqsave() and xa_unlock_irqrestore()
for the flush_dcache_mmap_*lock() macros instead.

Signed-off-by: Helge Deller <deller@xxxxxx>
---
 arch/parisc/include/asm/cacheflush.h | 5 +++++
 arch/parisc/kernel/cache.c           | 5 +++--
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/parisc/include/asm/cacheflush.h b/arch/parisc/include/asm/cacheflush.h
index eef0096db5f8..2f4c45f60ae1 100644
--- a/arch/parisc/include/asm/cacheflush.h
+++ b/arch/parisc/include/asm/cacheflush.h
@@ -53,6 +53,11 @@ extern void flush_dcache_page(struct page *page);

 #define flush_dcache_mmap_lock(mapping)		xa_lock_irq(&mapping->i_pages)
 #define flush_dcache_mmap_unlock(mapping)	xa_unlock_irq(&mapping->i_pages)
+#define flush_dcache_mmap_lock_irqsave(mapping, flags)		\
+		xa_lock_irqsave(&mapping->i_pages, flags)
+#define flush_dcache_mmap_unlock_irqrestore(mapping, flags)	\
+		xa_unlock_irqrestore(&mapping->i_pages, flags)
+

 #define flush_icache_page(vma,page)	do { 		\
 	flush_kernel_dcache_page_addr(page_address(page)); \
diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c
index 394e6e14e5c4..c473c2f395a0 100644
--- a/arch/parisc/kernel/cache.c
+++ b/arch/parisc/kernel/cache.c
@@ -324,6 +324,7 @@ void flush_dcache_page(struct page *page)
 	struct vm_area_struct *mpnt;
 	unsigned long offset;
 	unsigned long addr, old_addr = 0;
+	unsigned long flags;
 	pgoff_t pgoff;

 	if (mapping && !mapping_mapped(mapping)) {
@@ -343,7 +344,7 @@ void flush_dcache_page(struct page *page)
 	 * declared as MAP_PRIVATE or MAP_SHARED), so we only need
 	 * to flush one address here for them all to become coherent */

-	flush_dcache_mmap_lock(mapping);
+	flush_dcache_mmap_lock_irqsave(mapping, flags);
 	vma_interval_tree_foreach(mpnt, &mapping->i_mmap, pgoff, pgoff) {
 		offset = (pgoff - mpnt->vm_pgoff) << PAGE_SHIFT;
 		addr = mpnt->vm_start + offset;
@@ -366,7 +367,7 @@ void flush_dcache_page(struct page *page)
 			old_addr = addr;
 		}
 	}
-	flush_dcache_mmap_unlock(mapping);
+	flush_dcache_mmap_unlock_irqrestore(mapping, flags);
 }
 EXPORT_SYMBOL(flush_dcache_page);

--
2.38.1




