Hi Dinh,
On 6/14/23 00:39, Dinh Nguyen wrote:
> Thanks for the patch. Does it need a Fixes tag?
I did not add a Fixes tag for the parisc or arm versions.
The code was originally correct, but later patches started issuing
cache flushes from IRQ context (e.g. 21b40200cfe96 ("aio: use
flush_dcache_page()")), which then triggers the bug.
So it's hard to say that it fixes one specific commit.
I suggest you backport it as far as possible.
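
To make the failure mode concrete, here is a toy userspace sketch (my
own illustration, not kernel code) that models the CPU interrupt flag
with a plain bool. The old lock_irq()/unlock_irq() pattern
unconditionally re-enables "interrupts" on unlock, even when the
caller had them disabled, while the irqsave/irqrestore pattern
restores whatever state the caller had:

#include <stdbool.h>
#include <stdio.h>

/* Toy model of the CPU interrupt-enable flag; not kernel code. */
static bool irqs_enabled = true;

static void local_irq_disable(void) { irqs_enabled = false; }
static void local_irq_enable(void)  { irqs_enabled = true;  }

/* Old pattern: unlock unconditionally re-enables interrupts. */
static void lock_irq(void)   { local_irq_disable(); /* ...take lock... */ }
static void unlock_irq(void) { /* ...drop lock... */ local_irq_enable(); }

/* Fixed pattern: save the previous state, restore it on unlock. */
static void lock_irqsave(bool *flags)
{
	*flags = irqs_enabled;
	local_irq_disable();	/* ...take lock... */
}

static void unlock_irqrestore(bool flags)
{
	/* ...drop lock... */
	irqs_enabled = flags;
}

int main(void)
{
	bool flags;

	/* Caller (think aio_complete()) already runs with IRQs off. */
	local_irq_disable();
	lock_irq();
	unlock_irq();
	printf("after unlock_irq:        IRQs %s (wrong, caller wanted them off)\n",
	       irqs_enabled ? "on" : "off");

	local_irq_disable();	/* reset for the second run */
	lock_irqsave(&flags);
	unlock_irqrestore(flags);
	printf("after unlock_irqrestore: IRQs %s (matches caller's state)\n",
	       irqs_enabled ? "on" : "off");

	return 0;
}

Once interrupts are re-enabled behind the caller's back, an interrupt
can recurse into a lock the interrupted code still holds, which is the
deadlock the patch below avoids.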
Helge
> Dinh
>
> On 5/24/23 10:26, Helge Deller wrote:
>> Since at least kernel 6.1, flush_dcache_page() is called with IRQs
>> disabled, e.g. from aio_complete().
>>
>> But the current implementation of flush_dcache_page() on NIOS2
>> unintentionally re-enables IRQs, which may lead to deadlocks.
>>
>> Fix it by using xa_lock_irqsave() and xa_unlock_irqrestore()
>> for the flush_dcache_mmap_*lock() macros instead.
>>
>> Cc: Dinh Nguyen <dinguyen@xxxxxxxxxx>
>> Signed-off-by: Helge Deller <deller@xxxxxx>
>> ---
>>  arch/nios2/include/asm/cacheflush.h | 4 ++++
>>  arch/nios2/mm/cacheflush.c          | 5 +++--
>>  2 files changed, 7 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/nios2/include/asm/cacheflush.h b/arch/nios2/include/asm/cacheflush.h
>> index d0b71dd71287..a37242662809 100644
>> --- a/arch/nios2/include/asm/cacheflush.h
>> +++ b/arch/nios2/include/asm/cacheflush.h
>> @@ -48,5 +48,9 @@ extern void invalidate_dcache_range(unsigned long start, unsigned long end);
>>  
>>  #define flush_dcache_mmap_lock(mapping)		xa_lock_irq(&mapping->i_pages)
>>  #define flush_dcache_mmap_unlock(mapping)	xa_unlock_irq(&mapping->i_pages)
>> +#define flush_dcache_mmap_lock_irqsave(mapping, flags)		\
>> +	xa_lock_irqsave(&mapping->i_pages, flags)
>> +#define flush_dcache_mmap_unlock_irqrestore(mapping, flags)	\
>> +	xa_unlock_irqrestore(&mapping->i_pages, flags)
>>  
>>  #endif /* _ASM_NIOS2_CACHEFLUSH_H */
>> diff --git a/arch/nios2/mm/cacheflush.c b/arch/nios2/mm/cacheflush.c
>> index 6aa9257c3ede..35f3b599187f 100644
>> --- a/arch/nios2/mm/cacheflush.c
>> +++ b/arch/nios2/mm/cacheflush.c
>> @@ -75,11 +75,12 @@ static void flush_aliases(struct address_space *mapping, struct page *page)
>>  {
>>  	struct mm_struct *mm = current->active_mm;
>>  	struct vm_area_struct *mpnt;
>> +	unsigned long flags;
>>  	pgoff_t pgoff;
>>  
>>  	pgoff = page->index;
>>  
>> -	flush_dcache_mmap_lock(mapping);
>> +	flush_dcache_mmap_lock_irqsave(mapping, flags);
>>  	vma_interval_tree_foreach(mpnt, &mapping->i_mmap, pgoff, pgoff) {
>>  		unsigned long offset;
>>  
>> @@ -92,7 +93,7 @@ static void flush_aliases(struct address_space *mapping, struct page *page)
>>  		flush_cache_page(mpnt, mpnt->vm_start + offset,
>>  			page_to_pfn(page));
>>  	}
>> -	flush_dcache_mmap_unlock(mapping);
>> +	flush_dcache_mmap_unlock_irqrestore(mapping, flags);
>>  }
>>  
>>  void flush_cache_all(void)
>> -- 
>> 2.38.1