On 2/12/25 6:59 AM, Andrey Ryabinin wrote:
> On Tue, Feb 11, 2025 at 5:08 PM Waiman Long <longman@xxxxxxxxxx> wrote:
>> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
>> index 3fe77a360f1c..e1ee687966aa 100644
>> --- a/mm/kasan/report.c
>> +++ b/mm/kasan/report.c
>> @@ -398,9 +398,20 @@ static void print_address_description(void *addr, u8 tag,
>>  		pr_err("\n");
>>  	}
>> -	if (is_vmalloc_addr(addr)) {
>> -		struct vm_struct *va = find_vm_area(addr);
>> +	if (!is_vmalloc_addr(addr))
>> +		goto print_page;
>> +	/*
>> +	 * RT kernel cannot call find_vm_area() in atomic context.
>> +	 * For !RT kernel, prevent spinlock_t inside raw_spinlock_t warning
>> +	 * by raising wait-type to WAIT_SLEEP.
>> +	 */
>> +	if (!IS_ENABLED(CONFIG_PREEMPT_RT)) {
>> +		static DEFINE_WAIT_OVERRIDE_MAP(vmalloc_map, LD_WAIT_SLEEP);
>> +		struct vm_struct *va;
>> +
>> +		lock_map_acquire_try(&vmalloc_map);
>> +		va = find_vm_area(addr);
>
> Can we hide all this logic behind some function like
> kasan_find_vm_area() which would return NULL for -rt?
Sure. We can certainly do that.
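Something along the lines of the following untested sketch, perhaps (the
name kasan_find_vm_area() is taken from your comment above; the exact
body and the placement of lock_map_release() are my assumption, to be
finalized in the next version):

```c
/*
 * Sketch only -- not the posted patch.  Hide the RT/lockdep details
 * from print_address_description():
 *
 *  - On PREEMPT_RT, find_vm_area() cannot be called from the atomic
 *    report context, so just return NULL.
 *  - Otherwise, wrap the call in a LD_WAIT_SLEEP override map so
 *    lockdep doesn't warn about spinlock_t inside raw_spinlock_t.
 */
static struct vm_struct *kasan_find_vm_area(void *addr)
{
	static DEFINE_WAIT_OVERRIDE_MAP(vmalloc_map, LD_WAIT_SLEEP);
	struct vm_struct *va;

	if (IS_ENABLED(CONFIG_PREEMPT_RT))
		return NULL;

	/* Raise the wait-type of locks taken in here to WAIT_SLEEP. */
	lock_map_acquire_try(&vmalloc_map);
	va = find_vm_area(addr);
	lock_map_release(&vmalloc_map);

	return va;
}
```

The caller would then just check for a NULL return and fall through to
the page-based report.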
>>  		if (va) {
>>  			pr_err("The buggy address belongs to the virtual mapping at\n"
>>  			       " [%px, %px) created by:\n"
>> @@ -410,8 +421,13 @@ static void print_address_description(void *addr, u8 tag,
>>  	page = vmalloc_to_page(addr);
>
> Or does vmalloc_to_page() secretly take some lock somewhere, so that we
> need to guard it with this 'vmalloc_map' too? If that's the case, my
> suggestion above wouldn't be enough.
AFAICS, vmalloc_to_page() doesn't take any lock. Even if it did take
another spinlock, that would still be covered by the vmalloc_map
override until lock_map_release() is called.
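
The quoted hunk is truncated, so the placement of lock_map_release()
isn't visible above; the argument assumes the posted patch brackets the
whole section roughly like this (sketch, not the actual diff):

```c
	lock_map_acquire_try(&vmalloc_map);	/* wait-type raised to WAIT_SLEEP */
	va = find_vm_area(addr);		/* no splat for vmap locks */
	/* ... print the vm_struct info ... */
	page = vmalloc_to_page(addr);		/* any lock taken here is covered too */
	lock_map_release(&vmalloc_map);		/* override ends only here */
```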
Cheers,
Longman