Re: [PATCH] mm, kasan: don't poison boot memory

On 2/19/2021 11:45 AM, George Kennedy wrote:


On 2/18/2021 7:09 PM, Andrey Konovalov wrote:
On Fri, Feb 19, 2021 at 1:06 AM George Kennedy
<george.kennedy@xxxxxxxxxx> wrote:


On 2/18/2021 3:55 AM, David Hildenbrand wrote:
On 17.02.21 21:56, Andrey Konovalov wrote:
During boot, all non-reserved memblock memory is exposed to the buddy
allocator. Poisoning all that memory with KASAN lengthens boot time,
especially on systems with a large amount of RAM. This patch makes
page_alloc skip calling kasan_free_pages() on all new memory.

__free_pages_core() is used when exposing fresh memory during system
boot and when onlining memory during hotplug. This patch adds a new
FPI_SKIP_KASAN_POISON flag and passes it to __free_pages_ok() through
free_pages_prepare() from __free_pages_core().
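
In code, the flag plumbing described above looks roughly like this (a
simplified sketch of the call path, not the exact mm/page_alloc.c hunks;
the bit value and the surrounding details are assumptions):

    /* mm/page_alloc.c (sketch; details assumed) */
    #define FPI_SKIP_KASAN_POISON	((__force fpi_t)BIT(2))

    void __free_pages_core(struct page *page, unsigned int order)
    {
            /* ... */
            /* Fresh boot/hotplug memory: skip KASAN poisoning below. */
            __free_pages_ok(page, order, FPI_SKIP_KASAN_POISON);
    }

    static bool free_pages_prepare(struct page *page, unsigned int order,
                                   bool check_free, fpi_t fpi_flags)
    {
            /* ... */
            if (!(fpi_flags & FPI_SKIP_KASAN_POISON))
                    kasan_free_pages(page, order);
            /* ... */
            return true;
    }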

This has little impact on KASAN memory tracking.

Assuming that there are no references to newly exposed pages before they are ever allocated, there won't be any intended (but buggy) accesses to
that memory that KASAN would normally detect.

However, with this patch, KASAN stops detecting wild and large
out-of-bounds accesses that happen to land on a fresh memory page that
was never allocated. This is taken as an acceptable trade-off.
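
As a hypothetical illustration of the class of bug this trades away
(names and sizes made up):

    /* A large out-of-bounds write whose target happens to land on a page
     * that was freed to the buddy allocator at boot and never allocated
     * since.  With boot-time poisoning, the target page's shadow says
     * "freed" and KASAN reports the access; without it, the fresh page is
     * unpoisoned and the bug goes unreported. */
    char *p = kmalloc(64, GFP_KERNEL);
    p[1UL << 20] = 0;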

All memory allocated normally when the boot is over keeps getting
poisoned as usual.

Signed-off-by: Andrey Konovalov <andreyknvl@xxxxxxxxxx>
Change-Id: Iae6b1e4bb8216955ffc14af255a7eaaa6f35324d

Not sure this is the right thing to do, see

https://lkml.kernel.org/r/bcf8925d-0949-3fe1-baa8-cc536c529860@xxxxxxxxxx

Reversing the order in which memory gets allocated + used during boot
(in a patch by me) might have revealed an invalid memory access during
boot.

I suspect that that issue would no longer get detected with your
patch, as the invalid memory access would simply not get detected.
Now, I cannot prove that :)
Since David's patch we're having trouble with the iBFT ACPI table, which
is mapped in via kmap() - see acpi_map() in "drivers/acpi/osl.c". KASAN
detects that it is being used after free when ibft_init() accesses the
iBFT table, but as of yet we can't find where it gets freed (we've
instrumented calls to kunmap()).
Maybe it doesn't get freed, but what you see is a wild or a large
out-of-bounds access. Since KASAN marks all memory as freed during the
memblock->page_alloc transition, such bugs can manifest as
use-after-frees.
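
Sketched out (a rough picture of that handoff, not the literal call
chain):

    /* memblock -> page_alloc transition, per page (before this patch): */
    __free_pages_core(page, order)
        -> free_pages_prepare(...)
            -> kasan_free_pages(page, order);  /* shadow marked "freed" */

    /* A later stray access landing in this page is then reported by KASAN
     * as a use-after-free of that "allocation", even if the real bug is a
     * wild pointer or a large out-of-bounds access from elsewhere. */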

It gets freed and re-used. By the time the iBFT table is accessed by ibft_init(), the page has been overwritten.

Setting page flags like the following before the call to kmap() prevents the iBFT table page from being freed:

Cleaned up version:

diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c
index 0418feb..8f0a8e7 100644
--- a/drivers/acpi/osl.c
+++ b/drivers/acpi/osl.c
@@ -287,9 +287,12 @@ static void __iomem *acpi_map(acpi_physical_address pg_off, unsigned long pg_sz)

     pfn = pg_off >> PAGE_SHIFT;
     if (should_use_kmap(pfn)) {
+        struct page *page = pfn_to_page(pfn);
+
         if (pg_sz > PAGE_SIZE)
             return NULL;
-        return (void __iomem __force *)kmap(pfn_to_page(pfn));
+        SetPageReserved(page);
+        return (void __iomem __force *)kmap(page);
     } else
         return acpi_os_ioremap(pg_off, pg_sz);
 }
@@ -299,9 +302,12 @@ static void acpi_unmap(acpi_physical_address pg_off, void __iomem *vaddr)
     unsigned long pfn;

     pfn = pg_off >> PAGE_SHIFT;
-    if (should_use_kmap(pfn))
-        kunmap(pfn_to_page(pfn));
-    else
+    if (should_use_kmap(pfn)) {
+        struct page *page = pfn_to_page(pfn);
+
+        ClearPageReserved(page);
+        kunmap(page);
+    } else
         iounmap(vaddr);
 }

David, the above works, but I'm wondering why it is now necessary. kunmap() is not hit. What other ways could a page mapped via kmap() be unmapped?

Thank you,
George


diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c
index 0418feb..41c1bbd 100644
--- a/drivers/acpi/osl.c
+++ b/drivers/acpi/osl.c
@@ -287,9 +287,14 @@ static void __iomem *acpi_map(acpi_physical_address pg_off, unsigned long pg_sz)

        pfn = pg_off >> PAGE_SHIFT;
        if (should_use_kmap(pfn)) {
+               struct page *page = pfn_to_page(pfn);
+
                if (pg_sz > PAGE_SIZE)
                        return NULL;
-               return (void __iomem __force *)kmap(pfn_to_page(pfn));
+
+               page->flags |= ((1UL << PG_unevictable) | (1UL << PG_reserved) | (1UL << PG_locked));
+
+               return (void __iomem __force *)kmap(page);
        } else
                return acpi_os_ioremap(pg_off, pg_sz);
 }

Just not sure of the correct way to set the page flags.
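
FWIW, the conventional idiom is the page-flag helpers used in the
cleaned-up version above rather than open-coded bit ops on page->flags:

    SetPageReserved(page);    /* rather than page->flags |= 1UL << PG_reserved */
    /* ... */
    ClearPageReserved(page);

(PG_locked in particular is normally taken with lock_page()/trylock_page()
rather than being set in the flags word directly.)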

George