Re: [PATCH v2 13/14] x86: runtime_const used for KASAN_SHADOW_END

On 2025-02-20 at 00:31:08 +0100, Andrey Konovalov wrote:
>On Tue, Feb 18, 2025 at 9:20 AM Maciej Wieczor-Retman
><maciej.wieczor-retman@xxxxxxxxx> wrote:
>>
>> On x86, generic KASAN is set up in a way that needs a single
>> KASAN_SHADOW_OFFSET value for both 4- and 5-level paging. It's required
>> to facilitate boot-time switching and it's part of the compiler ABI, so
>> it can't be changed at runtime.
>>
>> Software tag-based mode doesn't tie the shadow start and end to any
>> linear addresses as part of the compiler ABI, so they can be changed at
>> runtime.
>
>KASAN_SHADOW_OFFSET is passed to the compiler via
>hwasan-mapping-offset, see scripts/Makefile.kasan (for the INLINE
>mode). So while we can change its value, it has to be known at compile
>time. So I don't think using a runtime constant would work.

I don't know about arm64, but this doesn't seem to work right now on x86. I
recall reading that hwasan-mapping-offset isn't implemented in LLVM's x86
backend, or something along those lines. I'm sure I saw a note about it a
while ago but couldn't find it again today.

Anyway, if KASAN_SHADOW_OFFSET is not set at compile time, it defaults to
nothing and simply doesn't get added to kasan-params a few lines below in
scripts/Makefile.kasan. Or do you think that result is a little too makeshift
for the runtime const approach to make sense here?
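
For context, without hwasan-mapping-offset the offset is only consumed on the
kernel side, mainly by the mapping helper, which (quoting include/linux/kasan.h
roughly from memory) looks something like this:

static inline void *kasan_mem_to_shadow(const void *addr)
{
	/* shadow address = (address >> scale shift) + offset */
	return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
		+ KASAN_SHADOW_OFFSET;
}

so patching the offset as a runtime constant would only affect that path and
not anything the compiler emits.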

>
>Which means that KASAN_SHADOW_OFFSET has to have such a value that
>works for both 4 and 5 level page tables. This possibly means we might
>need something different than the first patch in this series.

I'll give more thought to using a single offset for both paging levels so that
it's as close to optimal as possible.

>
>But in case I'm wrong, I left comments for the current code below.
>
>> For KASAN purposes, this makes it possible to optimize out macros
>> such as pgtable_l5_enabled() which would otherwise be used in every
>> single KASAN-related function.
>>
>> Use the runtime_const infrastructure with pgtable_l5_enabled() to
>> initialize the end address of KASAN's shadow address space. It's a good
>> choice since in software tag-based mode KASAN_SHADOW_OFFSET and
>> KASAN_SHADOW_END refer to the same value and the offset in
>> kasan_mem_to_shadow() is a signed negative value.
>>
>> Set up the KASAN_SHADOW_END values so that they're aligned to 4TB in
>> 4-level paging mode and to 2PB in 5-level paging mode. Also update the
>> x86 memory map documentation.
>>
>> Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@xxxxxxxxx>
>> ---
>> Changelog v2:
>> - Change documentation kasan start address to non-dense values.
>>
>>  Documentation/arch/x86/x86_64/mm.rst |  6 ++++--
>>  arch/x86/Kconfig                     |  3 +--
>>  arch/x86/include/asm/kasan.h         | 14 +++++++++++++-
>>  arch/x86/kernel/vmlinux.lds.S        |  1 +
>>  arch/x86/mm/kasan_init_64.c          |  5 ++++-
>>  5 files changed, 23 insertions(+), 6 deletions(-)
>>
>> diff --git a/Documentation/arch/x86/x86_64/mm.rst b/Documentation/arch/x86/x86_64/mm.rst
>> index f2db178b353f..5014ec322e19 100644
>> --- a/Documentation/arch/x86/x86_64/mm.rst
>> +++ b/Documentation/arch/x86/x86_64/mm.rst
>> @@ -60,7 +60,8 @@ Complete virtual memory map with 4-level page tables
>>     ffffe90000000000 |  -23    TB | ffffe9ffffffffff |    1 TB | ... unused hole
>>     ffffea0000000000 |  -22    TB | ffffeaffffffffff |    1 TB | virtual memory map (vmemmap_base)
>>     ffffeb0000000000 |  -21    TB | ffffebffffffffff |    1 TB | ... unused hole
>> -   ffffec0000000000 |  -20    TB | fffffbffffffffff |   16 TB | KASAN shadow memory
>> +   ffffec0000000000 |  -20    TB | fffffbffffffffff |   16 TB | KASAN shadow memory (generic mode)
>> +   fffff40000000000 |   -8    TB | fffffc0000000000 |    8 TB | KASAN shadow memory (software tag-based mode)
>>    __________________|____________|__________________|_________|____________________________________________________________
>>                                                                |
>>                                                                | Identical layout to the 56-bit one from here on:
>> @@ -130,7 +131,8 @@ Complete virtual memory map with 5-level page tables
>>     ffd2000000000000 |  -11.5  PB | ffd3ffffffffffff |  0.5 PB | ... unused hole
>>     ffd4000000000000 |  -11    PB | ffd5ffffffffffff |  0.5 PB | virtual memory map (vmemmap_base)
>>     ffd6000000000000 |  -10.5  PB | ffdeffffffffffff | 2.25 PB | ... unused hole
>> -   ffdf000000000000 |   -8.25 PB | fffffbffffffffff |   ~8 PB | KASAN shadow memory
>> +   ffdf000000000000 |   -8.25 PB | fffffbffffffffff |   ~8 PB | KASAN shadow memory (generic mode)
>> +   ffe0000000000000 |   -6    PB | fff0000000000000 |    4 PB | KASAN shadow memory (software tag-based mode)
>>    __________________|____________|__________________|_________|____________________________________________________________
>>                                                                |
>>                                                                | Identical layout to the 47-bit one from here on:
>> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
>> index 6df7779ed6da..f4ef64bf824a 100644
>> --- a/arch/x86/Kconfig
>> +++ b/arch/x86/Kconfig
>> @@ -400,8 +400,7 @@ config AUDIT_ARCH
>>
>>  config KASAN_SHADOW_OFFSET
>>         hex
>> -       depends on KASAN
>> -       default 0xdffffc0000000000
>> +       default 0xdffffc0000000000 if KASAN_GENERIC
>
>Let's put a comment here explaining what happens if !KASAN_GENERIC.
>
>Also, as I mentioned in the first patch, we need to figure out what to
>do with scripts/gdb/linux/kasan.py.

I'll look through the scripts. Maybe it's possible to figure out from there
whether 5-level paging is enabled and set up the KASAN offset based on that.

>
>>
>>  config HAVE_INTEL_TXT
>>         def_bool y
>> diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
>> index a75f0748a4b6..4bfd3641af84 100644
>> --- a/arch/x86/include/asm/kasan.h
>> +++ b/arch/x86/include/asm/kasan.h
>> @@ -5,7 +5,7 @@
>>  #include <linux/const.h>
>>  #include <linux/kasan-tags.h>
>>  #include <linux/types.h>
>> -#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
>> +
>>  #define KASAN_SHADOW_SCALE_SHIFT 3
>>
>>  /*
>> @@ -14,6 +14,8 @@
>>   * for kernel really starts from compiler's shadow offset +
>>   * 'kernel address space start' >> KASAN_SHADOW_SCALE_SHIFT
>>   */
>> +#ifdef CONFIG_KASAN_GENERIC
>> +#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
>>  #define KASAN_SHADOW_START      (KASAN_SHADOW_OFFSET + \
>>                                         ((-1UL << __VIRTUAL_MASK_SHIFT) >> \
>>                                                 KASAN_SHADOW_SCALE_SHIFT))
>> @@ -24,12 +26,22 @@
>>  #define KASAN_SHADOW_END        (KASAN_SHADOW_START + \
>>                                         (1ULL << (__VIRTUAL_MASK_SHIFT - \
>>                                                   KASAN_SHADOW_SCALE_SHIFT)))
>> +#endif
>> +
>>
>>  #ifndef __ASSEMBLY__
>> +#include <asm/runtime-const.h>
>>  #include <linux/bitops.h>
>>  #include <linux/bitfield.h>
>>  #include <linux/bits.h>
>>
>> +#ifdef CONFIG_KASAN_SW_TAGS
>> +extern unsigned long KASAN_SHADOW_END_RC;
>> +#define KASAN_SHADOW_END       runtime_const_ptr(KASAN_SHADOW_END_RC)
>> +#define KASAN_SHADOW_OFFSET    KASAN_SHADOW_END
>> +#define KASAN_SHADOW_START     (KASAN_SHADOW_END - ((UL(1)) << (__VIRTUAL_MASK_SHIFT - KASAN_SHADOW_SCALE_SHIFT)))
>
>Any reason these are under __ASSEMBLY__? They seem to belong better
>together with the CONFIG_KASAN_GENERIC definitions above.

I remember getting a wall of odd-looking compile errors when this wasn't under
the __ASSEMBLY__ guard, but I'll recheck.
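
If the errors come from assembly users of the header, one arrangement I could
try is keeping only the runtime_const_ptr() part under the __ASSEMBLY__ guard
while grouping it with the generic definitions, roughly (an untested sketch):

#ifdef CONFIG_KASAN_GENERIC
#define KASAN_SHADOW_OFFSET	_AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
/* KASAN_SHADOW_START/END derived from the offset as before */
#elif !defined(__ASSEMBLY__) /* CONFIG_KASAN_SW_TAGS */
extern unsigned long KASAN_SHADOW_END_RC;
#define KASAN_SHADOW_END	runtime_const_ptr(KASAN_SHADOW_END_RC)
#define KASAN_SHADOW_OFFSET	KASAN_SHADOW_END
#define KASAN_SHADOW_START	(KASAN_SHADOW_END - \
				 (1UL << (__VIRTUAL_MASK_SHIFT - \
					  KASAN_SHADOW_SCALE_SHIFT)))
#endif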

>
>> +#endif
>> +
>>  #define arch_kasan_set_tag(addr, tag)  __tag_set(addr, tag)
>>  #define arch_kasan_reset_tag(addr)     __tag_reset(addr)
>>  #define arch_kasan_get_tag(addr)       __tag_get(addr)
>> diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
>> index 0deb4887d6e9..df6c85f8f48f 100644
>> --- a/arch/x86/kernel/vmlinux.lds.S
>> +++ b/arch/x86/kernel/vmlinux.lds.S
>> @@ -353,6 +353,7 @@ SECTIONS
>>
>>         RUNTIME_CONST_VARIABLES
>>         RUNTIME_CONST(ptr, USER_PTR_MAX)
>> +       RUNTIME_CONST(ptr, KASAN_SHADOW_END_RC)
>>
>>         . = ALIGN(PAGE_SIZE);
>>
>> diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
>> index 299a2144dac4..5ca5862a5cd6 100644
>> --- a/arch/x86/mm/kasan_init_64.c
>> +++ b/arch/x86/mm/kasan_init_64.c
>> @@ -358,6 +358,9 @@ void __init kasan_init(void)
>>         int i;
>>
>>         memcpy(early_top_pgt, init_top_pgt, sizeof(early_top_pgt));
>> +       unsigned long KASAN_SHADOW_END_RC = pgtable_l5_enabled() ? 0xfff0000000000000 : 0xfffffc0000000000;
>
>I think defining these constants in arch/x86/include/asm/kasan.h is
>cleaner than hardcoding them here.
>

Okay, I'll change that.
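
Just to make sure I read the suggestion right, something along these lines
(the macro names below are placeholders, not final):

In arch/x86/include/asm/kasan.h:

#ifdef CONFIG_KASAN_SW_TAGS
/* Shadow end addresses for 4- and 5-level paging, matching mm.rst above. */
#define KASAN_SHADOW_END_4L	_AC(0xfffffc0000000000, UL)
#define KASAN_SHADOW_END_5L	_AC(0xfff0000000000000, UL)
#endif

and then in kasan_init():

	unsigned long KASAN_SHADOW_END_RC = pgtable_l5_enabled() ?
					    KASAN_SHADOW_END_5L :
					    KASAN_SHADOW_END_4L;

	runtime_const_init(ptr, KASAN_SHADOW_END_RC);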

>
>> +
>> +       runtime_const_init(ptr, KASAN_SHADOW_END_RC);
>>
>>         /*
>>          * We use the same shadow offset for 4- and 5-level paging to
>> @@ -372,7 +375,7 @@ void __init kasan_init(void)
>>          * bunch of things like kernel code, modules, EFI mapping, etc.
>>          * We need to take extra steps to not overwrite them.
>>          */
>> -       if (pgtable_l5_enabled()) {
>> +       if (pgtable_l5_enabled() && !IS_ENABLED(CONFIG_KASAN_SW_TAGS)) {
>>                 void *ptr;
>>
>>                 ptr = (void *)pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_END));
>> --
>> 2.47.1
>>

-- 
Kind regards
Maciej Wieczór-Retman



