Re: [PATCH 06/16] x86/boot: Setup memory protection for bzImage code


 



On 2022-10-19 10:17, Ard Biesheuvel wrote:
On Tue, 6 Sept 2022 at 12:42, Evgeniy Baskov <baskov@xxxxxxxxx> wrote:

Use the previously added code to switch to 4KB pages for mapping. Map the
compressed and uncompressed kernel with appropriate memory protection
attributes. For the compressed kernel, set them up manually. For the
uncompressed kernel, use the flags specified in the ELF header.

Signed-off-by: Evgeniy Baskov <baskov@xxxxxxxxx>
...
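As a side note on the "use the flags specified in the ELF header" part: the translation
from ELF segment flags into the mapping flags introduced by this series could look
roughly like the sketch below (elf_flags_to_map_flags() is only an illustrative name,
not code from the patch itself):

        /*
         * Sketch only: translate ELF p_flags into the MAP_* flags used by
         * kernel_add_identity_map() in this series.  The helper name is
         * hypothetical.
         */
        static unsigned int elf_flags_to_map_flags(unsigned int p_flags)
        {
                unsigned int flags = 0;

                if (p_flags & PF_X)
                        flags |= MAP_EXEC;
                if (p_flags & PF_W)
                        flags |= MAP_WRITE;

                return flags;
        }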

 /*
  * Locally defined symbols should be marked hidden:
@@ -578,6 +578,7 @@ SYM_FUNC_START_LOCAL_NOALIGN(.Lrelocated)
        pushq   %rsi
        call    load_stage2_idt

+       call    startup32_enable_nx_if_supported
        /* Pass boot_params to initialize_identity_maps() */
        movq    (%rsp), %rdi
        call    initialize_identity_maps
@@ -602,6 +603,28 @@ SYM_FUNC_START_LOCAL_NOALIGN(.Lrelocated)
        jmp     *%rax
 SYM_FUNC_END(.Lrelocated)

+SYM_FUNC_START_LOCAL_NOALIGN(startup32_enable_nx_if_supported)

Why the startup32_ prefix for this function name?

Oh, right, there is no reason. I will remove it.
...
 /*
  * Adds the specified range to the identity mappings.
  */
-void kernel_add_identity_map(unsigned long start, unsigned long end)
+unsigned long kernel_add_identity_map(unsigned long start,
+                                     unsigned long end,
+                                     unsigned int flags)
 {
        int ret;

        /* Align boundary to 2M. */
-       start = round_down(start, PMD_SIZE);
-       end = round_up(end, PMD_SIZE);
+       start = round_down(start, PAGE_SIZE);
+       end = round_up(end, PAGE_SIZE);
        if (start >= end)
-               return;
+               return start;
+
+       /* Enforce W^X -- just stop booting with error on violation. */
+       if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) &&
+           (flags & (MAP_EXEC | MAP_WRITE)) == (MAP_EXEC | MAP_WRITE))
+               error("Error: W^X violation\n");
+

Do we need to add a new failure mode here?

It seems reasonable to me to leave it here to avoid unintentionally introducing
RWX mappings. And this function can already fail in an OOM situation.
I can change it to a warning if a hard failure is too harsh in this situation.
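For reference, the warning variant would look roughly like this (a sketch, assuming
the warn() helper from misc.c can be used here):

        /* Sketch: warn about the W^X violation instead of stopping the boot. */
        if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) &&
            (flags & (MAP_EXEC | MAP_WRITE)) == (MAP_EXEC | MAP_WRITE))
                warn("W^X violation");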

+       bool nx = !(flags & MAP_EXEC) && has_nx;
+       bool ro = !(flags & MAP_WRITE);
+
...
-       kernel_add_identity_map((unsigned long)_head, (unsigned long)_end);
-       boot_params = rmode;
-       kernel_add_identity_map((unsigned long)boot_params, (unsigned long)(boot_params + 1));
+       extern char _head[], _ehead[];

Please move these extern declarations out of the function scope (at
the very least)

I will move them to misc.h then; there are already some of these declarations present there.
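Concretely, something like this in misc.h (just a sketch; the symbol names are the
ones used in the hunks below):

        /* Section boundary symbols defined by the compressed kernel linker script. */
        extern char _head[], _ehead[];
        extern char _compressed[], _ecompressed[];
        extern char _text[], _etext[];
        extern char _rodata[], _erodata[];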


+       kernel_add_identity_map((unsigned long)_head,
+                               (unsigned long)_ehead, MAP_EXEC | MAP_NOFLUSH);
+
+       extern char _compressed[], _ecompressed[];
+       kernel_add_identity_map((unsigned long)_compressed,
+                               (unsigned long)_ecompressed, MAP_WRITE | MAP_NOFLUSH);
+
+       extern char _text[], _etext[];
+       kernel_add_identity_map((unsigned long)_text,
+                               (unsigned long)_etext, MAP_EXEC | MAP_NOFLUSH);
+
+       extern char _rodata[], _erodata[];
+       kernel_add_identity_map((unsigned long)_rodata,
+                               (unsigned long)_erodata, MAP_NOFLUSH);
+

Same question as before: do we really need three different regions for
rodata+text here?

As I mentioned before, I think it is undesirable to leave the compressed kernel blob
(and .rodata) executable, since it widens the attack surface if a control-flow
interception vulnerability in this code is ever discovered. Although I am not aware
of any such vulnerability at the moment, I think the additional hardening is not
redundant, since it comes almost for free.

I can merge these regions if you think the separation is not worth it.
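If we do merge them, it would look roughly like this (a sketch, assuming .rodata
directly follows .text in the compressed kernel layout):

        /*
         * Sketch of the merged variant: .text and .rodata as one executable,
         * read-only region (at the cost of making .rodata executable).
         */
        kernel_add_identity_map((unsigned long)_text,
                                (unsigned long)_erodata, MAP_EXEC | MAP_NOFLUSH);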


...
+                       /*
+                        * Simultaneously executable and writable segments
+                        * are violating W^X, and should not be present in
+                        * the vmlinux image.
+                        */
+                       if ((phdr->p_flags & (PF_X | PF_W)) == (PF_X | PF_W))
+                               error("W^X violation for ELF segment");
+

Can we catch this at build time instead?

Thanks, that's a great idea! I will implement that in tools/build.c.
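Roughly what I have in mind for the build-time check (a sketch only; check_wx() is a
hypothetical helper, and build.c would still need to locate the program headers of the
image first):

        #include <elf.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* Sketch: reject any ELF segment that is both writable and executable. */
        static void check_wx(const Elf64_Phdr *phdrs, unsigned int count)
        {
                unsigned int i;

                for (i = 0; i < count; i++) {
                        if ((phdrs[i].p_flags & (PF_W | PF_X)) == (PF_W | PF_X)) {
                                fprintf(stderr, "W^X violation in ELF segment %u\n", i);
                                exit(1);
                        }
                }
        }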


...
+#else
+static inline unsigned long kernel_add_identity_map(unsigned long start,
+                                                   unsigned long end,
+                                                   unsigned int flags)
+{
+       (void)flags;
+       (void)end;

Why these (void) casts? Can we just drop them?


Unused parameters used to cause warnings for me here somehow...
I will drop them.
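So the stub would become just this (a sketch, assuming it keeps returning start the way
the 64-bit version does on its early exit):

        static inline unsigned long kernel_add_identity_map(unsigned long start,
                                                            unsigned long end,
                                                            unsigned int flags)
        {
                return start;
        }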


