Re: [PATCH v7 4/6] x86/entry: Enable random_kstack_offset support

* Kees Cook <keescook@xxxxxxxxxxxx> wrote:

> Allow for a randomized stack offset on a per-syscall basis, with roughly
> 5-6 bits of entropy, depending on compiler and word size. Since the
> method of offsetting uses macros, this cannot live in the common entry
> code (the stack offset needs to be retained for the life of the syscall,
> which means it needs to happen at the actual entry point).

>  __visible noinstr void do_syscall_64(unsigned long nr, struct pt_regs *regs)
>  {
> +	add_random_kstack_offset();
>  	nr = syscall_enter_from_user_mode(regs, nr);

> @@ -83,6 +84,7 @@ __visible noinstr void do_int80_syscall_32(struct pt_regs *regs)
>  {
>  	unsigned int nr = syscall_32_enter(regs);
>  
> +	add_random_kstack_offset();

>  	unsigned int nr = syscall_32_enter(regs);
>  	int res;
>  
> +	add_random_kstack_offset();

> @@ -70,6 +71,13 @@ static inline void arch_exit_to_user_mode_prepare(struct pt_regs *regs,
>  	 */
>  	current_thread_info()->status &= ~(TS_COMPAT | TS_I386_REGS_POKED);
>  #endif
> +
> +	/*
> +	 * x86_64 stack alignment means 3 bits are ignored, so keep
> +	 * the top 5 bits. x86_32 needs only 2 bits of alignment, so
> +	 * the top 6 bits will be used.
> +	 */
> +	choose_random_kstack_offset(rdtsc() & 0xFF);
>  }

1)

I'm wondering why the calculation of the kstack offset (which happens 
on every syscall) is separated from the entry-time logic and happens 
during the return to user-space?

The two methods:

+#define add_random_kstack_offset() do {                                        \
+       if (static_branch_maybe(CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT, \
+                               &randomize_kstack_offset)) {            \
+               u32 offset = this_cpu_read(kstack_offset);              \
+               u8 *ptr = __builtin_alloca(offset & 0x3FF);             \
+               asm volatile("" : "=m"(*ptr) :: "memory");              \
+       }                                                               \
+} while (0)
+
+#define choose_random_kstack_offset(rand) do {                         \
+       if (static_branch_maybe(CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT, \
+                               &randomize_kstack_offset)) {            \
+               u32 offset = this_cpu_read(kstack_offset);              \
+               offset ^= (rand);                                       \
+               this_cpu_write(kstack_offset, offset);                  \
+       }                                                               \
+} while (0)

choose_random_kstack_offset() calculates the new offset and stores it 
in a percpu variable (mixing it with the previous offset value), while 
add_random_kstack_offset() consumes it via an alloca() dynamic stack 
allocation.

Wouldn't the combined overhead be (slightly) lower if this was done in 
a single step? There would be code duplication across the 3 syscall 
entry points, but that should be marginal: the code is small, and the 
entry points would probably be cache-hot anyway.
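
E.g. a single combined macro - a minimal, untested sketch that reuses 
the existing randomize_kstack_offset static branch and the 
kstack_offset percpu variable, with the per-arch entropy value passed 
in by the caller:

#define add_random_kstack_offset(rand) do {                             \
        if (static_branch_maybe(CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT, \
                                &randomize_kstack_offset)) {            \
                /* Mix the new entropy with the previous offset: */     \
                u32 offset = this_cpu_read(kstack_offset) ^ (rand);     \
                this_cpu_write(kstack_offset, offset);                  \
                /* Consume the offset via a dynamic allocation: */      \
                u8 *ptr = __builtin_alloca(offset & 0x3FF);             \
                asm volatile("" : "=m"(*ptr) :: "memory");              \
        }                                                               \
} while (0)

Each entry point would then do:

        add_random_kstack_offset(rdtsc() & 0xFF);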

2)

Another detail I noticed: add_random_kstack_offset() masks the offset 
with 0x3FF, i.e. 10 bits, allowing offsets of up to 1023 bytes.

But the RDTSC mask is 0xff, 8 bits:

+       /*
+        * x86_64 stack alignment means 3 bits are ignored, so keep
+        * the top 5 bits. x86_32 needs only 2 bits of alignment, so
+        * the top 6 bits will be used.
+        */
+       choose_random_kstack_offset(rdtsc() & 0xFF);

alloca() itself works in byte units and will round the allocation up 
to 8 bytes on x86-64 and to 4 bytes on x86-32 - that is what the 
comment's 'ignored bits' refers to, right?

Why is there a 0x3FF mask for the alloca() call but only a 0xFF mask 
on the RDTSC randomizing value? Shouldn't the two be synced up? Or was 
the intention to shift the RDTSC value to the left by 3 bits?
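
If the latter, a minimal (untested) sketch of that variant - note that 
a 0x7F mask keeps the shifted value within the 0x3FF alloca() mask, 
for 7 bits of effective entropy:

        /*
         * Shift the TSC entropy past the 3 alignment bits that
         * alloca() rounds away on x86-64, so the randomness lands
         * in bits 3-9 of the 0x3FF mask:
         */
        choose_random_kstack_offset((rdtsc() & 0x7F) << 3);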

3)

Finally, kstack_offset is a percpu variable:

  #ifdef CONFIG_HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
  ...
  DEFINE_PER_CPU(u32, kstack_offset);

This value is inherited across tasks on scheduling, and each new 
syscall mixes in a fresh RDTSC value to keep randomizing the offset.

Wouldn't it make sense to also mix new values into this across 
context-switch boundaries? A really inexpensive way would be to mix 
the kernel stack address into the offset, and maybe even the 
randomized t->stack_canary value?

This would further isolate the syscall kernel stack offsets of 
separate execution contexts from each other, should an attacker find a 
way to estimate or influence likely RDTSC values.
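
A minimal sketch of such mixing - untested, and assuming it runs 
somewhere in the context-switch path with 'next' being the incoming 
task (the exact hook point is left open):

        /* Mix per-task values into the percpu offset state: */
        u32 offset = this_cpu_read(kstack_offset);

        /* The kernel stack base differs per task: */
        offset ^= (u32)(unsigned long)task_stack_page(next);
#ifdef CONFIG_STACKPROTECTOR
        /* The stack canary is already a per-task random value: */
        offset ^= (u32)next->stack_canary;
#endif
        this_cpu_write(kstack_offset, offset);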

Thanks,

	Ingo


