On Mon, Mar 30, 2020 at 12:25:36PM +0100, Mark Rutland wrote:
> On Tue, Mar 24, 2020 at 01:32:29PM -0700, Kees Cook wrote:
> > +/*
> > + * Do not use this anywhere else in the kernel. This is used here because
> > + * it provides an arch-agnostic way to grow the stack with correct
> > + * alignment. Also, since this use is being explicitly masked to a max of
> > + * 10 bits, stack-clash style attacks are unlikely. For more details see
> > + * "VLAs" in Documentation/process/deprecated.rst
> > + */
> > +void *__builtin_alloca(size_t size);
> > +
> > +#define add_random_kstack_offset() do {					\
> > +	if (static_branch_maybe(CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT,	\
> > +				&randomize_kstack_offset)) {		\
> > +		u32 offset = this_cpu_read(kstack_offset);		\
> > +		char *ptr = __builtin_alloca(offset & 0x3FF);		\
> > +		asm volatile("" : "=m"(*ptr));				\
>
> Is this asm() a homebrew OPTIMIZER_HIDE_VAR(*ptr)? If the asm
> constraints generate better code, could we add those as alternative
> constraints in OPTIMIZER_HIDE_VAR()?

Er, no, sorry, not the same. I disassembled the wrong binary. :)

With asm volatile("" : "=m"(*ptr)):

ffffffff810038bc:	48 8d 44 24 0f	lea    0xf(%rsp),%rax
ffffffff810038c1:	48 83 e0 f0	and    $0xfffffffffffffff0,%rax

With __asm__ ("" : "=r" (var) : "0" (var)):

ffffffff810038bc:	48 8d 54 24 0f	lea    0xf(%rsp),%rdx
ffffffff810038c1:	48 83 e2 f0	and    $0xfffffffffffffff0,%rdx
ffffffff810038c5:	0f b6 02	movzbl (%rdx),%eax
ffffffff810038c8:	88 02		mov    %al,(%rdx)

It looks like OPTIMIZER_HIDE_VAR() is basically just:

	var = var;

In the former case, we avoid the write and retain the allocation. So I
don't think OPTIMIZER_HIDE_VAR() should be used here, nor should
OPTIMIZER_HIDE_VAR() be changed to remove the "0" (var) bit.

-- 
Kees Cook
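
For anyone who wants to reproduce the difference outside the kernel, here is
a minimal userspace sketch of the two barrier styles compared above. The
grow_stack() function and its offset parameter are made-up names for
illustration only; this is not the patch's code, just the same constraint
patterns applied to an alloca'd pointer:

	#include <stddef.h>

	void grow_stack(size_t offset)
	{
		/* Same shape as the patch: variable-size stack growth,
		 * masked to 10 bits. */
		char *ptr = __builtin_alloca(offset & 0x3FF);

		/* Patch style: *ptr is declared as an asm memory output, so
		 * the compiler must keep the allocation, but the empty asm
		 * body emits no load or store. */
		asm volatile("" : "=m"(*ptr));

		/* OPTIMIZER_HIDE_VAR(*ptr) style: the input is tied to the
		 * output register, so the compiler materializes *ptr in a
		 * register and stores it back (the extra movzbl/mov pair in
		 * the listings above). */
		__asm__("" : "=r"(*ptr) : "0"(*ptr));
	}

	int main(void)
	{
		grow_stack(42);
		return 0;
	}

Building this with gcc -O2 and disassembling it should show the first
barrier leaving only the stack adjustment, while the second adds the byte
load and store, matching the two listings above.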