On Thu, 2023-06-22 at 19:21 +0100, Matthew Wilcox wrote:
> On Mon, Jun 12, 2023 at 05:10:42PM -0700, Rick Edgecombe wrote:
> > +++ b/include/linux/mm.h
> > @@ -342,7 +342,36 @@ extern unsigned int kobjsize(const void *objp);
> >  #endif /* CONFIG_ARCH_HAS_PKEYS */
> >  
> >  #ifdef CONFIG_X86_USER_SHADOW_STACK
> > -# define VM_SHADOW_STACK VM_HIGH_ARCH_5 /* Should not be set with VM_SHARED */
> > +/*
> > + * This flag should not be set with VM_SHARED because of lack of
> > + * support in core mm. It will also get a guard page. This helps
> > + * userspace protect itself from attacks. The reasoning is as follows:
> > + *
> > + * The shadow stack pointer (SSP) is moved by CALL, RET, and INCSSPQ.
> > + * The INCSSP instruction can increment the shadow stack pointer. It
> > + * is the shadow stack analog of an instruction like:
> > + *
> > + *   addq $0x80, %rsp
> > + *
> > + * However, there is one important difference between an ADD on %rsp
> > + * and INCSSP. In addition to modifying SSP, INCSSP also reads from the
> > + * memory of the first and last elements that were "popped". It can be
> > + * thought of as acting like this:
> > + *
> > + * READ_ONCE(ssp);       // read+discard top element on stack
> > + * ssp += nr_to_pop * 8; // move the shadow stack
> > + * READ_ONCE(ssp-8);     // read+discard last popped stack element
> > + *
> > + * The maximum distance INCSSP can move the SSP is 2040 bytes, before
> > + * it would read the memory. Therefore a single page gap will be enough
> > + * to prevent any operation from shifting the SSP to an adjacent stack,
> > + * since it would have to land in the gap at least once, causing a
> > + * fault.
> > + *
> > + * Prevent using INCSSP to move the SSP between shadow stacks by
> > + * having a PAGE_SIZE guard gap.
> > + */
> > +# define VM_SHADOW_STACK VM_HIGH_ARCH_5
> >  #else
> >  # define VM_SHADOW_STACK VM_NONE
> >  #endif
> 
> This is a lot of very x86-specific language in a generic header file.
> I'm sure there's a better place for all this text.

Yes, I couldn't find another place for it. This was the reasoning:
https://lore.kernel.org/lkml/07deaffc10b1b68721bbbce370e145d8fec2a494.camel@xxxxxxxxx/

Did you have any particular place in mind?
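
FWIW, the 2040-byte argument is easy to sanity-check from userspace. Below
is a quick sketch (purely illustrative, not kernel code; the constants and
helper names are made up for the demo) that models the "read the first and
last popped element" semantics from the comment, and checks that a PAGE_SIZE
gap can neither be jumped over in a single INCSSP nor crossed without the
gap being read:

#include <assert.h>
#include <stdio.h>

#define PAGE_SIZE	4096UL
#define MAX_INCSSP	2040UL	/* 255 entries * 8 bytes, per the comment */

/* The guard gap occupies [gap, gap + PAGE_SIZE) */
static int read_hits_gap(unsigned long addr, unsigned long gap)
{
	return addr >= gap && addr < gap + PAGE_SIZE;
}

/* Model one INCSSP: it reads the first and last "popped" elements. */
static int incssp_faults(unsigned long ssp, unsigned long bytes,
			 unsigned long gap)
{
	return read_hits_gap(ssp, gap) ||
	       read_hits_gap(ssp + bytes - 8, gap);
}

int main(void)
{
	unsigned long gap = 10 * PAGE_SIZE;	/* arbitrary placement */
	unsigned long ssp, bytes;

	/*
	 * From any SSP below the gap, no single INCSSP can land at or
	 * beyond the far side of the gap, because MAX_INCSSP < PAGE_SIZE.
	 */
	for (ssp = gap - MAX_INCSSP; ssp < gap; ssp += 8)
		for (bytes = 8; bytes <= MAX_INCSSP; bytes += 8)
			assert(ssp + bytes < gap + PAGE_SIZE);

	/* And once the SSP is inside the gap, the next INCSSP reads it. */
	for (ssp = gap; ssp < gap + PAGE_SIZE; ssp += 8)
		assert(incssp_faults(ssp, 8, gap));

	printf("a single-page gap always gets touched\n");
	return 0;
}

The key inequality is just MAX_INCSSP (2040) < PAGE_SIZE (4096): one
increment can't clear the far edge of the gap, and any SSP that lands
inside the gap faults on the very next shadow stack access.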