On 9/25/20 3:18 PM, Arvind Sankar wrote:
> On Fri, Sep 25, 2020 at 10:56:43AM -0400, Ross Philipson wrote:
>> On 9/24/20 1:38 PM, Arvind Sankar wrote:
>>> On Thu, Sep 24, 2020 at 10:58:35AM -0400, Ross Philipson wrote:
>>>
>>>> diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
>>>> index 97d37f0..42043bf 100644
>>>> --- a/arch/x86/boot/compressed/head_64.S
>>>> +++ b/arch/x86/boot/compressed/head_64.S
>>>> @@ -279,6 +279,21 @@ SYM_INNER_LABEL(efi32_pe_stub_entry, SYM_L_LOCAL)
>>>>  SYM_FUNC_END(efi32_stub_entry)
>>>>  #endif
>>>>
>>>> +#ifdef CONFIG_SECURE_LAUNCH
>>>> +SYM_FUNC_START(sl_stub_entry)
>>>> +	/*
>>>> +	 * On entry, %ebx has the entry abs offset to sl_stub_entry. To
>>>> +	 * find the beginning of where we are loaded, sub off from the
>>>> +	 * beginning.
>>>> +	 */
>>>
>>> This requirement should be added to the documentation. Is it
>>> necessary, or can this stub just figure out the address the same way
>>> as the other 32-bit entry points, using the scratch space in
>>> bootparams as a little stack?
>>
>> It is based on the state of the BSP when TXT vectors to the measured
>> launch environment. It is documented in the TXT spec and the SDMs.
>>
>
> I think it would be useful to add to the x86 boot documentation how
> exactly this new entry point is called, even if it's just adding a link
> to some section of those specs. The doc should also say that an
> mle_header_offset of 0 means the kernel isn't secure launch enabled.

Ok, will do.

>>>
>>> For the 32-bit assembler code that's being added, tip/master now has
>>> changes that prevent the compressed kernel from having any runtime
>>> relocations. You'll need to revise some of the code and the data
>>> structures' initial values to avoid creating relocations.
>>
>> Could you elaborate on this some more? I am not sure I see places in
>> the secure launch asm that would be creating relocations like this.
>>
>> Thank you,
>> Ross
>>
>
> You should see them if you do
>
> 	readelf -r arch/x86/boot/compressed/vmlinux
>
> In terms of the code, things like
>
> 	addl	%ebx, (sl_gdt_desc + 2)(%ebx)
>
> will create a relocation, because the linker interprets this as wanting
> the runtime address of sl_gdt_desc, rather than just its offset from
> startup_32.
>
> https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/tree/arch/x86/boot/compressed/head_64.S#n48
>
> has a comment with some explanation and a macro that the 32-bit code in
> startup_32 uses to avoid creating relocations.
>
> Since the SL code is in a different assembler file (and a different
> section), you can't directly use the same macro. I would suggest
> getting rid of sl_stub_entry and entering directly at sl_stub; the code
> in sl_stub.S can then use sl_stub for the base address, defining the
> rva() macro there as
>
> 	#define rva(X) ((X) - sl_stub)
>
> You will also need to avoid initializing data with symbol addresses.
>
> 	.long mle_header
> 	.long sl_stub_entry
> 	.long sl_gdt
>
> will all create relocations. The third one is easy: just replace it
> with sl_gdt - sl_gdt_desc and initialize it at runtime with
>
> 	leal	rva(sl_gdt_desc)(%ebx), %eax
> 	addl	%eax, 2(%eax)
> 	lgdt	(%eax)
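>
> Concretely, the descriptor and table can then be written using nothing
> but differences between labels in the same section, something like this
> untested sketch (the sl_gdt/sl_gdt_end labels and the segment values
> are just illustrative; adjust to whatever your patch actually defines):
>
> 	sl_gdt_desc:
> 		.word	sl_gdt_end - sl_gdt - 1	/* GDT limit */
> 		.long	sl_gdt - sl_gdt_desc	/* offset for now; the addl
> 						 * above turns it into the
> 						 * runtime address just
> 						 * before the lgdt */
>
> 		.balign	8
> 	sl_gdt:
> 		.quad	0x0000000000000000	/* NULL descriptor */
> 		.quad	0x00cf9a000000ffff	/* 32-bit flat code */
> 		.quad	0x00cf92000000ffff	/* 32-bit flat data */
> 	sl_gdt_end:
>
> Since both initializers are link-time constants, readelf -r should then
> show no relocations against them.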
>
> The other two are more messy; unfortunately there is no easy way to
> tell the linker what we want here. The other entry point addresses (for
> the EFI stub) are populated in a post-processing step after the
> compressed kernel has been linked, so we could teach that step to also
> update kernel_info.
>
> Without that, for kernel_info, you could change it to store the offset
> of the MLE header from kernel_info itself, instead of from the start of
> the image.
>
> For the MLE header, it could be moved to .head.text in head_64.S and
> initialized with
>
> 	.long rva(sl_stub)
>
> This will also let it be placed at a fixed offset from startup_32, so
> that kernel_info can just be populated with a constant.

Thank you for the detailed reply. I am going to start digging into this
now.

Ross