On 5/29/24 03:47, Nikolay Borisov wrote:
>> diff --git a/arch/x86/kernel/relocate_kernel_64.S b/arch/x86/kernel/relocate_kernel_64.S
>> index 56cab1bb25f5..085eef5c3904 100644
>> --- a/arch/x86/kernel/relocate_kernel_64.S
>> +++ b/arch/x86/kernel/relocate_kernel_64.S
>> @@ -148,9 +148,10 @@ SYM_CODE_START_LOCAL_NOALIGN(identity_mapped)
>>  	 */
>>  	movl	$X86_CR4_PAE, %eax
>>  	testq	$X86_CR4_LA57, %r13
>> -	jz	1f
>> +	jz	.Lno_la57
>>  	orl	$X86_CR4_LA57, %eax
>> -1:
>> +.Lno_la57:
>> +
>>  	movq	%rax, %cr4
>>  
>>  	jmp	1f
> That jmp 1f becomes redundant now, as it simply jumps to the line immediately below it.
Uh... am I the only person to notice that ALL that is needed here is:

	andl	$(X86_CR4_PAE|X86_CR4_LA57), %r13d
	movq	%r13, %rax

... since %r13 is dead afterwards, and PAE *will* have been set in %r13
already?
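
Spelled out, the whole sequence would then reduce to something like this
(a sketch only, assuming %r13 holds the CR4 image saved earlier and is
not used again; the final write to %cr4 is the existing instruction from
the hunk above):

	/*
	 * Keep only PAE and LA57 from the saved CR4 value.  PAE is
	 * always set in it, and LA57 carries over iff 5-level paging
	 * was enabled, so the conditional branch goes away entirely.
	 */
	andl	$(X86_CR4_PAE|X86_CR4_LA57), %r13d
	movq	%r13, %rax
	movq	%rax, %cr4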
I don't believe that this specific jmp is actually needed -- there are
several more synchronizing jumps later -- but it doesn't hurt.
However, if the point of the exercise is improved readability, it might be
worthwhile to encapsulate the "jmp 1f; 1:" pair as a macro, e.g. "SYNC_CODE".
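
Such a macro might look something like this (a sketch, not existing
kernel code; \@ is the GNU assembler's per-expansion counter, used here
so each expansion gets a unique local label rather than a numeric 1:
that could shadow a nearby one):

	/*
	 * Synchronizing jump: jump to the next instruction to
	 * resynchronize the instruction stream after a mode or
	 * paging change.
	 */
	.macro SYNC_CODE
		jmp	.Lsync_code_\@
	.Lsync_code_\@:
	.endm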
-hpa