From: Rong Tao <rongtao@xxxxxxxx>

Code indentation should use tabs where possible, and a '*' is missing
from a block comment.

Signed-off-by: Rong Tao <rongtao@xxxxxxxx>
---
v2: KVM: VMX: for case-insensitive searches
v1: https://lore.kernel.org/lkml/tencent_768ACEEBE1E803E29F4191906956D065B806@xxxxxx/
---
 arch/x86/kvm/vmx/vmenter.S | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
index 8477d8bdd69c..f09e3aaab102 100644
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -229,7 +229,7 @@ SYM_INNER_LABEL(vmx_vmexit, SYM_L_GLOBAL)
 	 * eIBRS has its own protection against poisoned RSB, so it doesn't
 	 * need the RSB filling sequence. But it does need to be enabled, and a
 	 * single call to retire, before the first unbalanced RET.
-         */
+	 */
 	FILL_RETURN_BUFFER %_ASM_CX, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT,\
 			   X86_FEATURE_RSB_VMEXIT_LITE
 
@@ -273,7 +273,7 @@ SYM_FUNC_END(__vmx_vcpu_run)
  * vmread_error_trampoline - Trampoline from inline asm to vmread_error()
  * @field: VMCS field encoding that failed
  * @fault: %true if the VMREAD faulted, %false if it failed
-
+ *
  * Save and restore volatile registers across a call to vmread_error(). Note,
  * all parameters are passed on the stack.
  */
-- 
2.39.0