Re: Coverity: emulator_leave_smm(): Error handling issues

On Thu, Dec 01, 2022, coverity-bot wrote:
> Hello!
> 
> This is an experimental semi-automated report about issues detected by
> Coverity from a scan of next-20221201 as part of the linux-next scan project:
> https://scan.coverity.com/projects/linux-next-weekly-scan
> 
> You're getting this email because you were associated with the identified
> lines of code (noted below) that were touched by commits:
> 
>   Wed Nov 9 12:31:18 2022 -0500
>     1d0da94cdafe ("KVM: x86: do not go through ctxt->ops when emulating rsm")
> 
> Coverity reported the following:
> 
> *** CID 1527763:  Error handling issues  (CHECKED_RETURN)
> arch/x86/kvm/smm.c:631 in emulator_leave_smm()
> 625     		cr4 = kvm_read_cr4(vcpu);
> 626     		if (cr4 & X86_CR4_PAE)
> 627     			kvm_set_cr4(vcpu, cr4 & ~X86_CR4_PAE);
> 628
> 629     		/* And finally go back to 32-bit mode.  */
> 630     		efer = 0;
> vvv     CID 1527763:  Error handling issues  (CHECKED_RETURN)
> vvv     Calling "kvm_set_msr" without checking return value (as is done elsewhere 5 out of 6 times).
> 631     		kvm_set_msr(vcpu, MSR_EFER, efer);
> 632     	}
> 633     #endif
> 634
> 635     	/*
> 636     	 * Give leave_smm() a chance to make ISA-specific changes to the vCPU
> 
> If this is a false positive, please let us know so we can mark it as

It's not a false positive per se, but absent a KVM bug the call can never fail.
Ditto for the kvm_set_cr{0,4}() calls above.  That said, I'm tempted to "fix"
these since we've had bugs related to this code in the past.  This doesn't seem
too ugly...

diff --git a/arch/x86/kvm/smm.c b/arch/x86/kvm/smm.c
index a9c1c2af8d94..621e39689bff 100644
--- a/arch/x86/kvm/smm.c
+++ b/arch/x86/kvm/smm.c
@@ -601,8 +601,9 @@ int emulator_leave_smm(struct x86_emulate_ctxt *ctxt)
 
                /* Zero CR4.PCIDE before CR0.PG.  */
                cr4 = kvm_read_cr4(vcpu);
-               if (cr4 & X86_CR4_PCIDE)
-                       kvm_set_cr4(vcpu, cr4 & ~X86_CR4_PCIDE);
+               if (cr4 & X86_CR4_PCIDE &&
+                   WARN_ON_ONCE(kvm_set_cr4(vcpu, cr4 & ~X86_CR4_PCIDE)))
+                       return X86EMUL_UNHANDLEABLE;
 
                /* A 32-bit code segment is required to clear EFER.LMA.  */
                memset(&cs_desc, 0, sizeof(cs_desc));
@@ -614,8 +615,9 @@ int emulator_leave_smm(struct x86_emulate_ctxt *ctxt)
 
        /* For the 64-bit case, this will clear EFER.LMA.  */
        cr0 = kvm_read_cr0(vcpu);
-       if (cr0 & X86_CR0_PE)
-               kvm_set_cr0(vcpu, cr0 & ~(X86_CR0_PG | X86_CR0_PE));
+       if (cr0 & X86_CR0_PE &&
+           WARN_ON_ONCE(kvm_set_cr0(vcpu, cr0 & ~(X86_CR0_PG | X86_CR0_PE))))
+               return X86EMUL_UNHANDLEABLE;
 
 #ifdef CONFIG_X86_64
        if (guest_cpuid_has(vcpu, X86_FEATURE_LM)) {
@@ -623,12 +625,14 @@ int emulator_leave_smm(struct x86_emulate_ctxt *ctxt)
 
                /* Clear CR4.PAE before clearing EFER.LME. */
                cr4 = kvm_read_cr4(vcpu);
-               if (cr4 & X86_CR4_PAE)
-                       kvm_set_cr4(vcpu, cr4 & ~X86_CR4_PAE);
+               if (cr4 & X86_CR4_PAE &&
+                   WARN_ON_ONCE(kvm_set_cr4(vcpu, cr4 & ~X86_CR4_PAE)))
+                       return X86EMUL_UNHANDLEABLE;
 
                /* And finally go back to 32-bit mode.  */
                efer = 0;
-               kvm_set_msr(vcpu, MSR_EFER, efer);
+               if (WARN_ON_ONCE(kvm_set_msr(vcpu, MSR_EFER, efer)))
+                       return X86EMUL_UNHANDLEABLE;
        }
 #endif
 
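As an aside on the pattern used above: WARN_ON_ONCE() evaluates to the truth
value of its condition, which is what lets the diff log a one-time warning and
bail out of emulation in the same if statement.  Below is a minimal user-space
sketch of that shape, purely for illustration; warn_once(), set_reg() and
leave_smm_example() are stand-ins I made up, not KVM or kernel code.

#include <stdbool.h>
#include <stdio.h>

#define X86EMUL_CONTINUE	0
#define X86EMUL_UNHANDLEABLE	1

/*
 * Stand-in for the kernel's WARN_ON_ONCE(): complain the first time the
 * condition is true, and evaluate to the condition so it can drive the
 * enclosing if ().
 */
#define warn_once(cond) ({						\
	static bool __warned;						\
	bool __c = (cond);						\
	if (__c && !__warned) {						\
		__warned = true;					\
		fprintf(stderr, "WARN: %s failed\n", #cond);		\
	}								\
	__c;								\
})

/* Hypothetical stand-in for kvm_set_cr4()/kvm_set_msr(): 0 on success. */
static int set_reg(unsigned long val)
{
	return val ? -1 : 0;	/* pretend non-zero values are rejected */
}

static int leave_smm_example(void)
{
	/* Same shape as the proposed diff: warn once, then bail out. */
	if (warn_once(set_reg(0x20)))
		return X86EMUL_UNHANDLEABLE;

	return X86EMUL_CONTINUE;
}

int main(void)
{
	printf("result: %d\n", leave_smm_example());
	return 0;
}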