On Wed, Jan 12, 2022, Li RongQing wrote:
> After adding support for paravirtualized TLB shootdowns,
> steal_time.preempted includes not only KVM_VCPU_PREEMPTED but also
> KVM_VCPU_FLUSH_TLB, so kvm_vcpu_is_preempted() should test only
> KVM_VCPU_PREEMPTED.
>
> Fixes: 858a43aae2367 ("KVM: X86: use paravirtualized TLB Shootdown")
> Signed-off-by: Li RongQing <lirongqing@xxxxxxxxx>
> ---
>  arch/x86/kernel/kvm.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 59abbda..a9202d9 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -1025,8 +1025,8 @@ asm(
>  ".type __raw_callee_save___kvm_vcpu_is_preempted, @function;"
>  "__raw_callee_save___kvm_vcpu_is_preempted:"
>  "movq __per_cpu_offset(,%rdi,8), %rax;"
> -"cmpb $0, " __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rax);"
> -"setne %al;"
> +"movb " __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rax), %al;"
> +"andb $" __stringify(KVM_VCPU_PREEMPTED) ", %al;"

Eww, the existing code is sketchy.  It relies on the compiler storing
_Bool/bool in a single byte: %rax may be non-zero from the
__per_cpu_offset() load, and modifying %al doesn't zero %rax[63:8].  I
doubt gcc or clang use anything but a single byte for bool on x86-64,
but "andl" is just as cheap, so I don't see any harm in being paranoid.

>  "ret;"
>  ".size __raw_callee_save___kvm_vcpu_is_preempted, .-__raw_callee_save___kvm_vcpu_is_preempted;"
>  ".popsection");
> --
> 2.9.4
>
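To be concrete, the paranoid variant would be s/andb/andl/ with the
full-width destination register, i.e. something like (untested, my
sketch rather than anything from the patch):

  "movb " __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rax), %al;"
  "andl $" __stringify(KVM_VCPU_PREEMPTED) ", %eax;"

On x86-64, a write to a 32-bit register zero-extends into bits 63:32,
and ANDing with the single-bit KVM_VCPU_PREEMPTED mask clears bits 31:1,
so %rax ends up as a clean 0/1 regardless of what garbage the
__per_cpu_offset() load left in the upper bytes.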
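And for anyone mapping the asm back to C: modulo the callee-save calling
convention, the fixed sequence computes roughly the following (a sketch
for illustration, not necessarily the in-tree function):

  __visible bool __kvm_vcpu_is_preempted(long cpu)
  {
  	struct kvm_steal_time *src = &per_cpu(steal_time, cpu);

  	/*
  	 * Mask rather than test for non-zero: with PV TLB shootdowns,
  	 * preempted may also carry KVM_VCPU_FLUSH_TLB.
  	 */
  	return !!(src->preempted & KVM_VCPU_PREEMPTED);
  }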