On Sun, Nov 12, 2017 at 04:33:24PM -0800, Wanpeng Li wrote:
> +static void kvm_flush_tlb_others(const struct cpumask *cpumask,
> +			const struct flush_tlb_info *info)
> +{
> +	u8 state;
> +	int cpu;
> +	struct kvm_steal_time *src;
> +	struct cpumask *flushmask = this_cpu_cpumask_var_ptr(__pv_tlb_mask);
> +
> +	if (unlikely(!flushmask))
> +		return;
> +
> +	cpumask_copy(flushmask, cpumask);
> +	/*
> +	 * We have to call flush only on online vCPUs. And
> +	 * queue flush_on_enter for pre-empted vCPUs
> +	 */
> +	for_each_cpu(cpu, cpumask) {

Should this not iterate flushmask? It's far too early to think, so I'm
not sure this is an actual problem, but it does seem weird.

> +		src = &per_cpu(steal_time, cpu);
> +		state = READ_ONCE(src->preempted);
> +		if ((state & KVM_VCPU_PREEMPTED)) {
> +			if (try_cmpxchg(&src->preempted, &state,
> +					state | KVM_VCPU_SHOULD_FLUSH))
> +				__cpumask_clear_cpu(cpu, flushmask);
> +		}
> +	}
> +
> +	native_flush_tlb_others(flushmask, info);
> +}