2017-05-11 20:10 GMT+08:00 Paolo Bonzini <pbonzini@xxxxxxxxxx>:
>
>
> On 11/05/2017 14:00, Wanpeng Li wrote:
>> From: Wanpeng Li <wanpeng.li@xxxxxxxxxxx>
>>
>>  BUG: using __this_cpu_read() in preemptible [00000000] code: qemu-system-x86/2809
>>  caller is __this_cpu_preempt_check+0x13/0x20
>>  CPU: 2 PID: 2809 Comm: qemu-system-x86 Not tainted 4.11.0+ #13
>>  Call Trace:
>>   dump_stack+0x99/0xce
>>   check_preemption_disabled+0xf5/0x100
>>   __this_cpu_preempt_check+0x13/0x20
>>   get_kvmclock_ns+0x6f/0x110 [kvm]
>>   get_time_ref_counter+0x5d/0x80 [kvm]
>>   kvm_hv_process_stimers+0x2a1/0x8a0 [kvm]
>>   ? kvm_hv_process_stimers+0x2a1/0x8a0 [kvm]
>>   ? kvm_arch_vcpu_ioctl_run+0xac9/0x1ce0 [kvm]
>>   kvm_arch_vcpu_ioctl_run+0x5bf/0x1ce0 [kvm]
>>   kvm_vcpu_ioctl+0x384/0x7b0 [kvm]
>>   ? kvm_vcpu_ioctl+0x384/0x7b0 [kvm]
>>   ? __fget+0xf3/0x210
>>   do_vfs_ioctl+0xa4/0x700
>>   ? __fget+0x114/0x210
>>   SyS_ioctl+0x79/0x90
>>   entry_SYSCALL_64_fastpath+0x23/0xc2
>>  RIP: 0033:0x7f9d164ed357
>>  RSP: 002b:00007f9d0f6768f8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
>>  RAX: ffffffffffffffda RBX: ffffffffa64d53c3 RCX: 00007f9d164ed357
>>  RDX: 0000000000000000 RSI: 000000000000ae80 RDI: 000000000000000d
>>  RBP: ffffbb260856bf88 R08: 0000556b2a13eeb0 R09: 0000000000000000
>>  R10: 00007f9d080000c8 R11: 0000000000000246 R12: 0000000000000000
>>  R13: 00007f9d1853d000 R14: 0000000000000000 R15: 000000000000ae80
>>   ? __this_cpu_preempt_check+0x13/0x20
>>
>> This can be reproduced by running kvm-unit-tests/hyperv_stimer.flat with
>> CONFIG_PREEMPT and CONFIG_DEBUG_PREEMPT enabled.
>>
>> Safe access to per-CPU data requires a couple of constraints: the thread
>> working with the data cannot be preempted and it cannot be migrated while
>> it manipulates per-CPU variables. If the thread is preempted, the thread
>> that replaces it could try to work with the same variables; migration to
>> another CPU could also cause confusion. However, preemption is not
>> disabled when the host per-CPU tsc rate is read to calculate the current
>> kvmclock timestamp.
>>
>> This patch fixes it by holding the pvclock_gtod_sync_lock while pvclock's
>> time scale is calculated, in order to disable preemption around the host
>> per-CPU tsc rate read.
>>
>> Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
>> Cc: Radim Krčmář <rkrcmar@xxxxxxxxxx>
>> Signed-off-by: Wanpeng Li <wanpeng.li@xxxxxxxxxxx>
>> ---
>>  arch/x86/kvm/x86.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> index b54125b..8008d56 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -1772,11 +1772,11 @@ u64 get_kvmclock_ns(struct kvm *kvm)
>>
>>         hv_clock.tsc_timestamp = ka->master_cycle_now;
>>         hv_clock.system_time = ka->master_kernel_ns + ka->kvmclock_offset;
>> -       spin_unlock(&ka->pvclock_gtod_sync_lock);
>>
>>         kvm_get_time_scale(NSEC_PER_SEC, __this_cpu_read(cpu_tsc_khz) * 1000LL,
>>                            &hv_clock.tsc_shift,
>>                            &hv_clock.tsc_to_system_mul);
>> +       spin_unlock(&ka->pvclock_gtod_sync_lock);
>>         return __pvclock_read_cycles(&hv_clock, rdtsc());
>>  }
>>
>>
>
> This would not be enough for PREEMPT_RT. You need to use
> get_cpu/put_cpu (including __pvclock_read_cycles in the non-preemptable
> section).

Actually the splat is for __this_cpu_read(cpu_tsc_khz), so I just protected
that read.

Regards,
Wanpeng Li
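
A rough sketch of the get_cpu()/put_cpu() variant suggested above might look
like the code below. This is only an illustration, not the applied patch: the
early use_master_clock return at the top of get_kvmclock_ns() is assumed from
the surrounding context of the diff and may not match the exact tree it
applies to.

u64 get_kvmclock_ns(struct kvm *kvm)
{
        struct kvm_arch *ka = &kvm->arch;
        struct pvclock_vcpu_time_info hv_clock;
        u64 ret;

        spin_lock(&ka->pvclock_gtod_sync_lock);
        if (!ka->use_master_clock) {
                spin_unlock(&ka->pvclock_gtod_sync_lock);
                return ktime_get_boot_ns() + ka->kvmclock_offset;
        }

        hv_clock.tsc_timestamp = ka->master_cycle_now;
        hv_clock.system_time = ka->master_kernel_ns + ka->kvmclock_offset;
        spin_unlock(&ka->pvclock_gtod_sync_lock);

        /*
         * Read the per-CPU TSC rate and the TSC itself on the same CPU,
         * without being preempted in between.  get_cpu() disables
         * preemption explicitly, which also holds on PREEMPT_RT where
         * spin_lock() does not.
         */
        get_cpu();

        kvm_get_time_scale(NSEC_PER_SEC, __this_cpu_read(cpu_tsc_khz) * 1000LL,
                           &hv_clock.tsc_shift,
                           &hv_clock.tsc_to_system_mul);
        ret = __pvclock_read_cycles(&hv_clock, rdtsc());

        put_cpu();

        return ret;
}

The point of the get_cpu()/put_cpu() pair is that it keeps the whole
calculation, including __pvclock_read_cycles(), in a non-preemptible region
on every kernel configuration; merely moving kvm_get_time_scale() back under
pvclock_gtod_sync_lock silences the splat with CONFIG_PREEMPT, but on
PREEMPT_RT the spinlock becomes a sleeping lock and no longer guarantees that
__this_cpu_read(cpu_tsc_khz) and rdtsc() run on the same CPU.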