On 18.10.2010 19:54, Steven Rostedt wrote:
> On Mon, 2010-10-18 at 17:35 +0200, Jan Kiszka wrote:
>> On 18.10.2010 17:24, Steven Rostedt wrote:
>>> On Mon, 2010-10-18 at 17:08 +0200, Jan Kiszka wrote:
>>>
>>>>> Just to summarize:
>>>>> - on real HW with rtai-patch the issue is caused by rtai
>>>>> - the vanilla kernel used with vmware has some other issues
>>>>
>>>> I missed the vanilla part: Can you reproduce under QEMU/KVM?
>>>> Debugging the guest would be easier then, and it's easier to trace
>>>> what that hypervisor does. Maybe there is a subtle race in the test
>>>> code that is exposed by the timing that virtualization implies.
>>>
>>> Note, there was a bug fixed due to pv ops and kvm clock being traced:
>>>
>>> http://lkml.org/lkml/2010/9/22/455
>>>
>>> The two fixes are next in that thread.
>>
>> Alternatively, this should avoid running into the code paths:
>>
>> qemu-system-x86_64 [...] -cpu kvm64,-kvmclock
>>
>> (i.e., claim that we don't support kvmclock)
>
> Wouldn't that cause a slowdown in performance?

If you have a stable host TSC, the guest will happily use it. If not,
it will fall back to hpet or other, more "heavy" clocksources. Still,
you won't notice the difference under a moderate workload.

> Also, does this keep from using pvclock too?

On KVM, kvmclock is the one and only pvclock provider.

Jan
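
For reference, a quick way to confirm what the guest actually ended up
using after masking kvmclock is the clocksource sysfs interface. This
is only a sketch; the paths below are the standard ones exposed by any
reasonably recent kernel, nothing specific to this setup:

  # run inside the guest: list the registered clocksources and the one in use
  cat /sys/devices/system/clocksource/clocksource0/available_clocksource
  cat /sys/devices/system/clocksource/clocksource0/current_clocksource

With -cpu kvm64,-kvmclock on the qemu command line, "kvm-clock" should
no longer appear in the available list, and the guest should pick "tsc"
or fall back to "hpet"/"acpi_pm" instead.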