On 13/06/2016 12:33, Alan Jenkins wrote:
> I did some more thinking since. Page-long justifications seem
> sub-optimal. I agree with your point, and I'm looking into it. (Though
> I think it's around 10 microseconds. 40 cpu cycles sounds very short :-P).

Yeah. It's actually less than 10 (more like 4-7), but still I was
3 orders of magnitude off. :)

> I thought it would be most natural to call wait_lapic_expire() before
> returning to userspace. Then we avoid returning with
> expired_tscdeadline set, and userspace doesn't see the irq injected
> before the deadline.
>
> So I checked the pre-conditions for wait_lapic_expire() and tripped over
> another issue.
>
> I don't think it likes tsc_catchup=1, even where it's called now.

I agree that whenever the TSC becomes unstable the pre-expiration
feature should be turned off (and the hrtimer canceled and reset,
similar to kvm_set_lapic_tscdeadline_msr).

> - Don't support it? Disable lapic_timer_advance_ns on systems without
> perfectly synchronized TSCs? Has no-one noticed because no such systems
> have been configured? Or does it escape their validation tests and this
> change could be perceived as a regression?

This feature is meant for real-time systems, so you can expect much more
than just synchronized TSCs :) (for example you can expect SMIs to take
a very short and bounded time).

This should be a separate patch though. Are you going to post v2 for
this one?

Paolo

> - Hack tsc_catchup to take the hrtimer expiry into account? Disable
> busy-waits only for the old cpus lacking X86_FEATURE_CONSTANT_TSC?
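
A minimal stand-alone sketch of the idea discussed above, dropping the
timer advance and falling back to the exact deadline when the TSC is
unstable, might look like the following. The names vcpu_timer,
rearm_deadline() and tsc_is_stable are hypothetical illustrations;
KVM's real implementation lives around wait_lapic_expire(),
lapic_timer_advance_ns, and the per-vCPU hrtimer that
kvm_set_lapic_tscdeadline_msr() cancels and re-arms.

/*
 * Stand-alone sketch (not kernel code): illustrates dropping the
 * pre-expiration advance when the TSC can no longer be trusted.
 * vcpu_timer, rearm_deadline() and tsc_is_stable are hypothetical.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct vcpu_timer {
	uint64_t deadline_ns;	/* guest-requested deadline, in ns */
	uint64_t advance_ns;	/* how early to fire to hide injection latency */
	bool     armed;
};

/* Hypothetical helper: stands in for canceling and re-programming the
 * hrtimer, similar in spirit to kvm_set_lapic_tscdeadline_msr(). */
static void rearm_deadline(struct vcpu_timer *t, uint64_t expire_ns)
{
	t->armed = true;
	printf("timer armed to fire at %llu ns\n",
	       (unsigned long long)expire_ns);
}

static void program_timer(struct vcpu_timer *t, bool tsc_is_stable)
{
	uint64_t expire = t->deadline_ns;

	/* Only fire early (and busy-wait, wait_lapic_expire-style) when
	 * the TSC is stable; otherwise use the exact deadline. */
	if (tsc_is_stable && t->advance_ns && expire > t->advance_ns)
		expire -= t->advance_ns;

	rearm_deadline(t, expire);
}

int main(void)
{
	struct vcpu_timer t = { .deadline_ns = 1000000, .advance_ns = 7000 };

	program_timer(&t, true);	/* stable TSC: fire ~7 us early */
	program_timer(&t, false);	/* unstable TSC: fire at the deadline */
	return 0;
}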