On 04/16/2010 10:36 AM, Jeremy Fitzhardinge wrote:
On 04/15/2010 11:37 AM, Glauber Costa wrote:
In recent stress tests, it was found that pvclock-based clocks could seriously warp on SMP systems. Using Ingo's time-warp-test.c,
I could trigger a scenario as bad as 1.5mi warps a minute on some systems.
Is that "1.5 million"?
(to be fair, it wasn't that bad on most of them). Investigating further, I
found that these warps were caused by the offset-based calculation that
pvclock is built on.
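For readers who want the concrete formula: the offset-based calculation in question is, roughly, the guest-side pvclock read sketched below. This is a simplification - the field names follow the pvclock ABI, but the real code also loops on the 'version' field to get a consistent snapshot, which is omitted here.

#include <stdint.h>

/*
 * Simplified sketch of the guest-side pvclock calculation under discussion.
 * The real implementation also rechecks 'version' around the read.
 */
struct pvclock_vcpu_time_info {
	uint32_t version;
	uint32_t pad0;
	uint64_t tsc_timestamp;     /* host TSC value when the parameters were written */
	uint64_t system_time;       /* nanoseconds since boot at tsc_timestamp */
	uint32_t tsc_to_system_mul; /* TSC -> ns multiplier, 32.32 fixed point */
	int8_t   tsc_shift;         /* pre-shift applied to the TSC delta */
};

static inline uint64_t rdtsc(void)
{
	uint32_t lo, hi;
	asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
	return ((uint64_t)hi << 32) | lo;
}

/* time = system_time + scale(tsc_now - tsc_timestamp) */
static uint64_t pvclock_read_ns(const struct pvclock_vcpu_time_info *ti)
{
	uint64_t delta = rdtsc() - ti->tsc_timestamp;

	if (ti->tsc_shift >= 0)
		delta <<= ti->tsc_shift;
	else
		delta >>= -ti->tsc_shift;

	/* 64x32-bit multiply, keep the top 64 bits of the 96-bit product */
	return ti->system_time +
	       (uint64_t)(((unsigned __int128)delta * ti->tsc_to_system_mul) >> 32);
}

The key point for what follows is that tsc_timestamp and system_time are per-vCPU parameters, refreshed at different moments on different CPUs, so two vCPUs reading "simultaneously" can compute slightly different times.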
Is the problem that the TSCs start out of sync, or that they drift
relative to each other over time? Do the problems get worse
the longer the uptime? How large are the offsets we're talking about here?
This is one source of the problem, but the same thing happens at many
levels: the TSCs may start out of sync, drift between sockets, be badly
re-calibrated by the BIOS, and so on. The issue persists even if the TSCs
are perfectly in sync - the measurement of them is not.
So after reading TSC == 100,000 units at time A and then waiting 10 units, one
may read TSC == 100,010 +/- 5 units, because the code stream is not
perfectly serialized - nor can it be. There will always be some amount
of error unless everything runs in perfect lock-step, which only happens in a
simulator.
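As a quick illustration of that fuzziness (a toy example, not one of the measurements discussed above): timing back-to-back TSC reads on an ordinary x86 box gives deltas that bounce around, because rdtsc is not a serializing instruction and overlaps with the surrounding code stream.

#include <stdint.h>
#include <stdio.h>

static inline uint64_t rdtsc(void)
{
	uint32_t lo, hi;
	asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
	return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
	/*
	 * Back-to-back TSC reads: the observed delta is not a constant,
	 * because the two reads can overlap with surrounding instructions
	 * in the pipeline.
	 */
	for (int i = 0; i < 10; i++) {
		uint64_t a = rdtsc();
		uint64_t b = rdtsc();
		printf("delta = %llu cycles\n", (unsigned long long)(b - a));
	}
	return 0;
}

Fencing the read (lfence, or rdtscp) narrows the window but does not reduce it to zero.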
This inherent measurement error can cause apparent time to go backwards
when the clock is read simultaneously on multiple CPUs, or when
re-calibrating against an external clocksource. Combined with the other
factors above, it can be large enough to be noticed. The KVM
clock is particularly exposed to the problem because the TSC is measured
and recalibrated for each virtual CPU whenever there is a physical CPU
switch, so micro-adjustments forwards and backwards may occur during the
recalibration - and appear as a real backwards time warp to the guest.
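That is exactly the pattern a test like time-warp-test.c hunts for: two CPUs sample the clock around the same instant and the later sample comes out smaller. A stripped-down detector in the same spirit might look like the sketch below (an illustration only, not Ingo's actual test; build with -pthread).

#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* Shared "newest timestamp seen by anyone" and a warp counter. */
static _Atomic uint64_t last_ns;
static _Atomic uint64_t warps;

static uint64_t now_ns(void)
{
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

static void *worker(void *arg)
{
	(void)arg;
	for (long i = 0; i < 10000000; i++) {
		uint64_t prev = atomic_load(&last_ns);
		uint64_t t = now_ns();

		/*
		 * If our fresh reading is older than a value already
		 * published by another CPU, time appeared to go backwards.
		 */
		if (t < prev)
			atomic_fetch_add(&warps, 1);

		/* Publish our reading if it is the newest one so far. */
		while (t > prev &&
		       !atomic_compare_exchange_weak(&last_ns, &prev, t))
			;
	}
	return NULL;
}

int main(void)
{
	pthread_t th[4];
	for (int i = 0; i < 4; i++)
		pthread_create(&th[i], NULL, worker, NULL);
	for (int i = 0; i < 4; i++)
		pthread_join(th[i], NULL);
	printf("observed warps: %llu\n", (unsigned long long)atomic_load(&warps));
	return 0;
}

On an affected guest the warp counter should climb quickly; with a well-behaved clock it stays at zero.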
I have some patches to fix that issue, but the SMP problem remains to be
fixed - and is addressed quite thoroughly by this patch.
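For readers who have not looked at the patch itself: conceptually, cross-CPU monotonicity can be enforced by keeping one globally shared "last value handed out" and clamping every new reading against it. A minimal sketch of that idea - names are made up for the example, and this is not a claim about the patch's exact implementation:

#include <stdatomic.h>
#include <stdint.h>

/* Globally shared last value returned to any reader. */
static _Atomic uint64_t last_value;

uint64_t monotonic_clock_read(uint64_t raw_ns /* per-CPU pvclock reading */)
{
	uint64_t last = atomic_load(&last_value);

	for (;;) {
		if (raw_ns <= last)
			return last;	/* clamp: never step backwards */
		/* Try to publish our newer reading; retry if we lost the race. */
		if (atomic_compare_exchange_weak(&last_value, &last, raw_ns))
			return raw_ns;
	}
}

The clamp trades a little accuracy (readers may briefly see the clock stand still) for the guarantee that no CPU ever reports time running backwards.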
Zach