Re: [PATCH] Documentation: KVM: Describe guest TSC scaling in migration algorithm

On Wed, Mar 23, 2022 at 12:35:17PM +0000, David Woodhouse wrote:
> On Tue, 2022-03-22 at 21:53 +0000, Oliver Upton wrote:
> > But what happens to CLOCK_MONOTONIC in this case? We are still accepting
> > the fact that live migrations destroy CLOCK_MONOTONIC if we directly
> > advance the guest TSCs to account for elapsed time. The definition of
> > CLOCK_MONOTONIC is that the clock does not count while the system is
> > suspended. From the viewpoint of the guest, a live migration appears to
> > be a forced suspend operation at an arbitrary instruction boundary.
> > There is no realistic way for the guest to give the illusion that
> > MONOTONIC has stopped without help from the hypervisor.
> 
> I'm a little lost there. CLOCK_MONOTONIC is *permitted* to stop when
> the guest is suspended, but it's not *mandatory*, surely?
> 
> I can buy your assertion that the brownout period of a live migration
> (or the time for the host kernel to kexec in the case of live update) 
> can be considered a suspend... but regardless of whether that makes it
> mandatory to stop the clocks, I prefer to see it a different way. 
> 
> In normal operation — especially with CPU overcommit and/or throttling
> — there are times when none of the guest's vCPUs will be scheduled for
> short periods of time. We don't *have* to stop CLOCK_MONOTONIC when
> that happens, do we?
> 

You're absolutely right. We've at least accepted that MONOTONIC
behaves this way for time lost to host scheduling, and we expose that
lost time to the guest scheduler through steal_time.

> If we want live migration to be guest-transparent, shouldn't we treat
> it as much as possible as one of those times when the vCPUs just happen
> not to be running for a moment?

There is still a subtle difference between host scheduler pressure and
live migration. Under scheduler pressure it's hard to state crisply
whether the VM is actually suspended, as any one of its vCPU threads
could still be running. Migration is one of those events where we
positively know the guest isn't running at all.

> On a live update, where the host does a kexec and then resumes the
> guest state, the host TSC reference is precisely the same as before the
> update. We basically don't want to change *anything* that the guest sees
> in its pvclock information. In fact, we have a local patch to
> 'KVM_SET_CLOCK_FROM_GUEST' for the live update case, which ensures
> exactly that. We then add a delta to the guest TSC as we create each
> vCPU in the new KVM; the *offset* interface would be beneficial to us
> here (since that offset doesn't change) but we're not using it yet.
> 
> For live migration, the same applies — we can just add a delta to the
> clock and the guest TSC values, commensurate with the amount of
> wallclock time that elapsed from serialisation on the source host, to
> deserialisation on the destination.
> 
> And it all looks *just* like it would if the vCPUs happened not to be
> scheduled for a little while, because the host was busy.

We could continue to get away with TSC advancement, but the critical
part IMO is the upper bound, and what happens when we exceed it.

There is no authoritative documentation for what time looks like as a
guest of KVM, and furthermore for what happens when a guest experiences
time travel. Now we're in a particularly undesirable situation where
there are at least three known definitions of time during a migration
(upstream QEMU, Google, Amazon), and it is ~impossible to program guest
software to anticipate our shenanigans.

If we are to do this right, we probably need to agree on documented
behavior. If we decide that advancing TSCs is acceptable for up to 'X'
seconds, guest kernels could take a change to relax their expectations
at least up to that value.
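To make that concrete, a VMM-side sketch (X_MAX_ADVANCE_SEC is a
hypothetical agreed bound; tsc_khz and elapsed_ns are assumed to come
from the migration stream):

#include <stdint.h>

#define X_MAX_ADVANCE_SEC	5		/* hypothetical documented bound */
#define NSEC_PER_SEC		1000000000ULL

/*
 * Advance the guest TSC offset by the wall-clock time that elapsed
 * between serialisation on the source and deserialisation on the
 * destination. Returns -1 when the elapsed time exceeds the bound,
 * i.e. the VMM must fall back to telling the guest instead.
 */
static int advance_tsc_offset(uint64_t elapsed_ns, uint32_t tsc_khz,
			      int64_t *tsc_offset)
{
	if (elapsed_ns > (uint64_t)X_MAX_ADVANCE_SEC * NSEC_PER_SEC)
		return -1;

	/* tsc_khz is in kHz, so ticks = ns * khz / 10^6 */
	*tsc_offset += (int64_t)(elapsed_ns * tsc_khz / 1000000ULL);
	return 0;
}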

> > > The KVM_PVCLOCK_STOPPED event should trigger a change in some of the
> > > globals kept by kernel/time/ntp.c (which are visible to userspace through
> > > adjtimex(2)). In particular, `time_esterror` and `time_maxerror` should get reset
> > > to `NTP_PHASE_LIMIT` and `time_status` should get reset to `STA_UNSYNC`.
> > 
> > I do not disagree that NTP needs to throw the book out after a live
> > migration.
> > 
> > But, the issue is how we convey that to the guest. KVM_PVCLOCK_STOPPED
> > relies on the guest polling a shared structure, and who knows when the
> > guest is going to check the structure again? If we inject an interrupt
> > the guest is likely to check this state in a reasonable amount of time.
> 
> Ah, but that's the point. A flag in shared memory can be checked
> whenever the guest needs to know that it's operating on valid state.
> Linux will check it *every* time from pvclock_clocksource_read().
> 
> As opposed to a separate interrupt which eventually gets processed some
> indefinite amount of time in the future.

There are a few annoying things with pvclock, though. It is a per-vCPU
structure, so special care must be taken to act exactly once on a
migration. Also, since commit 7539b174aef4 ("x86: kvmguest: use TSC
clocksource if invariant TSC is exposed") the guest kernel may pick the
TSC over the pvclock by default, so it's hard to say when the pvclock
structure will be checked again. This is what I had in mind when
suggesting a doorbell is needed, as there is no good way to know which
clocksource the guest is using.
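To illustrate why polling is fragile, a sketch of the guest-side check
(flag values per the pvclock ABI; the handler body is illustrative):

#include <stdbool.h>
#include <stdint.h>

#define PVCLOCK_TSC_STABLE_BIT	(1 << 0)
#define PVCLOCK_GUEST_STOPPED	(1 << 1)

struct pvclock_vcpu_time_info {	/* packed in the real ABI */
	uint32_t version;
	uint32_t pad0;
	uint64_t tsc_timestamp;
	uint64_t system_time;
	uint32_t tsc_to_system_mul;
	int8_t   tsc_shift;
	uint8_t  flags;
	uint8_t  pad[2];
};

/*
 * Only runs when the guest actually reads its per-vCPU pvclock page.
 * If the guest is on the raw TSC clocksource, that may be a long time
 * after the migration, hence the desire for a doorbell.
 */
static bool check_guest_stopped(struct pvclock_vcpu_time_info *ti)
{
	if (ti->flags & PVCLOCK_GUEST_STOPPED) {
		ti->flags &= ~PVCLOCK_GUEST_STOPPED;	/* act exactly once */
		/* e.g. mark NTP unsynced, touch watchdogs */
		return true;
	}
	return false;
}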

> > Doing this the other way around (advance the TSC, tell the guest to fix
> > MONOTONIC) is fundamentally wrong, as it violates two invariants of the
> > monotonic clock. Monotonic counts during a migration, which really is a
> > forced suspend. Additionally, you cannot step the monotonic clock.
> > 
> I don't understand why we can't "step the monotonic clock". Any time
> we merely refrain from scheduling the vCPUs for a period of time,
> that is surely indistinguishable from a "step" in the monotonic
> clock?

Right, there is some nebulous threshold that we've implicitly accepted
as a tolerable amount of time to 'step' the monotonic clock. I think
that everything to date is built around the assumption that it is a
small amount of time, O(timeslice). Pinning down the upper bound should
at least make clocks more predictable under virtualization.

> > Sorry to revisit this conversation yet again. Virtualization isn't going
> > away any time soon and the illusion that migrations are invisible to the
> > guest is simply not true.
> 
> I'll give you the assertion that migrations aren't completely
> invisible, but I still think they should be *equivalent* to the vCPU
> just not being scheduled for a moment.

I sure hope that migrations are fast enough to be indistinguishable
from scheduler pressure; the situations where that is not the case are
the particularly interesting ones. Defining a limit and having a
mechanism for remedial action could make things more predictable for
guest software.

But agreed, and shame on us for the broken virtual hardware when that
isn't the case :-)

--
Thanks,
Oliver



