On Mon, 2020-01-13 at 13:01 +0000, Andrew Cooper wrote:
> On 13/01/2020 11:43, Singh, Balbir wrote:
> > On Mon, 2020-01-13 at 11:16 +0100, Peter Zijlstra wrote:
> > > On Fri, Jan 10, 2020 at 07:35:20AM -0800, Eduardo Valentin wrote:
> > > > Hey Peter,
> > > >
> > > > On Wed, Jan 08, 2020 at 11:50:11AM +0100, Peter Zijlstra wrote:
> > > > > On Tue, Jan 07, 2020 at 11:45:26PM +0000, Anchal Agarwal wrote:
> > > > > > From: Eduardo Valentin <eduval@xxxxxxxxxx>
> > > > > >
> > > > > > System instability is seen during resume from hibernation when the
> > > > > > system is under heavy CPU load. This is due to the lack of an update
> > > > > > of the sched clock data; the scheduler then thinks that heavy
> > > > > > CPU-hog tasks need more time on the CPU, causing the system to
> > > > > > freeze during the unfreezing of tasks. For example, threaded IRQs
> > > > > > and kernel processes servicing a network interface may be delayed
> > > > > > for several tens of seconds, making the system unreachable. The fix
> > > > > > for this situation is to mark the sched clock as unstable as early
> > > > > > as possible in the resume path, leaving it unstable for the
> > > > > > duration of the resume process. This forces the scheduler to
> > > > > > attempt to align the sched clock across CPUs using the delta with
> > > > > > time of day, updating the sched clock data. In a post-hibernation
> > > > > > event, we can then mark the sched clock as stable again, avoiding
> > > > > > unnecessary syncs with time of day on systems in which the TSC is
> > > > > > reliable.
> > > > >
> > > > > This makes no frigging sense what so bloody ever. If the clock is
> > > > > stable, we don't care about sched_clock_data. When it is stable you
> > > > > get a linear function of the TSC without complicated bits on.
> > > > >
> > > > > When it is unstable, only then do we care about the sched_clock_data.
> > > >
> > > > Yeah, maybe what is not clear here is that we're covering for the
> > > > situation where clock stability changes over time, e.g. at regular
> > > > boot the clock is stable, hibernation happens, then restore happens
> > > > with a non-stable clock.
> > >
> > > Still confused: who marks the thing unstable? The patch seems to
> > > suggest you do it yourself, but it is not at all clear why.
> > >
> > > If the TSC really is unstable, then it needs to remain unstable. If the
> > > TSC really is stable then there is no point in marking it unstable.
> > >
> > > Either way something is off, and you're not telling me what.
> >
> > Hi Peter,
> >
> > For your original comment, just wanted to clarify the following:
> >
> > 1. After hibernation, the machine can be resumed on a different but
> >    compatible host (these are hibernated VM images)
> > 2. This means the clock between host1 and host2 can/will be different
>
> The guest's TSC value is part of all save/migrate/resume state. Given
> this bug, I presume you've actually discarded all register state on
> hibernate, and the TSC is starting again from 0?

Right. This is a guest-driven suspend to disk, followed by starting up
later on a different (but identical) host. There is no guest state being
saved as part of a Xen save/restore.

> The frequency of the new TSC might very likely be different, but the
> scale/offset in the paravirtual clock information should let Linux's
> view of time stay consistent.

The frequency as seen by the guest really needs to be the same. The
hibernated instance may only be booted again on a host which would have
been suitable for live migration: either because the TSC frequency *is*
the same, or with TSC scaling to make it appear that way. If the
environment doesn't provide that, then all bets are off and we shouldn't
be trying to hack around it in the guest kernel.

Across the hibernation we do expect a single step change in the TSC
value, just as on real hardware.
Like Peter, I assume that the resume code does cope with that but haven't checked precisely how/where it does so.