When steal time exceeds the measured delta when updating clock_task, we
currently try to catch up the excess in future updates. However, this
results in inaccurate run times for future users of clock_task, as they
end up getting charged for steal time that did not actually happen while
they ran.

For example, suppose a task in a VM runs for 10ms and has 15ms of steal
time reported while it runs. clock_task rightly doesn't advance. Then, a
different task runs on the same rq for 10ms without any time stolen in
the host. Because of the current catch-up mechanism, clock_task
inaccurately ends up advancing by only 5ms instead of 10ms, even though
there wasn't any actual time stolen. The second task is charged for less
time than it actually ran, and can thus end up with more run time than
it should get.

So, instead, don't make future updates pay back past excess stolen time.

Signed-off-by: Suleiman Souhlal <suleiman@xxxxxxxxxx>
---
v2:
- Slightly changed to simply move one line up instead of adding a new
  variable.

v1: https://lore.kernel.org/lkml/20240806111157.1336532-1-suleiman@xxxxxxxxxx
---
 kernel/sched/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index f3951e4a55e5..6c34de8b3fbb 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -730,11 +730,11 @@ static void update_rq_clock_task(struct rq *rq, s64 delta)
 	if (static_key_false((&paravirt_steal_rq_enabled))) {
 		steal = paravirt_steal_clock(cpu_of(rq));
 		steal -= rq->prev_steal_time_rq;
+		rq->prev_steal_time_rq += steal;
 
 		if (unlikely(steal > delta))
 			steal = delta;
 
-		rq->prev_steal_time_rq += steal;
 		delta -= steal;
 	}
 #endif
-- 
2.46.0.598.g6f2099f65c-goog