The scheduler runqueue maintains its own software clock that is
periodically synchronised with the hardware clock. Export this clock so
that interrupt flood detection can use it and avoid the cost of reading
the hardware clock.

Cc: Long Li <longli@xxxxxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Keith Busch <keith.busch@xxxxxxxxx>
Cc: Christoph Hellwig <hch@xxxxxx>
Cc: Sagi Grimberg <sagi@xxxxxxxxxxx>
Cc: John Garry <john.garry@xxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Hannes Reinecke <hare@xxxxxxxx>
Signed-off-by: Ming Lei <ming.lei@xxxxxxxxxx>
---
 include/linux/sched.h | 2 ++
 kernel/sched/core.c   | 5 +++++
 2 files changed, 7 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 467d26046416..efe1a3ec0e9e 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2011,4 +2011,6 @@ int sched_trace_rq_cpu(struct rq *rq);
 
 const struct cpumask *sched_trace_rd_span(struct root_domain *rd);
 
+u64 sched_local_rq_clock(void);
+
 #endif
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 90e4b00ace89..03e2e3c36067 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -219,6 +219,11 @@ void update_rq_clock(struct rq *rq)
 	update_rq_clock_task(rq, delta);
 }
 
+u64 sched_local_rq_clock(void)
+{
+	return this_rq()->clock;
+}
+EXPORT_SYMBOL_GPL(sched_local_rq_clock);
 #ifdef CONFIG_SCHED_HRTICK
 
 /*
-- 
2.20.1
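
For reference, a minimal sketch of how a consumer might use the exported
clock. The irq_flood_detect() helper, the per-CPU state, the window
length and the threshold below are hypothetical and illustrative only;
they are not part of this patch or of the actual interrupt flood
detection code:

/*
 * Hypothetical consumer sketch: use sched_local_rq_clock() to get a
 * "recent enough" timestamp in hard interrupt context without reading
 * the hardware clocksource on every interrupt.
 */
#include <linux/sched.h>
#include <linux/percpu.h>

struct irq_flood_state {
	u64	window_start;	/* start of the current observation window */
	u64	irq_count;	/* interrupts seen in this window */
};

static DEFINE_PER_CPU(struct irq_flood_state, irq_flood);

#define IRQ_FLOOD_WINDOW_NS	(10ULL * 1000 * 1000)	/* 10 ms, illustrative */
#define IRQ_FLOOD_THRESHOLD	100000ULL		/* illustrative */

/* Called from hard interrupt context, so this_cpu_ptr() is safe here. */
static bool irq_flood_detect(void)
{
	struct irq_flood_state *st = this_cpu_ptr(&irq_flood);
	/* Cheap: reuse the runqueue software clock instead of the hardware clock. */
	u64 now = sched_local_rq_clock();
	bool flooded = false;

	st->irq_count++;
	if (now - st->window_start >= IRQ_FLOOD_WINDOW_NS) {
		flooded = st->irq_count >= IRQ_FLOOD_THRESHOLD;
		st->window_start = now;
		st->irq_count = 0;
	}
	return flooded;
}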