On Tue, Aug 27, 2019 at 04:42:02PM +0200, Thomas Gleixner wrote:
> On Tue, 27 Aug 2019, Ming Lei wrote:
> > +/*
> > + * Update average irq interval with the Exponential Weighted Moving
> > + * Average (EWMA)
> > + */
> > +static void irq_update_interval(void)
> > +{
> > +#define IRQ_INTERVAL_EWMA_WEIGHT	128
> > +#define IRQ_INTERVAL_EWMA_PREV_FACTOR	127
> > +#define IRQ_INTERVAL_EWMA_CURR_FACTOR	(IRQ_INTERVAL_EWMA_WEIGHT - \
> > +		IRQ_INTERVAL_EWMA_PREV_FACTOR)
>
> Please do not stick defines into a function body. That's horrible.

OK.

> > +
> > +	int cpu = raw_smp_processor_id();
> > +	struct irq_interval *inter = per_cpu_ptr(&avg_irq_interval, cpu);
> > +	u64 delta = sched_clock_cpu(cpu) - inter->last_irq_end;
>
> Why are you doing that raw_smp_processor_id() dance? The call site has
> interrupts and preemption disabled.

OK, will change to __smp_processor_id().

> Also, how is that supposed to work when sched_clock is jiffies based?

Good catch; it looks like ktime_get_ns() is needed.

> > +	inter->avg = (inter->avg * IRQ_INTERVAL_EWMA_PREV_FACTOR +
> > +			delta * IRQ_INTERVAL_EWMA_CURR_FACTOR) /
> > +		IRQ_INTERVAL_EWMA_WEIGHT;
>
> We definitely are not going to have a 64-bit multiplication and division on
> every interrupt. Aside from that, this breaks 32-bit builds all over the place.

I will convert the above into add/subtract/shift only.
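Just for discussion, a rough sketch of what the shift-only variant might
look like, with ktime_get_ns() and __smp_processor_id() folded in as
mentioned above. The struct fields and the per-cpu variable name are
taken from the quoted hunk; everything else is only illustrative, and
the shift of 7 corresponds to the weight of 128:

	#include <linux/ktime.h>
	#include <linux/percpu.h>
	#include <linux/smp.h>

	#define IRQ_INTERVAL_EWMA_SHIFT	7	/* EWMA weight = 1 << 7 = 128 */

	struct irq_interval {
		u64	last_irq_end;
		u64	avg;
	};

	static DEFINE_PER_CPU(struct irq_interval, avg_irq_interval);

	static void irq_update_interval(void)
	{
		/* The call site runs with interrupts and preemption disabled. */
		struct irq_interval *inter = per_cpu_ptr(&avg_irq_interval,
							 __smp_processor_id());
		u64 delta = ktime_get_ns() - inter->last_irq_end;

		/*
		 * avg = (avg * 127 + delta) / 128 approximated by
		 * avg = avg - avg/128 + delta/128: no 64-bit multiplication
		 * or division, so 32-bit builds are fine.  The low 7 bits of
		 * delta are rounded away, which should be acceptable for an
		 * average of interrupt intervals measured in nanoseconds.
		 */
		inter->avg -= inter->avg >> IRQ_INTERVAL_EWMA_SHIFT;
		inter->avg += delta >> IRQ_INTERVAL_EWMA_SHIFT;
	}

thanks,
Ming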