This is a note to let you know that I have just added a patch titled

    MIPS: hpet: Choose a safe value for the ETIME check

to the linux-4.2.y-queue branch of the 4.2.y-ckt extended stable tree
which can be found at:

    http://kernel.ubuntu.com/git/ubuntu/linux.git/log/?h=linux-4.2.y-queue

This patch is scheduled to be released in version 4.2.8-ckt3.

If you, or anyone else, feel it should not be added to this tree, please
reply to this email.

For more information about the 4.2.y-ckt tree, see
https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable

Thanks.
-Kamal

---8<------------------------------------------------------------

>From bae37b5852e3e2b44ff796fca0a4318d479a807a Mon Sep 17 00:00:00 2001
From: Huacai Chen <chenhc@xxxxxxxxxx>
Date: Thu, 21 Jan 2016 21:09:50 +0800
Subject: MIPS: hpet: Choose a safe value for the ETIME check

commit 5610b1254e3689b6ef8ebe2db260709a74da06c8 upstream.

This patch is borrowed from the x86 hpet driver and explained below:

Due to the overly intelligent design of HPETs, we need to work around
the problem that the compare value which we write is already behind the
actual counter value at the point where the value hits the real compare
register. This happens for two reasons:

1) We read out the counter, add the delta and write the result to the
   compare register. When an NMI hits between the read out and the
   write, then the counter can be ahead of the event already.

2) The write to the compare register is delayed by up to two HPET
   cycles in AMD chipsets.

We can work around this by reading back the compare register to make
sure that the written value has hit the hardware. But that is bad
performance-wise for the normal case where the event is far enough in
the future.

As we already know that the write can be delayed by up to two cycles,
we can avoid the read back of the compare register completely if we
base the decision whether the delta has already elapsed on the
following calculation:

    cmp = event - actual_count;

If cmp is less than 64 HPET clock cycles, then we decide that the event
has happened already and return -ETIME. That covers the above #1 and #2
problems, which would otherwise cause a wait for HPET wraparound
(~306 seconds).

Signed-off-by: Huacai Chen <chenhc@xxxxxxxxxx>
Cc: Aurelien Jarno <aurelien@xxxxxxxxxxx>
Cc: Steven J. Hill <Steven.Hill@xxxxxxxxxx>
Cc: Fuxin Zhang <zhangfx@xxxxxxxxxx>
Cc: Zhangjin Wu <wuzhangjin@xxxxxxxxx>
Cc: Huacai Chen <chenhc@xxxxxxxxxx>
Cc: linux-mips@xxxxxxxxxxxxxx
Patchwork: https://patchwork.linux-mips.org/patch/12162/
Signed-off-by: Ralf Baechle <ralf@xxxxxxxxxxxxxx>
Signed-off-by: Kamal Mostafa <kamal@xxxxxxxxxxxxx>
---
 arch/mips/loongson64/loongson-3/hpet.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/arch/mips/loongson64/loongson-3/hpet.c b/arch/mips/loongson64/loongson-3/hpet.c
index 5c21cd3..b2888d5 100644
--- a/arch/mips/loongson64/loongson-3/hpet.c
+++ b/arch/mips/loongson64/loongson-3/hpet.c
@@ -13,6 +13,9 @@
 #define SMBUS_PCI_REG64		0x64
 #define SMBUS_PCI_REGB4		0xb4
 
+#define HPET_MIN_CYCLES		64
+#define HPET_MIN_PROG_DELTA	(HPET_MIN_CYCLES + (HPET_MIN_CYCLES >> 1))
+
 static DEFINE_SPINLOCK(hpet_lock);
 DEFINE_PER_CPU(struct clock_event_device, hpet_clockevent_device);
 
@@ -139,8 +142,9 @@ static int hpet_next_event(unsigned long delta,
 	cnt += delta;
 	hpet_write(HPET_T0_CMP, cnt);
 
-	res = ((int)(hpet_read(HPET_COUNTER) - cnt) > 0) ? -ETIME : 0;
-	return res;
+	res = (int)(cnt - hpet_read(HPET_COUNTER));
+
+	return res < HPET_MIN_CYCLES ? -ETIME : 0;
 }
 
 static irqreturn_t hpet_irq_handler(int irq, void *data)
@@ -212,7 +216,7 @@ void __init setup_hpet_timer(void)
 	cd->cpumask = cpumask_of(cpu);
 	clockevent_set_clock(cd, HPET_FREQ);
 	cd->max_delta_ns = clockevent_delta2ns(0x7fffffff, cd);
-	cd->min_delta_ns = 5000;
+	cd->min_delta_ns = clockevent_delta2ns(HPET_MIN_PROG_DELTA, cd);
 	clockevents_register_device(cd);
 
 	setup_irq(HPET_T0_IRQ, &hpet_irq);
--
1.9.1
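
For readers less familiar with the wrap-safe comparison idiom the patch relies
on, here is a minimal standalone userspace C sketch (not part of the patch; the
function name and the sample counter values below are purely illustrative) of
the decision hpet_next_event() now makes:

    #include <stdio.h>
    #include <stdint.h>

    #define HPET_MIN_CYCLES 64

    /*
     * Illustrative stand-in for the check in hpet_next_event(): 'cmp' is the
     * value written to the compare register, 'counter' is the free-running
     * 32-bit HPET main counter. The signed difference is wrap-safe, and any
     * delta smaller than HPET_MIN_CYCLES is treated as already expired.
     */
    static int next_event_status(uint32_t cmp, uint32_t counter)
    {
            int32_t res = (int32_t)(cmp - counter);

            return res < HPET_MIN_CYCLES ? -1 : 0;  /* -1 stands in for -ETIME */
    }

    int main(void)
    {
            printf("%d\n", next_event_status(1000, 900));        /* far enough: 0    */
            printf("%d\n", next_event_status(1000, 990));        /* too close: -1    */
            printf("%d\n", next_event_status(1000, 1010));       /* already past: -1 */
            printf("%d\n", next_event_status(50, 0xfffffff0u));  /* across wrap: 0   */
            return 0;
    }

Computing the signed difference cmp - counter (rather than comparing the raw
unsigned values) keeps the test correct across 32-bit counter wraparound, and
the 64-cycle margin absorbs both an NMI landing between the counter read and
the compare write and the up-to-two-cycle write delay described in the commit
message.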