Re: KVM Arm64 and Linux-RT issues

Hi Sebastian,

On 19/08/2019 08:33, Sebastian Andrzej Siewior wrote:
On 2019-08-16 17:32:38 [+0100], Julien Grall wrote:
Hi Sebastian,
Hi Julien,

hrtimer_callback_running() will return true as the callback is running
somewhere else. This means hrtimer_try_to_cancel() would return -1, and
therefore hrtimer_grab_expiry_lock() would be called.
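
(For reference, the retry loop that ends up in this path looks roughly
like the sketch below; it is simplified from the RT hrtimer code, so
treat it as an illustration rather than the exact code in the tree in
question.)

int hrtimer_cancel(struct hrtimer *timer)
{
	int ret;

	do {
		ret = hrtimer_try_to_cancel(timer);

		/*
		 * -1 means the callback is still running somewhere, so
		 * instead of busy-spinning we block on the expiry lock
		 * until the handler is done.
		 */
		if (ret < 0)
			hrtimer_grab_expiry_lock(timer);
	} while (ret < 0);

	return ret;
}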

Did I miss anything?

nope, you are right. I assumed that we had code to deal with this but
didn't find it…

diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
index 7d7db88021311..40d83c709503e 100644
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -934,7 +934,7 @@ void hrtimer_grab_expiry_lock(const struct hrtimer *timer)
  {
  	struct hrtimer_clock_base *base = timer->base;
-	if (base && base->cpu_base) {
+	if (base && base->cpu_base && base->index < MASK_SHIFT) {

Lower indexes are used for hard interrupts, so this would need to be base->index >= MASK_SHIFT.
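
(For reference, the base layout as I read it in include/linux/hrtimer.h
and kernel/time/hrtimer.c; quoting from memory, so worth double-checking:)

enum hrtimer_base_type {
	HRTIMER_BASE_MONOTONIC,
	HRTIMER_BASE_REALTIME,
	HRTIMER_BASE_BOOTTIME,
	HRTIMER_BASE_TAI,
	HRTIMER_BASE_MONOTONIC_SOFT,
	HRTIMER_BASE_REALTIME_SOFT,
	HRTIMER_BASE_BOOTTIME_SOFT,
	HRTIMER_BASE_TAI_SOFT,
	HRTIMER_MAX_CLOCK_BASES,
};

#define MASK_SHIFT	(HRTIMER_BASE_MONOTONIC_SOFT)

So the hard bases occupy indexes 0 .. MASK_SHIFT - 1 and the soft bases
sit at MASK_SHIFT and above.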

But I was wondering whether checking timer->is_soft would make the code more readable?

While investigating how this is meant to work, I noticed a few other things.

timer->base could potentially change under our feet at any point in time (we don't hold any lock). So it would be valid to have base == migration_base.

migration_cpu_base does not have softirq_expiry_lock initialized, so we would end up using an uninitialized lock. Note that migration_base->index is always 0, so the check base->index >= MASK_SHIFT would hide it.
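
(For context, the definition I am looking at in kernel/time/hrtimer.c is
roughly:)

/*
 * We require the migration_base for lock_hrtimer_base()/switch_hrtimer_base()
 * such that hrtimer_callback_running() can unconditionally dereference
 * timer->base->cpu_base
 */
static struct hrtimer_cpu_base migration_cpu_base = {
	.clock_base = { { .cpu_base = &migration_cpu_base, }, },
};

#define migration_base	migration_cpu_base.clock_base[0]

Neither softirq_expiry_lock nor the clock_base index is set up here, which
is where both of the issues above come from.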

Alternatively, we could initialize the spinlock for migration_cpu_base so we do not rely on a side effect of the check.
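
(Something along these lines, untested and only meant to illustrate the
idea:)

static struct hrtimer_cpu_base migration_cpu_base = {
	.softirq_expiry_lock =
		__SPIN_LOCK_UNLOCKED(migration_cpu_base.softirq_expiry_lock),
	.clock_base = { { .cpu_base = &migration_cpu_base, }, },
};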

Another potential issue is that the compiler is free to reload timer->base at any time, so I think we want an ACCESS_ONCE(...).
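
(i.e. take one snapshot and use the same pointer for both the check and
the dereference. Without it, a hypothetical bad case the compiler is
allowed to generate would be:)

	if (timer->base != &migration_base)				/* read #1 */
		spin_lock(&timer->base->cpu_base->softirq_expiry_lock);	/* read #2, may see a different base */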

Lastly, timer->base cannot be NULL. From the comment on top of migration_cpu_base, timer->base->cpu_base will not be NULL either.

So I think the function can be reworked as:

void hrtimer_grab_expiry_lock(const struct hrtimer *timer)
{
	struct hrtimer_clock_base *base = ACCESS_ONCE(timer->base);

	/* Only soft timers have (and need) a valid softirq_expiry_lock. */
	if (timer->is_soft && base != &migration_base) {
		spin_lock(&base->cpu_base->softirq_expiry_lock);
		spin_unlock(&base->cpu_base->softirq_expiry_lock);
	}
}


  		spin_lock(&base->cpu_base->softirq_expiry_lock);
  		spin_unlock(&base->cpu_base->softirq_expiry_lock);
  	}

This should deal with it.

Cheers,

Sebastian


--
Julien Grall


