question on mlx5 spinlock "in_use" checking

The rdma-core mlx5 provider has the code:

static inline int mlx5_spin_lock(struct mlx5_spinlock *lock)
{
        if (lock->need_lock)
                return pthread_spin_lock(&lock->lock);

        if (unlikely(lock->in_use)) {
                fprintf(stderr, "*** ERROR: multithreading violation ***\n"
                        "You are running a multithreaded application but\n"
                        "you set MLX5_SINGLE_THREADED=1. Please unset it.\n");
                abort();
        } else {
                lock->in_use = 1;
                /*
                 * This fence is not at all correct, but it increases the
                 * chance that in_use is detected by another thread without
                 * much runtime cost. */
                atomic_thread_fence(memory_order_acq_rel);
        }

        return 0;
}

static inline int mlx5_spin_unlock(struct mlx5_spinlock *lock)
{
        if (lock->need_lock)
                return pthread_spin_unlock(&lock->lock);
	/* a second thread calling mlx5_spin_lock() here still sees in_use == 1 */
        lock->in_use = 0;

        return 0;
}

What prevents one thread from calling mlx5_spin_unlock(), and then
having a second thread call mlx5_spin_lock() and observe in_use == 1
before the first thread gets to set in_use = 0?

This seems like a hard-to-hit but high-impact bug: an application that
lands in that window will spuriously call abort().
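For comparison, here is what the check looks like if the test-and-set is
made a single atomic step with C11 atomics. This is just a sketch with
hypothetical names (dbg_spinlock, dbg_spin_lock), not a proposed rdma-core
patch; note it makes the lock-side check indivisible (so two simultaneous
lockers can't both slip past it), but by itself it does not remove the
unlock-side window described above:

```c
#include <stdatomic.h>
#include <stdio.h>

/* Hypothetical debug lock; only the in_use bookkeeping is shown. */
struct dbg_spinlock {
	atomic_int in_use;
};

static inline int dbg_spin_lock(struct dbg_spinlock *lock)
{
	/*
	 * atomic_exchange makes "read the old value and write 1" one
	 * indivisible operation, so two threads entering here at the
	 * same time cannot both observe in_use == 0.
	 */
	if (atomic_exchange(&lock->in_use, 1)) {
		fprintf(stderr, "*** ERROR: multithreading violation ***\n");
		return -1;	/* the real code would abort() here */
	}
	return 0;
}

static inline int dbg_spin_unlock(struct dbg_spinlock *lock)
{
	/* A plain atomic store; the race window before this store
	 * is exactly the one the question is about. */
	atomic_store(&lock->in_use, 0);
	return 0;
}
```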

 - R.
--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


