2017-05-12 12:20+0200, David Hildenbrand:
> Let's provide a basic lock implementation that should work on most
> architectures.
>
> Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
> ---
>  lib/asm-generic/spinlock.h | 16 +++++++++++++++-
>  1 file changed, 15 insertions(+), 1 deletion(-)
>
> diff --git a/lib/asm-generic/spinlock.h b/lib/asm-generic/spinlock.h
> index 3141744..e8c3a58 100644
> --- a/lib/asm-generic/spinlock.h
> +++ b/lib/asm-generic/spinlock.h
> @@ -1,4 +1,18 @@
>  #ifndef _ASM_GENERIC_SPINLOCK_H_
>  #define _ASM_GENERIC_SPINLOCK_H_
> -#error need architecture specific asm/spinlock.h
> +
> +struct spinlock {
> +	unsigned int v;
> +};
> +
> +static inline void spin_lock(struct spinlock *lock)
> +{
> +	while (!__sync_bool_compare_and_swap(&lock->v, 0, 1));
> +}
> +
> +static inline void spin_unlock(struct spinlock *lock)
> +{
> +	__sync_bool_compare_and_swap(&lock->v, 1, 0);
> +}

x86 would be better with __sync_lock_test_and_set() and
__sync_lock_release(), as they generate the same code we have now,
instead of two locked cmpxchgs.

GCC mentions that some targets might have problems with that, but they
seem to fall back to a boolean value and compare-and-swap.

Any reason to avoid "while (__sync_lock_test_and_set(&lock->v, 1));"?

Thanks.