spin lock struct and locking (2.6.9)

I am looking into the spin lock structure and locking mechanism as defined
in include/linux/spinlock.h and asm/spinlock.h:
typedef struct {
	unsigned long magic;
	volatile unsigned long lock;
	volatile unsigned int babble;
	const char *module;
	char *owner;
	int oline;
} spinlock_t;

I find that all the locking/unlocking mechanisms are nothing but plain
decrementing/incrementing operations. Of course, they are done with local
interrupts disabled.

#define _spin_lock_irqsave(lock, flags) \
do {    \
	local_irq_save(flags); \
	preempt_disable(); \
	_raw_spin_lock(lock); \
} while (0)

_raw_spin_lock() is,

#define _raw_spin_lock(x)               \
	do { \
	 	CHECK_LOCK(x); \
		if ((x)->lock && (x)->babble) { \
			(x)->babble--; \
			printk("%s:%d: spin_lock(%s:%p) already locked by %s/%d\n", \
					__FILE__,__LINE__, (x)->module, \
					(x), (x)->owner, (x)->oline); \
		} \
		(x)->lock = 1; \
		(x)->owner = __FILE__; \
		(x)->oline = __LINE__; \
	} while (0)

In an SMP environment, what prevents two different CPUs from acquiring the
same spin lock at exactly the same time? I was expecting `lock` to be
atomic, and the increment and decrement operations to be atomic as well.

Regards,
Om. 

--
Kernelnewbies: Help each other learn about the Linux kernel.
Archive:       http://mail.nl.linux.org/kernelnewbies/
FAQ:           http://kernelnewbies.org/faq/

