On Fri, 2009-05-01 at 18:25 -0400, Kyle McMartin wrote:
> On Fri, May 01, 2009 at 10:03:12PM +0000, James Bottomley wrote:
> > On Fri, 2009-05-01 at 17:46 -0400, Kyle McMartin wrote:
> > > On Fri, May 01, 2009 at 05:37:18PM -0400, John David Anglin wrote:
> > > > On Fri, 01 May 2009, Carlos O'Donell wrote:
> > > > >
> > > > > Historical note...
> > > > >
> > > > > We clobber all of memory in userspace, like this:
> > > > > ~~~
> > > > > #define __ldcw(a) \
> > > > > ({ \
> > > > >         unsigned int __ret; \
> > > > >         __asm__ __volatile__("ldcw 0(%1),%0" \
> > > > >                 : "=r" (__ret) : "r" (a) : "memory"); \
> > > > >         __ret; \
> > > > > })
> > > > > ~~~
> > > > > I wonder if I should change that to match the kernel?
> > > >
> > > > The above is perfectly safe.  I believe the kernel provides a memory
> > > > barrier when necessary.  There's a discussion somewhere in the mail
> > > > archives.
> > > >
> > >
> > > I, er, don't think we do, not for the spinlock primitives at least, as
> > > far as I can tell...
> >
> > Yes we do ... look in asm/spinlock.h
> >
> > It's all the mb(); statements that are scattered through our
> > _raw_spin_ ops.
> >
> > The original problem was that the spinlocks were compile-barrier leaky
> > and caused infrequent but hard to debug issues on SMP.  The barriers
> > are likely overkill (since we have two in each) but at least they
> > prevent problems.
> >
>
> Yeah, I was looking at the lack of a barrier between ldcw and the test
> of *a == 0.  I guess this would be fixed by Helge's patch.

OK, now I'm confused.  Barriers are used to inform the compiler about
interlocks it isn't aware of (like when an asm changes a variable).  The
ldcw and the *a both mention a, which is a sufficient interlock for the
compiler to get it right without any barrier.

Added to which, *a is declared volatile, which is enough of a cautionary
note to make the compiler behave in a very straightforward fashion.

James

--
To unsubscribe from this list: send the line "unsubscribe linux-parisc" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
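
For readers chasing the asm/spinlock.h reference above, here is a minimal
sketch of the ldcw-based acquire/release shape being discussed, with an mb()
on each side of the acquire loop as James describes.  It is an illustration
only, not the actual arch/parisc code: the example_* names, the stand-in mb()
and cpu_relax() definitions, and the aligned(16) lock word are assumptions
made so the fragment is self-contained (ldcw needs a 16-byte-aligned word,
and the asm only assembles for a PA-RISC target).

~~~
/*
 * Illustrative sketch only -- not the actual arch/parisc asm/spinlock.h
 * code.  mb() and cpu_relax() are stand-ins so the fragment stands alone;
 * the kernel's mb() may be stronger than a pure compiler barrier.
 */
#define mb()            __asm__ __volatile__("" : : : "memory")
#define cpu_relax()     do { } while (0)

/* Same wrapper as the quoted userspace macro. */
#define __ldcw(a) \
({ \
        unsigned int __ret; \
        __asm__ __volatile__("ldcw 0(%1),%0" \
                : "=r" (__ret) : "r" (a) : "memory"); \
        __ret; \
})

/* ldcw needs a 16-byte-aligned lock word; assumed to hold here. */
typedef struct {
        volatile unsigned int lock __attribute__((aligned(16)));
} example_spinlock_t;

static inline void example_spin_lock(example_spinlock_t *x)
{
        volatile unsigned int *a = &x->lock;

        mb();                           /* first of the two barriers per op */
        while (__ldcw(a) == 0)          /* ldcw returns 0 while the lock is held */
                while (*a == 0)         /* spin on a volatile read until it looks free */
                        cpu_relax();
        mb();                           /* second barrier, before the critical section */
}

static inline void example_spin_unlock(example_spinlock_t *x)
{
        mb();                           /* order the critical section before release */
        x->lock = 1;                    /* non-zero (1) means unlocked for an ldcw lock */
}
~~~

As the thread notes, the "memory" clobber on the ldcw asm and the volatile
read of *a already give the compiler the interlock it needs, so the explicit
barriers are arguably belt-and-braces; they simply make the intended ordering
unmistakable.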