Hi Christoph,

On Tue, Sep 03, 2013 at 03:39:57PM +0100, Christoph Lameter wrote:
> On Fri, 30 Aug 2013, Will Deacon wrote:
> > ...so I don't think this is quite right, and indeed, we get a bunch of errors
> > from GCC:
> >
> > arch/arm/kernel/hw_breakpoint.c: In function ‘arch_install_hw_breakpoint’:
> > arch/arm/kernel/hw_breakpoint.c:347:33: error: incompatible types when assigning to type ‘struct perf_event *[16]’ from type ‘struct perf_event **’
> > arch/arm/kernel/hw_breakpoint.c:347:1: error: incompatible types when assigning to type ‘struct perf_event *[16]’ from type ‘struct perf_event **’
> > arch/arm/kernel/hw_breakpoint.c:347:1: error: incompatible types when assigning to type ‘struct perf_event *[16]’ from type ‘struct perf_event **’
> > arch/arm/kernel/hw_breakpoint.c:347:1: error: incompatible types when assigning to type ‘struct perf_event *[16]’ from type ‘struct perf_event **’
>
> Did you apply the first patch of this series which is a bug fix?

No, sorry, I didn't see that. Do you have a branch anywhere that I can
play with?

> > changing to match your recipe still doesn't work, however:
> >
> > arch/arm/kernel/hw_breakpoint.c: In function ‘arch_install_hw_breakpoint’:
> > arch/arm/kernel/hw_breakpoint.c:347:33: error: cast specifies array type
>
> Yep that is the macro bug that was fixed in the first patch.

Ok. Sorry for the noise.

> > >  	WARN_ON(preemptible());
> > >
> > > -	if (local_inc_return(&__get_cpu_var(mde_ref_count)) == 1)
> > > +	if (this_cpu_inc_return(mde_ref_count) == 1)
> > >  		enable = DBG_MDSCR_MDE;
> >
> > I'm not sure that this is safe. We rely on local_inc_return to be atomic
> > with respect to the current CPU, which will end up being a wrapper around
> > atomic64_inc_return. However, this_cpu_inc_return simply uses a lock, so
> > other people accessing the count in a different manner (local_dec_and_test
> > below) may break local atomicity unless we start disabling interrupts or
> > something horrible like that.
>
> I do not see any special code for ARM for this_cpu_inc_return. The
> fallback solution in the core code is to disable interrupts for the
> inc_return and arch/arm/include/asm/percpu.h includes
> asm-generic/percpu.h.
>
> Where did you see it using a lock?

God knows! You're completely right, and we simply disable interrupts, which
I somehow misread as taking a lock.

However, is it guaranteed that mixing an atomic64_* access with a
this_cpu_inc_return will retain atomicity between the two? E.g. if you get
interrupted during an atomic64_xchg operation, the interrupt handler issues
this_cpu_inc_return, then on return to the xchg operation it must reissue
any reads that had been executed prior to the interrupt. This should work
on ARM/ARM64 (returning from the interrupt will clear the exclusive monitor)
but I don't know about other architectures.

Will
--
To unsubscribe from this list: send the line "unsubscribe linux-arch" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
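
As background to the exchange above: the generic fallback Christoph refers
to looks roughly like the sketch below. This is illustrative only; it is
modelled on the this_cpu_* fallbacks that apply when an architecture has no
special implementation, and the hypothetical macro name here (plus the exact
helpers used) differ from the real kernel macros, which vary between
versions. The point is that local atomicity comes from disabling interrupts
around a plain per-CPU read-modify-write, not from a lock:

	/*
	 * Illustrative sketch only, assuming kernel helpers from
	 * <linux/percpu.h> and <linux/irqflags.h>; not the exact
	 * upstream macro.
	 */
	#define my_this_cpu_generic_add_return(pcp, val)		\
	({								\
		typeof(pcp) ret__;					\
		unsigned long flags__;					\
		raw_local_irq_save(flags__);	/* no IRQ can interleave */ \
		__this_cpu_add(pcp, val);	/* plain RMW on this CPU */ \
		ret__ = __this_cpu_read(pcp);				\
		raw_local_irq_restore(flags__);				\
		ret__;							\
	})

	/* this_cpu_inc_return(pcp) is then just the add_return with 1 */

On the second question raised above, the reason the mix is safe on ARM/ARM64
is the point Will makes: an LL/SC-based atomic64_* sequence that is
interrupted loses its exclusive monitor across the exception return, so the
store-exclusive fails and the loop retries with freshly loaded values.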