On Friday 22 April 2016 07:46 PM, Peter Zijlstra wrote:
> On Fri, Apr 22, 2016 at 10:50:41AM +0000, Vineet Gupta wrote:
>
>>> > > +#define ATOMIC_FETCH_OP(op, c_op, asm_op)                        \
>>> > > +static inline int atomic_fetch_##op(int i, atomic_t *v)          \
>>> > > +{                                                                \
>>> > > +       unsigned int val, result;                                 \
>>> > > +       SCOND_FAIL_RETRY_VAR_DEF                                  \
>>> > > +                                                                 \
>>> > > +       /*                                                        \
>>> > > +        * Explicit full memory barrier needed before/after as    \
>>> > > +        * LLOCK/SCOND themselves don't provide any such semantics \
>>> > > +        */                                                       \
>>> > > +       smp_mb();                                                 \
>>> > > +                                                                 \
>>> > > +       __asm__ __volatile__(                                     \
>>> > > +       "1:     llock   %[val], [%[ctr]]                \n"       \
>>> > > +       "       mov     %[result], %[val]               \n"       \
>> >
>> > Calling it "result" could be a bit confusing: this is meant to be the
>> > "orig" value. So while it is indeed the "result" of the API, for the
>> > atomic operation it is the pristine value.
>> >
>> > Also, we can optimize away that MOV -- given there are plenty of regs -- so
>> >
>>> > > +       "       " #asm_op " %[val], %[val], %[i]        \n"       \
>>> > > +       "       scond   %[val], [%[ctr]]                \n"       \
>> >
>> > instead have:
>> >
>> > +       "       " #asm_op " %[result], %[val], %[i]        \n"       \
>> > +       "       scond   %[result], [%[ctr]]                \n"       \
>> >
>
> Indeed, how about something like so?
>
> ---
> Subject: locking,arc: Implement atomic_fetch_{add,sub,and,andnot,or,xor}()
> From: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> Date: Mon Apr 18 01:16:09 CEST 2016
>
> Implement the FETCH-OP atomic primitives; these are very similar to the
> existing OP-RETURN primitives we already have, except they return the
> value of the atomic variable _before_ modification.
>
> This is especially useful for irreversible operations -- such as
> bitops (because it becomes impossible to reconstruct the state prior
> to modification).
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>

Acked-by: Vineet Gupta <vgupta@xxxxxxxxxxxx>
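
For reference, a minimal sketch of how the macro might look with the MOV
dropped as suggested above -- with "result" renamed to "orig", and the
SCOND_FAIL_RETRY backoff plumbing replaced by a plain "bnz 1b" retry for
brevity -- so this is illustrative rather than the exact patch:

#define ATOMIC_FETCH_OP(op, c_op, asm_op)                               \
static inline int atomic_fetch_##op(int i, atomic_t *v)                 \
{                                                                       \
        unsigned int val, orig;                                         \
                                                                        \
        /*                                                              \
         * Explicit full memory barrier needed before/after as          \
         * LLOCK/SCOND themselves don't provide any such semantics      \
         */                                                             \
        smp_mb();                                                       \
                                                                        \
        __asm__ __volatile__(                                           \
        "1:     llock   %[orig], [%[ctr]]               \n"             \
        "       " #asm_op " %[val], %[orig], %[i]       \n"             \
        "       scond   %[val], [%[ctr]]                \n"             \
        "       bnz     1b                              \n"             \
        : [val]  "=&r" (val),   /* scratch: new value to store */      \
          [orig] "=&r" (orig)   /* pristine value, returned */          \
        : [ctr]  "r"   (&v->counter),                                   \
          [i]    "ir"  (i)                                              \
        : "cc");                                                        \
                                                                        \
        smp_mb();                                                       \
                                                                        \
        return orig;                                                    \
}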
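
To make the changelog's point about irreversible operations concrete: for
a reversible op like add, the old value is recoverable from the OP-RETURN
result (old = new - i), but and/andnot/or destroy bits, so only a FETCH-OP
can report the prior state. A hypothetical helper built on the new
primitives (test_and_clear_mask() is illustrative only, not part of the
patch; assumes linux/atomic.h):

static inline bool test_and_clear_mask(atomic_t *v, int mask)
{
        /*
         * atomic_fetch_andnot() returns the value *before* the bits in
         * @mask were cleared; that old state would be unrecoverable
         * from an andnot-RETURN style primitive.
         */
        return (atomic_fetch_andnot(mask, v) & mask) != 0;
}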